Text mining

Definition, methods and software, and information resources for text mining.

What is text mining ... and what is not?

As of June 17, 2019, Wikipedia defines text mining as "the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities)." 

Text mining is generally distinguished from qualitative data analysis of texts, which uses software such as NVivo (licensed by AU), QDA Miner, or Atlas.ti. In qualitative data analysis, the researcher has already "consumed" the content (by reading, viewing, or hearing it) and then categorizes, reviews, and/or classifies it. In text mining, the number and/or volume of texts makes that impossible, so computerized statistical and natural language processing approaches are required. (The boundary between the two is not clearly defined, however.)

Methods and software for text mining

There are essentially two approaches to text mining:

  1. Retrieving/downloading, or attaching storage containing, the corpus of texts to be "mined" to a computer, and then analyzing it there with locally installed software or programming languages such as WordStat, SPSS Modeler Text Analytics, Python, or R. (Of these, CTRL supports the latter two, as of June 2020.) American University users planning to retrieve large numbers of documents from any of the library's subscription databases should be aware that some vendors may detect this as an unauthorized attempt to download content in excess of typical academic research needs, and may block access to the database for everyone at AU until the issue has been investigated. Therefore, any AU user who intends to download more documents than would be typical for ordinary research, such as gathering sources for a paper, should first contact the library's electronic resources management unit to discuss the project, emailing from their american.edu account.
     
  2. Bringing code that mines/analyzes the texts, written in a programming language such as the aforementioned Python or R, to the platform on which the texts are stored, executing it there, and then retrieving the results. Examples of such platforms include:
  • ProQuest TDM Studio - as of Nov. 2020, AU Library does not subscribe to this service, but had a development/cooperation agreement through Nov. 10, 2020. As of Nov. 11, 2020, ProQuest offers faculty and graduate students access to TDM Studio through their Digital Research Support Program (deadline for proposal submissions: Nov. 30, 2020).
  • LexisNexis Data as a Service for Academic Research - as of Nov. 2020, AU Library does not subscribe to this service.
  • HathiTrust Research Center (HTRC) Analytics ("enables computational analysis of works in the HathiTrust Digital Library (HTDL) to facilitate non-profit research and educational uses of the collection"). Note to AU users: with a Library of Congress card, you have access to the HathiTrust Digital Library.
  • JSTOR Data for Research "provides datasets of content on JSTOR for use in research and teaching. Researchers may use DfR to define and submit their desired dataset to be automatically processed. Data available through the service includes metadata, n-grams, and word counts for most articles and book chapters, and for all research reports and pamphlets on JSTOR. Datasets are produced at no cost to researchers and may include data for up to 25,000 documents."
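To make the first approach concrete, here is a minimal, hypothetical sketch of local text mining in Python (one of the two languages CTRL supports). It computes word and bigram (2-gram) frequencies for a small in-memory corpus using only the standard library; the sample documents are illustrative and not drawn from any library database.

```python
# Minimal local text-mining sketch: word and bigram frequencies.
import re
from collections import Counter

# Illustrative stand-in for a locally stored corpus.
documents = [
    "Text mining derives high-quality information from text.",
    "Typical text mining tasks include text categorization and clustering.",
    "Word counts and n-grams are common units of analysis.",
]

def tokenize(text):
    """Lowercase the text and extract runs of letters as tokens."""
    return re.findall(r"[a-z]+", text.lower())

word_counts = Counter()
bigram_counts = Counter()
for doc in documents:
    tokens = tokenize(doc)
    word_counts.update(tokens)
    # Pair each token with its successor to form bigrams.
    bigram_counts.update(zip(tokens, tokens[1:]))

print(word_counts.most_common(3))
print(bigram_counts.most_common(2))
```

A real project would substitute the downloaded corpus for `documents` and typically add stop-word removal, stemming or lemmatization, and statistical modeling on top of these raw counts, for instance with libraries such as NLTK or scikit-learn.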