This content is drawn from a report authored by the AU Library's Artificial Intelligence Exploratory Working Group. You can read the group's full report, which covers the current state of AI and makes recommendations to library leadership, in the American University Research Archive.
The Association of College and Research Libraries (ACRL) created the Framework for Information Literacy for Higher Education (The Framework), which was accepted by the board in 2016 (ACRL Board, 2015). Since that time, The Framework has guided the profession’s approach to information literacy instruction.
Fifteen years prior to The Framework, ACRL developed and published Information Literacy Competency Standards for Higher Education (ACRL Board of Directors, 2000). This earlier approach focused on skill development rather than on flexible critical thinking concepts. Recognizing the rapid and dynamic shifts in the information ecosystem, The Framework set forth a series of six interdependent fundamental concepts about the information ecosystem, emphasizing various types of critical thinking in relation to information. The approach to skill development in the Standards was reoriented to be in service of these essential information literacy concepts (ACRL Board, 2015).
The Framework assumes a dynamic growth trajectory. For each “frame,” both “knowledge practices” and “dispositions” are specified. Knowledge practices are the ways in which someone can integrate their developing understanding of a specific frame into activity. The dispositions identify the necessary internal orientation a learner must have in relation to a particular frame to achieve mastery. In this way, both activity and orientation are incorporated into information literacy and are given equal weight.
For example, in relation to the conceptual frame “Authority Is Constructed and Contextual,” one way a learner demonstrates mastery through activity is to “define different types of authority, such as subject expertise (e.g., scholarship), societal position (e.g., public office or title), or special experience (e.g., participating in a historic event)” (ACRL Board, 2015). One of the necessary dispositions for a learner to develop in relation to this frame is to “develop and maintain an open mind when encountering varied and sometimes conflicting perspectives” (ACRL Board, 2015).
The six frames are: Authority Is Constructed and Contextual; Information Creation as a Process; Information Has Value; Research as Inquiry; Scholarship as Conversation; and Searching as Strategic Exploration.
The following sections will consider the implications of AI within the context of each of the six Frames. Fundamentally, we suggest that The Framework provides the appropriate conceptual underpinning to address Artificial Intelligence in information literacy. While AI is disruptive, The Framework was designed to accommodate disruptive technologies within the information ecosystem. Our analysis demonstrates that the paradigm of The Framework is sufficiently conceptual and flexible to allow consideration of the potential use cases for AI and to focus on mitigating potential harms.
“Information resources reflect their creators’ expertise and credibility and are evaluated based on the information need and the context in which the information will be used. Authority is constructed in that various communities may recognize different types of authority. It is contextual in that the information need may help to determine the level of authority required” (ACRL Board, 2015).
This frame encourages users to evaluate the authority of creators in order to develop deep contextual knowledge of the source and its place in scholarship. Students often begin by evaluating the source by creation type and progress toward lateral reading techniques which investigate the source and its author in more detail. This frame recognizes that what is authoritative in one context may not be authoritative in another. Seeking authoritative voices appropriate to the context of inquiry is at the heart of this frame and is central to scholarly discourse.
“Authority is Constructed and Contextual” is deeply relevant to generative AI. Because some generative AI tools, such as ChatGPT, create content without attribution, the authority of the work can be invisible or hard to define. Novice researchers may not be able to differentiate between content created by AI and content created by a person, which will interrupt efforts to encourage engagement with the concepts of the first frame. Library instruction will need to expand to include the ethical and procedural implications of generative AI.
There are multiple types and purposes of AI, and their appropriate use is likewise contextual. While a general-purpose generative chatbot might produce hallucinated information, more specialized AI systems developed for specific research projects may be appropriate within their context.
Different disciplinary norms are emerging for expert researchers’ use of AI, and these should be taught within the disciplines. This section is specifically concerned with generative AI, which creates content that seems authoritative without possessing expertise or credibility. While this may seem novel and even useful, it is fundamentally destructive for knowledge development in novice users. An AI tool that creates primary source materials obscures expertise, position, and experience. Authors and creators may have their works used without attribution, denying them credit for their intellectual property, and recognition of the human creator’s authority is muddled by hallucinated source materials.

Libraries increasingly receive requests for sources bearing the names of real scholars and existing journals but with made-up titles. This adds to the workload of library employees and frustrates scholars in search of genuine information. Novice users conflate the output of generative AI with authoritative source material, which builds gaps into their foundational knowledge. Standards for the use of AI with respect to academic integrity are still emergent. Experienced users integrate generative AI content through a process of evaluation and amendment (ideally with citation). Attribution of AI-generated content is crucial both for our own understanding and to support genuine scholarship.
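As a toy illustration of the verification burden described above, the sketch below flags citation requests whose titles do not match an author's known works. Every name and title here is invented for illustration; real verification would consult library catalogs, CrossRef/DOI records, and publisher databases rather than a hard-coded dictionary.

```python
# Toy sketch: flag requested citations whose titles are not among an
# author's known works. All data is invented; a real check would query
# catalogs and DOI registries, not a local dictionary.

known_works = {
    "dr. example": {"a real article title", "another real article"},
}

def looks_fabricated(author, title):
    """True when the author is known but the requested title is not among their works."""
    works = known_works.get(author.lower())
    return works is not None and title.lower() not in works

print(looks_fabricated("Dr. Example", "A Plausible But Invented Title"))  # True
print(looks_fabricated("Dr. Example", "A Real Article Title"))            # False
```

A check like this can only catch the simplest fabrications; it illustrates why hallucinated citations that mix real scholars with invented titles are so costly to untangle by hand.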
In order to develop their information literacy practices, users will need to investigate the authority of information in more complex ways than before. Source materials with anonymous authors already exist in our current environment, and we encourage students to find other sources of information to verify the accuracy of unnamed source materials. With AI, machine-mediated content will be integrated into source materials that list named authors. If the machine-mediated content is not cited, potential misinformation may go unidentified and uninvestigated. Novice users rarely go beyond assessing author credentials to assess credibility, so if researchers rely on unreliable AI, a novice could be misled, poisoning the well of research. While this was true before the advent of generative AI, the possibility has increased. In order to mitigate the risks of misinformation generated by AI, students should be able to:
Generative AI does not have authority at this stage, so we cannot build our scholarship on this technology. Expert users should find ways to incorporate AI ethically into their authorship, including clearly citing any use of generative AI in their works or, where appropriate, describing it in the methodology section.
“Information in any format is produced to convey a message and is shared via a selected delivery method. The iterative processes of researching, creating, revising, and disseminating information vary, and the resulting product reflects these differences” (ACRL Board, 2015).
Another central tenet of scholarship is that information created through different processes is valued differently within each discipline. This frame encourages investigation of source creation to determine how and why a source was created, to evaluate the appropriateness of information for a user’s needs, and to think critically about the medium and the message. “Information Creation as a Process” comes up in information literacy instruction when we discuss peer-reviewed source materials, as we compare sources published independently on the web with sources published through traditional scholarly processes, and as scholars develop individual research processes. Artificial intelligence and algorithms are already shaping how we access information in databases and online. Scholarship on the suppression and promotion of information by algorithms notes that they affect our ability to find accurate and relevant information. Additionally, AI processes are often invisible or proprietary, so users cannot determine what steps went into the production of a source retrieved or generated by AI.
Generative AI is a new information creation process with proprietary or poorly defined information creation mechanisms. This presents problems for both expert and novice researchers as we look for ways to verify information before including it in our own scholarship. AI that translates or generates materials can replicate patterns but has not yet mastered the contextual meaning and cultural associations that humans create with language. AI-generated materials are not yet governed by professional, ethical, or disciplinary norms or regulations; therefore, information processes that were more clearly defined before AI have become muddled. AI materials are produced at a faster pace than human-generated materials, which might overwhelm the media environment or (re)produce confirmation biases.
At this stage, AI-generated content requires more work to evaluate. As researchers encounter materials that have been created using unknown processes, methodological analysis is undermined. In order to develop researchers well-versed in evaluation skills related to information creation processes, we must encourage them to:
Information creation processes for algorithms and artificial intelligence are currently unclear or protected as proprietary. Until regulations and protections can be enacted, users must perform additional analysis of both machine-generated and human-generated content to uncover the process of creation. Traditional methodologies, scholarly practices (like citation and stylistic choices), and knowledge development remain important for all scholars. As AI continues to evolve, these practices will be needed to assess how and why information was created.
“Information possesses several dimensions of value, including as a commodity, as a means of education, as a means of influence, and as a means of negotiating and understanding the world. Legal and socioeconomic interests influence information production and dissemination” (ACRL Board, 2015).
Frame Three addresses the fact that information has value and can be used to educate, influence, and negotiate. It recognizes that information is shaped by the situation, time, and culture in which it is created and distributed. As such, it interrogates the ways in which certain voices are marginalized and how information can be used to create change. It also investigates the incentive structures inherent in our information ecosystem and the way in which information and information consumers are commodified.
Important user Knowledge Practices and Dispositions for Frame Three center around crucial research skills such as citation and attribution of the ideas of others, making informed choices around copyright and fair use, and making decisions on publication and sharing of information. Information literate learners value the skills, time, and effort needed to create new knowledge and demonstrate an understanding of how information is influenced by social constructs. They also examine who may or may not be included in the information being shared and respect the importance of access to high-quality information for all.
It is becoming clear that information creation will increasingly rely on AI tools. This practice can compound existing concerns about the creation, accuracy, and control of information. Information generated by AI can in some cases look very similar to information generated by humans, especially as current generative AI tools improve over time. The ability to evaluate who is creating information, how they are creating it, and for what purposes will become even more important as people navigate the information presented to them every day.
In addition to analyzing who is creating information and why, knowledge practices in this frame involve examining equitable access to information. Many current generative AI tools offer both paid and free versions, with the paid option enabling users to produce higher-quality information and tackle more complex tasks. This cost poses a barrier to potential users, and uneven access to generative AI tools can further social inequalities. Because information has value, those controlling its creation also command value. The skills required for effective use of generative AI tools will also soon have value in our society. Unequal access to these tools can result in disparities in the ability to obtain or produce the highest quality information. Individuals with access, especially to paid versions, can develop better skills in knowledge creation using AI. They will also have the potential to increase their productivity by using tools that can help them create and synthesize information quickly. Those without access lack these same opportunities, and lack of practice with the tools can hinder their ability to recognize the quality of AI-generated output.
Lastly, generative AI tools also muddle issues of copyright, privacy, citation, and attribution. It is not yet common practice for users of these tools to cite their use in the creation of information products, whether the tool assisted during the creation process or produced the whole product. In addition, many of these tools generate new information from content originally created by humans, without the permission of the original creators. At the time of this writing, the U.S. Copyright Office has stated that AI-generated content is not copyrightable; however, this legal situation may be in flux. Taken together, these issues mean AI will likely exacerbate concerns about privacy and about the information consumer as the product, as long as personal information and behavior are used to continue training AI products.
As generative AI tools are introduced and evolve, how students learn about the value, access, and use of information produced by others will need to adapt. Integrating lessons about AI-generated information and its challenges into existing information literacy curricula will be crucial. As they learn how to evaluate the information that they come across every day, students will need to know how to:
Instructors of information literacy should also ensure that students are taught to use AI in an informed and responsible way, enabling them to make deliberate choices about the tools and situations in which AI can either benefit or harm them. This will prepare them to contribute responsibly to the growing body of knowledge produced with AI tools, protecting broad access to this new information and providing proper attribution for clarity.
“Research is iterative and depends upon asking increasingly complex or new questions whose answers in turn develop additional questions or lines of inquiry in any field” (ACRL Board, 2015).
The fourth frame in the ACRL Framework focuses on the iterative nature of the research process and the scholarly conversation. Research is meant to build upon the knowledge that has come before, ask increasingly complex questions, and involve collaboration among experts to expand the knowledge in their field. Information Literacy instruction within this frame addresses each step of the research process and how to go about the practice of scholarly inquiry.
Knowledge Practices in this frame further a learner’s proficiency in each step of the research process, from formulating research questions to drawing evidence-based conclusions. Dispositions for an information literate person in this frame include respecting research as an open-ended exploratory process, valuing intellectual curiosity, seeking multiple perspectives, and remaining critical of research done by others and by themselves.
One connection between the concept of Research as Inquiry and AI is clear: how can generative AI tools enhance or detract from developing research skills? Generative AI tools can help students new to the research process think through options for their research projects. Generative AI tools can be particularly helpful for students who need assistance with:
However, AI also presents significant challenges for the research process. First, instances of false information and inaccurate citations for nonexistent research have been reported with tools like ChatGPT (Coffey, 2023). Students who wish to use such tools as “research assistants” must therefore critically evaluate and monitor the information the tool presents for weaknesses and inaccuracies. This aligns with one of this frame’s dispositions: keeping an open mind while never forgetting to take a critical stance. Another issue with relying too heavily on generative AI tools for finding previous research is that each tool carries bias, since these tools are trained on human language and behavior. Students may need to recognize and resist a tool’s bias in order to seek diverse research perspectives rather than relying only on what a tool feeds them. Lastly, ethical and legal guidelines must be followed when gathering and using any information provided by AI tools.
Information Literacy curricula can seamlessly adapt this frame to incorporate information on how generative AI tools can assist students with beginning their own research process. A balanced approach between the possibilities and the challenges ensures that students can cultivate a critical, ethical, and responsible approach to utilizing these tools instead of relying on them to complete their research.
In learning how to responsibly use AI tools as part of their research process, students will need to know how to:
Frame Four holds considerable potential for helping students learn how to navigate a complicated research process ethically and responsibly, while showing them how to do so with AI tools.
“Communities of scholars, researchers, or professionals engage in sustained discourse with new insights and discoveries occurring over time as a result of varied perspectives and interpretations” (ACRL Board, 2015).
The knowledge practices associated with this frame emphasize evaluating the way scholarly discourses evolve through scholars interacting with each other via scholarly products. This includes correctly situating one’s own work within relevant scholarly conversations and understanding how disparate pieces of the discourse contribute to specific academic disciplines.
The relevant dispositions focus on identifying and responsibly joining appropriate conversations as a knowledge-creator, while developing a clear appreciation that conversations are ongoing. Researchers enter these conversations in medias res and must suspend judgment until they understand the relevant conversational contexts. Researchers must also grasp that each conversation likely has specialized language associated with it that participants must learn in order to fully engage. Part of being a responsible participant is valuing the contributions of others in the conversation.
One of the potentially powerful uses of AI is analyzing massive amounts of data. Text and data mining could be used to analyze changes in scholarly discourse over time. Some research use cases are still emerging as AI improves but will likely be part of the research landscape in the near to medium term. Right now, most of these potentials are more appropriate for those who are already experts in their domains than for learners. Some examples are:
A common use of generative AI is to provide summaries of larger works or multiple works. Summarizing may be useful as a supplement, but there are several risks relevant for this frame.
A particular danger associated with this frame is that AI-generated content (which may or may not be accurate) could successfully masquerade as the work of a scholar or be associated with an institution. There are numerous risks here. There is the risk to an individual scholar’s reputation (of the type associated with deepfakes). There is also the risk that, even if the content is not associated with a real person or institution, it is accepted as real and included in the scholarly conversation, poisoning the well if the data is hallucinated.
The question of how to credit AI in one’s work, so that it is clear to other conversational partners when and how it was used, will be essential to maintaining the integrity of the scholarly conversation. Such standards, as well as mechanisms for citation, are emergent, and the scholarly world will need to consider them carefully. Part of the purpose of citation is to allow future researchers joining the conversation to go back to prior evidence and investigate the sources used in argumentation. However, AI is constantly evolving, and that dynamism means the tool that was used in the past is not the same in the present, much less the future. It therefore cannot assure replicability of results unless changes to the tool are made transparent and reproducible. (A comparable problem in statistical analysis would arise if a newer version of, for example, SPSS produced different analytical results from the same data using the same statistical measure.) Scholarly disciplines will need to grapple with AI’s role: when and how it is permissible to use, and what documentation is required to represent its contribution in the scholarly record.
All students will need to understand the following:
For more advanced researchers:
“Searching for information is often nonlinear and iterative, requiring the evaluation of a range of information sources and the mental flexibility to pursue alternate avenues as new understanding develops” (ACRL Board, 2015).
The knowledge practices associated with this frame emphasize the process of searching for information: identifying parties likely to create needed information, determining an initial scope of the information needed, adjusting approaches, and understanding when an information need has been met. A number of specific knowledge practices about searching, such as Boolean searching, are likely to become less relevant in the intermediate future as a result of AI. While some traditional search techniques will diminish in significance, new search strategies, such as prompt engineering, will evolve and grow in importance.
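To make concrete the kind of mechanical query logic this frame has traditionally covered, the sketch below evaluates a Boolean search, ("artificial intelligence" OR "chatbot") AND "ethics", against a small invented corpus of titles. The corpus and helper function are hypothetical; a real discovery system would use an indexed search engine rather than a linear scan.

```python
# Minimal sketch of Boolean keyword matching over a small, invented corpus.

def matches(text, any_of, all_of):
    """True if text contains at least one term from any_of AND every term in all_of."""
    lowered = text.lower()
    return any(term in lowered for term in any_of) and all(term in lowered for term in all_of)

corpus = [
    "The ethics of artificial intelligence in libraries",
    "Chatbot design patterns",
    "Ethics of open access publishing",
]

# Boolean query: ("artificial intelligence" OR "chatbot") AND "ethics"
results = [title for title in corpus
           if matches(title, any_of=["artificial intelligence", "chatbot"],
                      all_of=["ethics"])]
print(results)  # ['The ethics of artificial intelligence in libraries']
```

By contrast, a prompt-engineering approach would express the same need as natural language ("find recent writing on the ethics of AI chatbots"), leaving the matching logic to the tool rather than the searcher.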
The dispositions associated with this frame remain relevant. They include mental flexibility, persistence in the face of challenges, creativity, being able to assess relevance and value of sources, and both knowing when and being willing to ask for expert help.
AI is now integrated into many of our search tools in ways that are often opaque and over which we have no control. This trend is likely to continue and may make some of this frame’s knowledge practices that focus on methods of searching less relevant. As AI is integrated into our search tools, we should strive to understand, as much as possible, how these tools work. For example, is the tool personalized, showing different results to different people based on prior searches? Is it using some metric as a “quality” indicator to raise the ranking of certain pieces? What characteristics affect the ordering of results?
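One way to picture how an opaque “quality” signal can reorder otherwise relevant results is the sketch below, which blends a relevance score with a weighted quality metric. The documents, scores, and weights are all invented for illustration; real ranking systems combine many more signals in ways vendors rarely disclose.

```python
# Hypothetical illustration: how weighting an opaque "quality" signal
# against plain relevance changes the order in which results appear.

def rank(docs, quality_weight):
    """Sort documents by a blend of relevance and a quality metric."""
    return sorted(docs,
                  key=lambda d: (1 - quality_weight) * d["relevance"]
                                + quality_weight * d["quality"],
                  reverse=True)

docs = [
    {"title": "A", "relevance": 0.9, "quality": 0.2},
    {"title": "B", "relevance": 0.6, "quality": 0.9},
]

# With no quality weighting, the most relevant document comes first...
print([d["title"] for d in rank(docs, quality_weight=0.0)])  # ['A', 'B']
# ...but a heavy quality weighting reverses the ordering.
print([d["title"] for d in rank(docs, quality_weight=0.8)])  # ['B', 'A']
```

The point for students is not the arithmetic but the fact that a single hidden parameter can silently decide which source they see first.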
While AI requires us to pay far more attention to how results are ordered and why, many of the essential aspects of this frame remain relevant regardless of the tool that is used. Information literacy instruction still must strongly emphasize helping students develop the ability to:
Also, students will need to be aware of the importance and meaning of the way search tools, including the library’s, order results. They need to understand what values are encoded in the algorithms that present materials in a particular order, whether they are personalized, and the implications.