Artificial Intelligence and Libraries

This content is drawn from a report authored by the AU Library's Artificial Intelligence Experimentation Working Group. You can read the group's full report, covering experiments with the use of AI in library work and recommendations to library leadership, in the American University Research Archive.

What are the strengths and limitations of using AI?

Overall, the AI tools tested show promise for this specific set of library tasks. Subgroups found that AI tools could enhance library workflows to a certain extent. Most of the experiments showed that AI was a helpful assistant for getting library faculty and staff started on a task: it could create first drafts, extract broad insights from datasets, and narrow the focus of a task. The experimentation also showed that using AI can, in some cases, save time for library employees. The process was a valuable learning experience as well, through which library employees learned more about how to interact with AI to get the desired results.

However, two major limitations emerged: 1) inaccurate or unreliable output and 2) the time required for human interaction and intervention. The AI tools tested in this round of experimentation were inconsistent in the accuracy of their output. As a result, the “final” products the AI tools created could not be taken at face value and required human oversight. This was true across every subgroup for almost every workflow researched. In some cases, errors in AI-generated output would not have major consequences. But for important decision-making or customer-facing information and processes, a lack of human oversight could lead to inaccurate decisions, poor usability for patrons, or a poor reflection on the library.

Additionally, in many cases using AI for a task introduced extra steps. These included preparing materials for AI, finding examples, writing context, crafting specific and detailed prompts for the tool, and then editing, reviewing, and cleaning up the output. In some cases, preparing the tool for the task and completing the necessary back-and-forth took as much time as, or more time than, doing the task without AI. At this moment in time, the extra time and effort required of the humans using AI makes it inefficient for large-scale adoption. It is important to consider whether the time and resources required to work with AI provide enough benefit to outweigh the efficiency of existing processes.

Can working with AI reveal which tasks it is best suited to handle?

Each subgroup was successful in identifying tasks that AI was well-suited to handle and others in which it wasn’t as helpful. For example, the Business tools subgroup had success with using AI to complete a variety of administrative, writing, and editing tasks. The Ticketing & Support subgroup found that AI helped them generate new logical operators and cell functions for data sheets, merge sheets in a format-agnostic manner, and transform a large, unstructured “notes” section of the database into meaningful qualitative insights. The Data Analysis & Cleanup subgroup found AI to be a valuable partner in co-coding. On the other hand, the Archives & Metadata subgroup found that AI might not have been better than a human at creating valid EAD XML and LCSH (Library of Congress Subject Headings) headings to prepare documents for use by the Archives.
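To illustrate the kind of format-agnostic sheet merging the Ticketing & Support subgroup described, a minimal sketch is shown below. The file contents, delimiters, and column names here are hypothetical; the report does not specify the subgroup's actual data or tools. The idea is simply that exports from different systems (comma-separated vs. tab-separated, with differently capitalized headers) can be normalized and joined on a shared key.

```python
import csv
import io

def load_sheet(text, delimiter):
    """Parse delimited text (CSV or TSV) into a list of dicts,
    normalizing header names so sheets from different sources line up."""
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [{k.strip().lower(): v for k, v in row.items()} for row in reader]

def merge_sheets(a, b, key):
    """Merge two sheets on a shared key column, combining their fields."""
    index = {row[key]: dict(row) for row in a}
    for row in b:
        index.setdefault(row[key], {}).update(row)
    return list(index.values())

# Hypothetical example: one export is comma-separated, the other tab-separated.
csv_data = "Ticket ID,Status\n101,Open\n102,Closed\n"
tsv_data = "ticket id\tAssignee\n101\tAlice\n102\tBob\n"

merged = merge_sheets(load_sheet(csv_data, ","),
                      load_sheet(tsv_data, "\t"),
                      key="ticket id")
# Each merged row now carries status and assignee for one ticket.
```

In the subgroup's experiments, AI tools generated this sort of glue logic (formulas, operators, merge steps) rather than staff writing it from scratch.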

By using AI, library faculty and staff were able to learn more about the situations in which AI can serve as a valuable assistant. Through this process they also learned how to interact with AI more effectively and to develop prompts and instructions that produce better results. This concept of using AI to learn how to use it more effectively highlights the iterative, conversational nature of working with this new technology.

Can Generative AI help improve daily operations at the library?

Generative AI could be a strong potential collaborator for the library, but at this stage it is too early to tell. Some of the tasks that the subgroups attempted to streamline with AI proved useful day-to-day, but the library is far from having an AI “solution” to improve organizational efficiency. No task in this round could run without significant human setup and intervention, and experimenters were unable to automate any task completely in a way that someone else in the library could run in their own department.

Perhaps the strongest argument from this round of experimentation for using AI to improve daily operations was found in the Business Tools subgroup. This group was successful in finding easy-to-use tools that provided simple ways to make administrative tasks such as note-taking and creating presentations faster. Yet overall, the conversation about how new AI insights will change library operations is ongoing and more experimentation is needed.

Can AI generate “accurate-enough” results?

This is one area where all subgroups struggled. Results from the different AI tools were unpredictably inaccurate, and those inaccuracies ranged from very minor (e.g., the Zoom note-taking tool sometimes inaccurately transcribed names) to more concerning, such as ChatGPT’s Data Analyst providing different analyses of the same dataset depending on who was using it. At this stage, all results provided by the available tools must be double-checked for errors, and users need an understanding of the original material to recognize when Generative AI tools are making mistakes. That said, most tools the subgroups used were able to produce accurate-enough results for a first draft, but not for a final draft. At this stage of the technology, anything that AI creates requires further human review. This underscores the need to set standards and define the scope of AI-generated insights, as well as to determine where in an AI-heavy workflow to introduce robust human interaction and quality checking.

Are the piloted tools the best enterprise AI product(s) for the intended needs?

Some of the subgroups compared tools directly in this round of experimentation to find out which AI tools were the strongest at completing the required library tasks. Many of the tools tested, including Grammarly, Claude, and ChatGPT, had strengths in some areas but not in others. No one tool could “do it all” and be used for all the tasks and workflows researched by every subgroup. On the other hand, some tools were clearly weaker than others. Copilot for PowerAutomate, used by the Data Analysis & Cleanup subgroup, wasn’t robust enough to complete the tasks requested of it without significant human assistance, and Claude struggled with many tasks asked of it by the Archives & Metadata subgroup. The “best” tool is still to be determined, but for this project, ChatGPT and its associated tools seemed to be the better product for most of the tasks researched in this round.

Is an enterprise AI tool a worthwhile investment of resources at this stage?

Given the ongoing nature of AI development, an enterprise-level subscription to a Generative AI tool could be a valuable investment going forward. The remaining question would be which tool would have the most impact on the largest number of library units. Some tools, such as Zoom AI Companion, are already included in AU’s enterprise subscriptions. For some tools, such as ChatGPT and Data Analyst, all members of a subgroup were able to share a single account to conduct their experimentation, which could further save costs.

What steps can be taken to use AI ethically?

To use AI ethically, users must prioritize data privacy and bias mitigation. The subgroups protected data privacy to the fullest extent possible by anonymizing data and choosing input materials that did not contain personally identifiable information. Some tasks, however, such as linking faculty data with publications, required identifiable information to achieve the desired results.
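A minimal sketch of the kind of anonymization step described above is shown below, assuming a simple tabular record. The field names (`patron_id`, `name`, `email`) and the salt are hypothetical; the report does not detail the subgroups' actual procedures. The sketch drops direct identifiers and replaces a record ID with a stable pseudonym so rows can still be linked after the fact without exposing the original value.

```python
import hashlib

# Hypothetical PII fields to strip before sending rows to an AI tool.
PII_FIELDS = {"name", "email"}

def pseudonym(value, salt="library-salt"):
    """Replace an identifier with a stable, non-reversible token so that
    records can still be linked without exposing the original value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:8]

def anonymize(row):
    """Drop direct PII fields and pseudonymize the record identifier."""
    cleaned = {k: v for k, v in row.items() if k not in PII_FIELDS}
    if "patron_id" in cleaned:
        cleaned["patron_id"] = pseudonym(cleaned["patron_id"])
    return cleaned

record = {"patron_id": "P-1042", "name": "Jane Doe",
          "email": "jd@example.edu", "question": "Renew a book"}
safe = anonymize(record)  # safe to share: no name, no email, hashed ID
```

Keeping a salted, truncated hash rather than the raw ID is a design trade-off: it preserves linkability across records while making the original identifier impractical to recover from the shared data alone.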

Another way in which the AU Library can use AI ethically is to be aware of, and attempt to offset, AI’s ability to perpetuate bias. Users of AI need to remember that AI reflects the data it is trained on, which is human data with human biases (Davis, 2023). Upon adoption of any enterprise-wide AI tool, the library must proactively identify and address these biases by critically evaluating outputs, watching for instances of bias, and understanding how the model’s inputs can shape its results. If the library adopts AI, it will also need to adopt a purposeful commitment to deploying these technologies in ways that respect worker privacy and protect the most vulnerable.

Works Cited

Davis, V. (2023, April 20). Equity in a World of Artificial Intelligence. WCET. https://wcet.wiche.edu/frontiers/2023/04/20/strongequity-in-a-world-of-ai/