Artificial Intelligence and Libraries

This content is drawn from a report authored by the AU Library's Artificial Intelligence Exploratory Working Group. You can read the group's full report, which surveys the current state of AI and makes recommendations to library leadership, in the American University Research Archive.

Equitable Access to AI in Libraries

Beyond the call for equitable access in an institution's mission or a professional body's guiding principles, libraries face multiple practical and ethical equity considerations when investigating and introducing AI tools to library users, including but not limited to accessibility, the perpetuation of bias and existing inequities, and uneven exposure to AI.

Accessibility

One of the hopes for AI tools is that they further equity through improved access to educational resources. Davis notes examples where generative AI helps users with dyslexia better understand written communication, and where students and scholars turn to generative AI for assistance with the research process (Davis, 2023). AI tools can benefit students with disabilities by "providing executive function support," and generative AI chatbots can assist neurodivergent learners and learners with low self-confidence by offering ways to experiment with language without judgment from peers (D'Agostino, 2023). AI tools can also improve access: "providing multilingual support and accommodating different learning styles, these assistants ensure that all users, regardless of their background or abilities, can access library services and resources" (Hodonu-Wusu, 2024).

Perpetuating Bias and Existing Inequities

The risks posed by AI to perpetuate bias and further existing inequities are well documented and discussed previously in this report. Here we will further expand on those concerns as they relate to libraries and academia. Hodonu-Wusu urges libraries to reflect on the risk of perpetuating inequity and to actively account for equitable access in AI tools from conception to point of delivery. Libraries should seek input from users, particularly marginalized communities, and collaborate with developers and vendors of AI to proactively address historical biases in the training data used to develop AI (Hodonu-Wusu, 2024).

Libraries and librarians will play a vital role in addressing bias in AI tools by ensuring "the diversity and representativeness of data sets and help researchers employ appropriate methods to eliminate bias" (Michalak, 2023). Michalak further argues that librarians are well suited to developing resources and programs that train others to navigate the ethical concerns around bias and historical inequities. Acknowledging the significant role of academic libraries in addressing these issues, Michalak introduces the "Academic Librarian Framework for Ethical AI Policy Development," which centers the unique skills and techniques academic librarians bring to the implementation of ethical and equitable AI (Michalak, 2023).

Algorithmic bias is described as "systemic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others," and occurs "when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process" (Davis, 2023). According to Davis, it stems from generative AI's reliance on large sets of data scraped from the open Internet; these scraped datasets tend to over-represent certain advantaged groups and are rife with biased information (Davis, 2023).

As libraries explore and deploy AI within their communities, it is imperative to continuously account for and communicate about the bias and inequity risks posed by AI and the active steps taken to address these concerns.

Uneven Exposure to AI

Student and faculty exposure to AI varies at the individual level and is shaped by access to and comfort with pre-AI technology, exposure to AI in prior academic settings, and the ability to afford the most advanced AI tools. Libraries must account for each user’s unique relationship with the digital divide and the ability to afford access to high quality information and technology (American Library Association, 2015).

Currently, academia is navigating inequity of exposure to AI caused by the uneven adoption of AI policies and outright bans on generative AI across the education system (D'Agostino, 2023). This inequity is further exacerbated by unequal access to physical technology. In 2021, a Pew Research Center study found that approximately 15% of adults accessed the internet only on smartphones, and that this share jumped to 28% among adults aged 18-29. Access also varies along demographic lines: Hispanic young adults were 150% more likely, and Black young adults 70% more likely, than White young adults to be smartphone-only internet users. Slower internet connections and hardware limitations on mobile devices can hamper users' ability to work with sophisticated prompts and lengthy AI-generated results (Davis, 2023). With the most powerful AI tools paywalled, many students and faculty will struggle with access, as this barrier adds one more financial stressor to their lives (D'Agostino, 2023). All of this is particularly concerning to academic libraries because it can prevent users from fully accessing the benefits of AI tools, engaging with learning and research, and achieving their full potential.

Uneven exposure to AI is shaped not only by prior academic experiences and technology access but also by the characteristics of specific academic disciplines and industrial sectors. Eastwood discusses the discrepancy in adoption rates of AI across various sectors, with Education among the most prolific adopters of AI, behind Manufacturing, Information, and Healthcare (Eastwood, 2024). Von Garrel and Mayer surveyed students in Germany and found significant differences between academic disciplines in both the frequency of AI use and the AI methods employed, with STEM (science, technology, engineering, and math) students the most likely to utilize AI in their studies (von Garrel & Mayer, 2023). A 2023 De Gruyter report included a survey of STEM and HSS (humanities and social sciences) scholars, which found that STEM scholars were 50% more likely than HSS scholars to be very familiar with AI, and more than twice as likely to use AI to draft code. HSS scholars had a much greater need for AI-based language translation, whereas STEM scholars used AI to simplify complex concepts (Abbas & Hinz, 2023).

Accessibility, the exacerbation of bias and existing inequities, and uneven exposure to AI are just some of the many challenges facing libraries committed to equitable access to AI tools. These challenges, and the opportunities that accompany them, should be accounted for as libraries continue to engage with, adopt, and instruct on AI tools.