This content is drawn from a report authored by the AU Library's Artificial Intelligence Exploratory Working Group. You can read the group's full report, which covers the current state of AI and makes recommendations to library leadership, in the American University Research Archive.
Artificial intelligence research as a whole attempts to make machines capable of mimicking human functions. AI systems are categorized by how, and to what degree, they can emulate humans. In general, systems that perform in the most human-like ways are considered more “evolved” forms of AI.
Three large categories for AI are: Narrow or Weak AI, General AI, and Super AI. Narrow AI is trained for a very specific set of circumstances and cannot perform tasks beyond those it was trained to do; it cannot select its own application of a skill to a task. General AI is more human-like and can carry out a variety of ordinary human tasks, choosing when to apply different strategies just as a human would. Super AI has an intelligence and capacity that exceeds that of all humans. At present, only Narrow AI exists.
Within the larger category of Narrow AI, AI systems can exhibit four types of basic functionality: reactive, limited memory, theory of mind, and self-aware. All AI applications that humans have developed and use today fit into the first two categories; the last two currently exist only as concepts (Jones, 2023).
Reactive AI systems are exactly as the name states: they react to different kinds of stimuli. Reactive systems cannot “remember” or “learn” from previous experience, and therefore can’t use previous experience to inform their reactions in the present. Reactive machines are best at responding to inputs and will not change or improve their reactions over time. An example of a reactive machine is Deep Blue, a chess-playing AI system. Deep Blue knows the rules of chess and can apply them to the game, but does not “learn” its opponents’ chess-playing style or adapt to it.
Limited memory AI systems can do everything that reactive systems can do, but they can also learn from previous experience to make decisions and adapt their reactions in the future. Any AI system that uses deep learning is trained on large volumes of human data, and limited memory systems can use their experience with this training data as a reference when responding to future prompts. A system's accuracy should increase over time as it “learns” and adapts. Currently, these AI systems can only perform specific tasks based on the prompts they receive. They cannot do more than they are programmed to do, and the quality of the response depends on the quality of the prompt given by a human. Familiar types of limited memory AI include chatbots, image recognition tools, virtual assistants, and self-driving vehicles.
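The contrast between the two existing categories can be sketched in a few lines of code. This is an illustrative toy, not a description of any real system: the reactive function maps each input to an output statelessly, while the limited memory class records past prompts and adapts its responses accordingly.

```python
# Hypothetical sketch: a reactive responder vs. a limited memory responder.

def reactive_answer(prompt: str) -> str:
    """Same input always yields the same output; nothing is remembered."""
    canned = {"hours": "Open 9-5", "wifi": "Network: AU-Guest"}
    return canned.get(prompt, "Please ask a librarian")


class LimitedMemoryAnswer:
    """Keeps a record of past prompts and adapts future responses."""

    def __init__(self):
        self.seen = {}  # prompt -> number of times asked

    def answer(self, prompt: str) -> str:
        count = self.seen.get(prompt, 0) + 1
        self.seen[prompt] = count
        if count > 1:
            # Behavior changes because of remembered experience.
            return f"(asked {count} times) Please ask a librarian"
        return "Please ask a librarian"
```

No matter how many times `reactive_answer` is called, its behavior never changes; `LimitedMemoryAnswer` gives a different response the second time it sees the same prompt, which is the essence of adapting from experience.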
Within limited memory systems, we can break the classification down even further based on the tasks and skill level of the AI tool. Generative AI, such as ChatGPT, focuses on creating new content. It uses neural networks and machine learning to mimic human production. It is not designed to be factual, but rather to mimic what humans do in areas such as communication, language, and art. At this stage, the output of generative AI is not vetted for truth or accuracy and can contain significant bias and errors. Communicative AI, such as Siri, simulates human conversation and can interact with users in a human-seeming way. It searches and gathers existing information for the user based on the prompt it receives; despite the appearance of conversation, it does not create anything new. Communicative AI relies on natural language processing, in which computers process and imitate human language after being trained on large amounts of authentic human language data. Predictive AI focuses on forecasting future events. Built on statistical models and machine learning, it identifies patterns in past data and uses them to anticipate what comes next.
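The core pattern behind predictive AI, fitting a statistical model to past observations and extrapolating forward, can be shown with the simplest possible model. This is a hypothetical sketch with made-up data; real predictive systems use far richer models, but the learn-from-history-then-forecast pattern is the same.

```python
# Minimal sketch of predictive AI: ordinary least squares on one variable.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error over the data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Past monthly usage counts (entirely made-up data) ...
months = [1, 2, 3, 4]
usage = [10.0, 12.0, 14.0, 16.0]
slope, intercept = fit_line(months, usage)

# ... used to forecast month 5.
forecast = slope * 5 + intercept
```

Here the model "learns" that usage grows by two per month and predicts 18.0 for month 5; a production system would swap in a trained machine learning model, but the statistical foundation the text describes is this same fit-then-extrapolate step.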
The image below gives an example by breaking down the classification of ChatGPT, a popular generative AI chatbot (Zwingmann, 2023). ChatGPT is a form of generative AI based on large language models (LLMs), which in turn fits into the larger classification of Narrow AI. Not all generative AI is text-based or uses LLMs, but tools built to mimic human language do so by learning, and later replicating, statistical relationships between texts within large datasets of human language examples.
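The phrase "statistical relationships between texts" can be made concrete with a toy bigram model. This is an illustrative sketch, not how ChatGPT is implemented: it simply counts which word tends to follow which in a tiny corpus, whereas an LLM learns vastly richer relationships over billions of examples.

```python
# Toy illustration of learning statistical relationships in language:
# count which word follows which, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = "the library supports research and the library supports teaching"

follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1


def most_likely_next(word: str) -> str:
    """Return the statistically most frequent follower of `word`."""
    return follows[word].most_common(1)[0][0]

# In this corpus, "library" is always followed by "supports".
print(most_likely_next("library"))
```

Generating text by repeatedly picking a likely next word from such counts is the crudest ancestor of what an LLM does with learned statistical structure.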
(Zwingmann, 2023)
While the rapid development of LLMs such as GPT-4 currently dominates discussions, expert systems are another type of Narrow AI; depending on the program, they may be either reactive or limited memory. The term “expert system” is used variably, but such systems often incorporate machine learning, are trained on much more specific datasets, and have additional rules built into their code meant to mimic the cognitive processes of subject domain experts.
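The rule-based side of an expert system can be sketched as hand-written if-then rules that encode an expert's reasoning. The scenario and function below are hypothetical examples invented for illustration; real expert systems combine many such rules with an inference engine and, often, machine learning.

```python
# Hypothetical sketch: expert-authored rules routing a library
# reference question, mimicking how a reference librarian triages.

def triage_reference_question(question: dict) -> str:
    """Route a reference question using expert-authored rules, in priority order."""
    if question.get("needs_archival_material"):
        return "refer_to_archives"
    if question.get("subject") == "law":
        return "refer_to_law_librarian"
    if question.get("is_quick_fact"):
        return "answer_at_desk"
    # No rule fired: fall back to a human consultation.
    return "schedule_consultation"
```

The rules fire in a fixed priority order chosen by the "expert," which is what distinguishes this approach from a model that learns its own decision boundaries from data.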
Abbas, R., & Hinz, A. (2023). Cautious but Curious: AI Adoption Trends Among Scholars. De Gruyter. https://blog.degruyter.com/wp-content/uploads/2023/10/De-Gruyter-Insights-Report-AI-Adoption-Trends-Among-Scholars.pdf
ACRL Board. (2015, February 9). Framework for Information Literacy for Higher Education [Text]. Association of College & Research Libraries (ACRL). https://www.ala.org/acrl/standards/ilframework
ACRL Board of Directors. (2000). Information Literacy Competency Standards for Higher Education. The Association of College and Research Libraries. http://hdl.handle.net/11213/7668
American Library Association. (2004). Equity of Access. American Libraries, 35(6), 1–14.
American Library Association. (2007, April 19). Key Action Areas. http://www.ala.org/aboutala/missionpriorities/keyactionareas
American Library Association. (2015, October 23). Access to Library Resources and Services. http://www.ala.org/advocacy/intfreedom/access
American University. (2018). American University Glossary of Key Terms. https://www.american.edu/president/diversity/inclusive-excellence/upload/key-terms_180205_f.pdf
American University Library. (n.d.). Vision, Mission, and Values. Retrieved April 4, 2024, from https://www.american.edu/library/about/vision.cfm
American University Library. (2023). American University Library Strategic Plan 2023-2026. https://www.american.edu/library/about/strategic-plan/upload/au-library-strategic-plan_2023-2026.pdf
Association of Research Libraries. (2024). Research Libraries Guiding Principles for Artificial Intelligence (p. 2). Association of Research Libraries. https://doi.org/10.29242/principles.ai2024
Broom, D. (2023, August 18). Who owns the song you wrote with AI? An expert explains. World Economic Forum. https://www.weforum.org/agenda/2023/08/intellectual-property-ai-creativity/
Brumfiel, G. (2023, October 12). New proteins, better batteries: Scientists are using AI to speed up discoveries. NPR. https://www.npr.org/sections/health-shots/2023/10/12/1205201928/artificial-intelligence-ai-scientific-discoveries-proteins-drugs-solar
Can Xi Jinping control AI without crushing it? (2023, April 18). The Economist. https://www.economist.com/china/2023/04/18/can-xi-jinping-control-ai-without-crushing-it
Coffey, L. (2023, November). AI, the Next Chapter for College Librarians. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/libraries/2023/11/03/ai-marks-next-chapter-college-librarians
Consumer Financial Protection Bureau. (2023, June 6). Chatbots in consumer finance. Consumer Financial Protection Bureau. https://www.consumerfinance.gov/data-research/research-reports/chatbots-in-consumer-finance/chatbots-in-consumer-finance/
D’Agostino, S. (2023, June 5). How AI Tools Both Help and Hinder Equity. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/06/05/how-ai-tools-both-help-and-hinder-equity
Davis, V. (2023, April 20). Equity in a World of Artificial Intelligence. WCET. https://wcet.wiche.edu/frontiers/2023/04/20/strongequity-in-a-world-of-ai/
Eastwood, B. (2024, February 7). The who, what, and where of AI adoption in America. Ideas Made to Matter. https://mitsloan.mit.edu/ideas-made-to-matter/who-what-and-where-ai-adoption-america
European Union. (2016, April 27). General Data Protection Regulation. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX%3A32016R0679
Frueh, S. (2023, November 6). How AI is shaping scientific discovery. National Academies. https://www.nationalacademies.org/news/2023/11/how-ai-is-shaping-scientific-discovery
Generative AI and libraries: 7 contexts. (2023, November 12). LorcanDempsey.Net. https://www.lorcandempsey.net/generative-ai-and-libraries-7-contexts/
Google. (2024, May 22). Gemini Advanced. https://gemini.google.com/advanced
HBR. (2023). AI Won’t Replace Humans—But Humans With AI Will Replace Humans Without AI. Harvard Business Review. https://hbr.org/2023/08/ai-wont-replace-humans-but-humans-with-ai-will-replace-humans-without-ai
Hodonu-Wusu, J. O. (2024). The rise of artificial intelligence in libraries: The ethical and equitable methodologies, and prospects for empowering library users. AI Ethics. https://doi.org/10.1007/s43681-024-00432-7
How to worry wisely about artificial intelligence. (2023, April 22). The Economist. https://www.economist.com/leaders/2023/04/20/how-to-worry-wisely-about-artificial-intelligence
Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base—Analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
Huang, L. (2023). Ethics of Artificial Intelligence in Education: Student Privacy and Data Protection. Science Insights Education Frontiers, 16(2), 2577–2587. https://doi.org/10.15354/sief.23.re202
Jones, B. M. (2023, September 28). How Generative AI Tools Help Transform Academic Research. Forbes. https://www.forbes.com/sites/beatajones/2023/09/28/how-generative-ai-tools-help-transform-academic-research/
Kretschmer, M., Meletti, B., & Porangaba, L. H. (2022). Artificial intelligence and intellectual property: Copyright and patents—a response by the CREATe Centre to the UK Intellectual Property Office’s open consultation. Journal of Intellectual Property Law & Practice, 17(3), 321–326. https://doi.org/10.1093/jiplp/jpac013
Liu, F., Budiu, R., Zhang, A., & Cionca, E. (2023, October 1). ChatGPT, Bard, or Bing Chat? Differences among 3 generative-AI bots. Nielsen Norman Group. https://www.nngroup.com/articles/ai-bot-comparison/
Marcus, G. (2023, December 10). AI’s Jurassic Park moment [Substack]. Marcus on AI. https://garymarcus.substack.com/p/ais-jurassic-park-moment
McKendrick, J. (2022). Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms? Forbes. https://www.forbes.com/sites/joemckendrick/2022/12/21/who-ultimately-owns-content-generated-by-chatgpt-and-other-ai-platforms/
McSherry, C. (2017, February 13). Publishers Still Fighting to Bury Universities, Libraries in Fees for Making Fair Use of Academic Excerpts | Electronic Frontier Foundation. EFF: Electronic Frontier Foundation. https://www.eff.org/deeplinks/2017/02/publishers-still-fighting-bury-universities-libraries-fees-making-fair-use
Michalak, R. (2023). From Ethics to Execution: The Role of Academic Librarians in Artificial Intelligence (AI) Policy-Making at Colleges and Universities. Journal of Library Administration, 63(7), 928–938. https://doi.org/10.1080/01930826.2023.2262367
Microsoft and OpenAI extend partnership. (2023, January 23). The Official Microsoft Blog. https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/
Naruto v. Slater (9th Cir. April 23, 2018).
New York Times v. Microsoft et Al., 1:23-cv-11195 (S.D.N.Y.).
Nussey, S., & Kelly, T. (2023, July 3). Japan leaning toward softer AI rules than EU, official close to deliberations says. Reuters. https://www.reuters.com/technology/japan-leaning-toward-softer-ai-rules-than-eu-source-2023-07-03/
Palace, V. M. (2019). What If Artificial Intelligence Wrote This: Artificial Intelligence and Copyright Law Note. Florida Law Review, 71(1), 217.
Pelletier, K., Robert, J., Muscanell, N., McCormack, M., Reeves, J., Arbino, N., Grajek, S., Birdwell, T., Liu, D., Mandernach, J., Moore, A., Porcaro, A., Rutledge, R., & Zimmern, J. (2023). 2023 EDUCAUSE horizon report: Teaching and learning edition (978-1-933046-18–1; 2023 EDUCAUSE Horizon Report, p. 55). EDUCAUSE. https://library.educause.edu/-/media/files/library/2023/4/2023hrteachinglearning.pdf?la=en&hash=195420BF5A2F09991379CBE68858EF10D7088AF5
Rao, J., Gao, S., Mai, G., & Janowicz, K. (2023). Building Privacy-Preserving and Secure Geospatial Artificial Intelligence Foundation Models (Vision Paper). Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems, 1–4. https://doi.org/10.1145/3589132.3625611
Recker, J. (2022, March 16). A new AI can help historians decipher damaged Ancient Greek texts. Smithsonian Magazine. https://www.smithsonianmag.com/smart-news/a-new-ai-can-help-historians-decipher-damaged-ancient-greek-texts-180979736/
Robert, J., & Muscanell, N. (2023). 2023 EDUCAUSE Horizon Action Plan: Generative AI.
Roberts, M. (2023). AI is forcing teachers to confront an existential question. The Washington Post.
Satariano, A. (2023, December 8). E.U. Agrees on Landmark Artificial Intelligence Rules. The New York Times. https://www.nytimes.com/2023/12/08/technology/eu-ai-act-regulation.html
Seah, S. (2023). Responsible practices for responsible libraries: The role of libraries in a world of generative AI.
Service, R. F. (2020). “The game has changed.” AI triumphs at protein folding. Science, 370(6521), 1144–1145. https://doi.org/10.1126/science.370.6521.1144
Siemens, G., Marmolejo-Ramos, F., Gabriel, F., et al. (2022). Human and artificial cognition. Computers and Education: Artificial Intelligence, 3. https://www.sciencedirect.com/science/article/pii/S2666920X22000625
Smolansky, A., Cram, A., Raduescu, C., Zeivots, S., Huber, E., & Kizilcec, R. (2023, July). Educator and Student Perspectives on the Impact of Generative AI on Assessments in Higher Education. Proceedings of the Tenth ACM Conference on Learning@Scale ‘23. https://doi.org/10.1145/3573051.3596191
Spector, C. (2023, October 31). What do AI chatbots really mean for students and cheating? Stanford Graduate School of Education. https://ed.stanford.edu/news/what-do-ai-chatbots-really-mean-students-and-cheating
Turk, V. (2023, October 10). How AI reduces the world to stereotypes. Restofworld.Org. https://restofworld.org/2023/ai-image-stereotypes/
Verma, P. (2023, December 17). The rise of AI fake news is creating a ‘misinformation superspreader.’ The Washington Post.
Villegas-Ch, W., & García-Ortiz, J. (2023). Toward a Comprehensive Framework for Ensuring Security and Privacy in Artificial Intelligence. Electronics, 12(18), Article 18. https://doi.org/10.3390/electronics12183786
von Garrel, J., & Mayer, J. (2023). Artificial Intelligence in studies—Use of ChatGPT and AI-based tools among students in Germany. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-02304-7
Weidinger, L., Rauh, M., Manzin, A., & Marchal, N. (2023). Sociotechnical Safety Evaluation of Generative AI Systems. Google DeepMind. https://arxiv.org/pdf/2310.11986.pdf
White House. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
WIPO. (2024). What is Intellectual Property (IP)? https://www.wipo.int/about-ip/en/index.html
Zwingmann, T. (2023, June 9). Demystifying AI: A Practical Guide to Key Terminology. AI for Business Growth. https://ai4bi.beehiiv.com/p/demystifying-ai-practical-guide-key-terminology