Artificial Intelligence and Libraries

This content is drawn from a report authored by the AU Library's Artificial Intelligence Exploratory Working Group. You can read the group's full report, which surveys the current state of AI and makes recommendations to library leadership, in the American University Research Archive.

AI and Intellectual Property

Intellectual Property (IP) law has many aspects: copyright, patent, industrial design (design patents), trademark, trade secrets, and geographical indications (WIPO, 2024). Only two of these aspects, copyright and patent, affect our community meaningfully, and thus we will focus on them. The role of the libraries is to educate and guide our community members, not to provide legal advice in this developing and technical area of law. Our guidance must be general.

We see two basic scenarios of concern. The first is a community member using an AI application to create something (a paper, song, artwork, invention, etc.) in a way that infringes on the intellectual property rights of others. The second is someone else using an AI application trained on the work of our community members to create something that infringes on their intellectual property rights.

In both scenarios, the question of ownership of the intellectual property right is key. Who owns the intellectual property right in the created work? Is it owned by the person using the AI tool? Is it owned by the AI? Or the owner/developer of the AI? Or is it not copyrightable and immediately in the public domain? Do the copyright owners whose works were used to train the AI have an ownership interest here? This is a “new era” in copyright law, and we do not yet have the answers (Broom, 2023; McKendrick, 2022; Palace, 2019). The United States is not the only jurisdiction grappling with these issues (Kretschmer et al., 2022).

Can the AI even “own” any right? The U.S. Copyright Office has been steadfastly opposed to registering copyright claims by non-human entities (McKendrick, 2022). The courts have, so far, agreed. In one notable case, the “Monkey Selfies” case, the U.S. Court of Appeals for the Ninth Circuit held that a primate that took photographs of itself could not have a copyright claim (Naruto v. Slater, 2018). While no one is yet asserting AI ownership of an intellectual property right, it is an area to watch.

For our community members using AI, we need to guide them so that they are aware of the issues their use of AI entails. Is AI just a tool, like Word, Photoshop, or Audacity, or is it something else, something more independent of its user? Is the AI merely mimicking the works it was trained on, or is it transformative? Is their newly created work truly “theirs”? While this issue crosses over into the area of plagiarism concerns that have motivated much of the discourse on AI in higher education, it is distinct.

Researchers, librarians, and teachers in academia often rely on the “fair use” doctrine to shield their use of existing works for teaching, scholarship, criticism, and commentary from claims of copyright infringement. Publishers have fought against fair use consistently (McSherry, 2017). The New York Times has launched a new attack on fair use against the developers of the major AI Large Language Model (LLM) tools. In a complaint filed in December 2023, the Times claims the defendants' use of the copyrighted work of the New York Times is not fair use because it is mimicry and not transformative, and the output is in direct competition with their work (New York Times v. Microsoft et al., n.d.). This case must be watched carefully because of the potential detrimental effects it could have on academic scholarship.

For our community members concerned about their own intellectual property rights being infringed by AI, an understanding of these issues is also important. Fair use is a two-edged sword. What protects academic access to information can also be used to justify the use of copyrighted works for the training of AI tools. We will need to carefully monitor how the courts parse these issues to guide our community well.

We will need to educate our community on the need to fully understand the terms and conditions of any AI tool they use, as well as of any data sets, APIs (application programming interfaces), and the like that they use. We will need to work with other AU offices, including University Counsel and OIT, to help our researchers and creators safeguard their own work and avoid infringing on the work of others in this confusing and rapidly changing environment.