Academic Library Workers in Conversation
AI in Academic Libraries, Part One
Concerns and Commodification
© 2025 Ruth Monnier, Matthew Noe, and Ella Gibson
Academic Library Workers in Conversation is a C&RL News series focused on elevating the everyday conversations of library professionals. The wisdom of the watercooler has long been heralded, but this series hopes to go further by minimizing barriers to traditional publishing with an accessible format. Each of the topics in the series was proposed by the authors, who were given space to explore it. This conversation is the first of two parts that will discuss generative AI and the many concerns that the authors already see playing out in their organizations.—Dustin Fife, series editor
Ruth Monnier: Thanks for joining me for this conversation from the East Coast to the Rocky Mountains! Everyone seems to be talking about generative AI—ChatGPT, Claude, Gemini, Copilot, etc.—whether it is newspaper articles touting the benefits of its usage or entire academic conferences devoted to it. Generative AI is everywhere these days—from classrooms to vendor products to Zoom rooms to social media platforms scraping content. Most businesses and universities are jumping on the bandwagon to adopt and utilize this new and rapidly evolving technology. What do you think about generative AI usage within libraries and in society?
Matthew Noe: Y’all, the hype cycle is real, and here it often feels like the choices are to get on board or lie down on the tracks and become part of a philosophy joke. That said, it isn’t exactly universal across academia. High up in the administrative chain, there is a lot of buzz about “potential” and “we have to stay relevant,” but that has come with a lack of clear guidelines, directions, and requirements, leaving each school, and really each department, to make its own choices. The main tool adopted at my workplace so far has been ChatGPT, specifically OpenAI ChatGPT Edu, which IT is providing access to and support for. Others, such as DynaAI—an optional add-on for DynaMedex—are in the works.
As for what I think about this technology . . . I’ve never been called a Luddite more often in my life, but it is a badge I am proud to wear right now. Why is that? Well, despite the common usage of the term to indicate “backward” feelings toward technology, the Luddites were less concerned with the technology itself than with who owned it and the power that ownership imparted. Rather than sitting idle while their jobs were automated, all for the benefit of factory owners, they fought back—through community building, education, and, yes, the occasional destruction of property. (If you’d like to learn more about this, I highly recommend Brian Merchant’s Blood in the Machine.1)
Ruth: Matthew, I agree there is a lack of universal guidelines and practices for generative AI, or even standardized ones at individual institutions and across higher education. When guidelines exist, they seem to be focused only on student usage (i.e., syllabi language) and no one else’s. Generative AI disclosure language should be provided to students, too! Honestly, I am a little shocked that there haven’t been more liability claims or lawsuits2 over HIPAA and FERPA violations from generative AI usage yet. It definitely feels like we are trying to build the concept of a plane while we are flying it.
Ella Gibson: Ruth, it’s been really interesting to see how quickly people want to adopt and use generative AI—particularly within the library. At my institution, library faculty and staff are actually testing out Copilot to see how to integrate it into our workflows and what that could look like in the future. Weirdly, it doesn’t work with the desktop version of anything I currently have, which I think speaks to some of the ways this technology is still developing and isn’t really fully understood or integrated.
Ruth: Ella, I agree it is weird how fast universities, corporations, and individuals have pushed for the adoption and usage of generative AI, especially given the rhetoric about “how bad” social media and cellphones are within a K-12 environment. What makes this technology unique or special that we as a society feel the immediate need to adopt it, compared to previous technologies? To be fair, there have been minor societal pushbacks, e.g., the Taylor Swift deepfakes in early 2024, the 2024 Olympics commercial in which Gemini writes a letter to an athlete on a little girl’s behalf, and the initial rollout of Google Glass. Beyond those, however, generative AI is already baked into most products, from Grammarly to LinkedIn to Microsoft Office to Adobe to the iPhone 16, without users having any option to opt in to using generative AI or to their data being used for it.
Matthew: Ruth, you’ve hit the nail on the head with some of my worries about the rapid embeddedness of this technology. The risks of deepfakes seem to be readily acknowledged in mainstream discussions, but the other ethical problems with GenAI—and there are an abundance of them—continue to receive far less attention. For me, both personally and professionally, the environmental costs of this technology are alarming and lead me to question how we can possibly accept it as a new normal, particularly in light of our own professional organizations, and many of our institutions (like mine), making pledges to value sustainability and reduce consumption. While exact numbers for both water and electricity are hard to come by (the companies aren’t exactly forthcoming and regulators haven’t forced the issue—yet), researchers from UC Riverside estimate that each 100-word email generated by ChatGPT-4 costs a bottle of water (519 milliliters) and the electricity required to power 14 LED light bulbs for an hour (0.14 kilowatt-hours).3 When you take into account how many emails people send in a day, how often these tools (fail to) get it right the first time, and that this is just email, the numbers get large, quickly.
Is there a particular concern that is foremost in your minds?
Ruth: The rough estimates of energy usage are scary, Matthew. I think if generative AI used less energy, the companies would be out in the street telling us so. Also, I find it interesting how many individuals are unaware of the environmental costs, even as companies buy up nuclear power plants to run generative AI.
Matthew: This is part of the worry environmentalists have had for decades, too. We are finally seeing gains in renewable energy generation, but are immediately using more energy, making it harder to turn off nonrenewables.
Ella: I think it’s scarier that library vendors are sending associates to campuses to talk about their AI products, and when you ask them about the environmental costs, they aren’t really sure how to approach it. There were a handful of questions asked during a recent presentation about the environmental costs, and while the representative was definitely newer, they were there to sell us on the product and its great use in the library. But if this is a genuine concern for the library, and the campus promotes itself as “green,” then a vendor should probably be able to answer questions about that.
Ruth: Matthew, you asked what concern is at the front of my mind, and the problem is all of them. The harms of generative AI intersect with so many elements of society. I guess the more urgent concerns are the ones with the longest, most continuous harm: sustainability, which you mentioned; the decrease in human interaction; and the power we are giving these technology companies for a half-finished product, without any regulation, all in the interest of “Innovation!” Technology companies are actively blunting individuals’ ability to reason and think. Some students don’t understand the difference between ChatGPT and a Google search. If you don’t have a foundational understanding of the world or a content area, you don’t know that what generative AI produces can often be wrong. Its results simply sound right.
Matthew: I’ve even had people here discuss using ChatGPT to self-diagnose a health problem. That is something we tell people not to use even Google for, and that was before misinformed AI summaries.
Ruth: With generative AI, deepfakes, and the easy reproduction of name, image, and likeness (NIL), someone could be framed or scammed, and misinformation and malicious content spread even more easily. Character.ai4 and similar chatbots highlight how generative AI can harm individuals and others. Students and individuals don’t realize how much data they are putting into generative AI and how much of their privacy, and the privacy of others, they are giving up to the models, especially the free ones. Without much-needed statutes and heavily enforced regulation, generative AI is the death knell of privacy, because if anything was ever captured of you, you cannot stop the models, or other people, from using your NIL.
Ella: Privacy is a huge concern, and students especially aren’t prepared to understand it. They haven’t been taught about the permanence of the things they share and post, and they don’t fully understand how those things can follow them around for the rest of their lives. Because of the way we talk about the internet and “the cloud,” what people post online often feels transient, in contrast to the reality of its potential impacts, whether social or material. This is particularly true for young adults, here primarily undergraduate students, who aren’t psychologically there yet: their brain development is still ongoing, and they don’t know all the implications of their choices today. To me, this is an issue because not only are we leaving students in a vulnerable spot, but the population at large doesn’t seem to be very knowledgeable about this either.
Ruth: Great point, Ella. If the precedent of social media and government regulation—it took Instagram over 14 years to add its setting for 13-year-old profiles—is anything to go by, we cannot and should not expect technology companies and generative AI to regulate themselves or to think of the common good while developing their new models. Furthermore, the US has a federal government that seems primed to oppose any sort of regulation, which raises a whole host of concerns. One example: how are you or anyone else to challenge a decision that was made by generative AI when these tools are proprietary black boxes? Yet there are government agencies that are using generative AI to make decisions about court claims5 in the name of efficiency.
Matthew: Ruth, that “black box” piece is such a huge concern! How can we advocate for the use of this technology in research when there is no way to replicate its findings? We can’t even promise that the same prompt will produce the same output!
Ruth: It can be argued that generative AI was built on copyrighted materials and intellectual property theft. Not to mention the literal child abuse material used to train the models.6 Hence all the lawsuits.7 Yet the companies claim that if it is on the web, “it is fair use”; meanwhile, we have seen individuals laid off8 and individual creators’ incomes decrease.9 If the technology companies win the lawsuits, intellectual property and copyright as we know them will be completely changed, and precedent will no longer matter. It seems the power is in the hands of the technology companies and not individual members of society. In other words, “tech bros” are becoming tech oligarchs.
Matthew: Ruth, that “tech oligarchs” piece is such a cause for concern and should keep our profession up at night. And it is almost certainly by design, despite all the claims that this technology is “democratizing,” which, even if we weren’t already seeing a shift to subscription-based access to the tools, was never true. I rather liked the way Ali Alkhatib put it recently: “AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to ‘democratize’ AI routinely conflate ‘democratization’ with ‘commodification’.”10 Now, as library professionals, we have to help find an intentional way forward, one that focuses on ethics and does not acquiesce to technological oligarchies, but much more on that in part 2, coming up in the May issue.
Notes
- B. Merchant, Blood in the Machine: The Origins of the Rebellion Against Big Tech (Little, Brown, 2023).
- C. Papst, “Maryland School Potentially Violates Student Privacy Rights by Using AI Detector,” Fox News 45, October 7, 2024, https://foxbaltimore.com/news/project-baltimore/maryland-school-potentially-violates-student-privacy-rights-by-using-ai-detector-anne-arundel-county-tara-davis-ferpa-gptzero; D. Ocampo, “AI Class Action Knocks on California Court’s Door,” FMG Blogline, April 10, 2024, https://www.fmglaw.com/cyber-privacy-security/ai-class-action-knocks-on-california-courts-door/; M. Vogel, M. Chertoff, J. Wiley, and R. Kahn, “Is Your Use of AI Violating the Law? An Overview of the Current Legal Landscape,” N.Y.U. Journal of Legislation & Public Policy 26 (2024), https://www.equalai.org/wp-content/uploads/2024/09/Vogel_et_al_Sep_13_2024.pdf.
- P. Verma and S. Tan, “A Bottle of Water per Email: The Hidden Environmental Costs of Using AI Chatbots,” Washington Post, September 18, 2024, https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/.
- B. Allyn, “Lawsuit: A Chatbot Hinted a Kid Should Kill His Parents over Screen Time Limits,” Morning Edition, December 10, 2024, https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-ai-lawsuit.
- “AI Expedited Claims Appeals Process in Nevada,” StateScoop.com, December 3, 2024, https://statescoop.com/video/timothy-galluzi-nevada-cio-ai-2025/.
- D. Thiel, “Investigation Finds AI Image Generation Models Trained on Child Abuse,” Stanford University Cyber Policy Center, December 10, 2023, https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse.
- K. Knibbs, “Every AI Copyright Lawsuit in the US, Visualized,” Wired, December 19, 2024, https://www.wired.com/story/ai-copyright-case-tracker/.
- C. Thorbecke, “AI is Already Linked to Layoffs in the Industry that Created It,” CNN, July 4, 2023, https://www.cnn.com/2023/07/04/tech/ai-tech-layoffs/index.html.
- US Federal Trade Commission, “Generative Artificial Intelligence and the Creative Economy Staff Report: Perspectives and Takeaways,” December 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/12-15-2023AICEStaffReport.pdf.
- A. Alkhatib, “Defining AI,” Ali Alkhatib (blog), December 6, 2024, https://ali-alkhatib.com/blog/defining-ai.