
Academic Library Workers in Conversation

AI in Academic Libraries, Part Two

Resistance and the Search for Ethical Uses

Ruth Monnier is the head of research and instructional services at Mount Saint Joseph University, email: ruth.monnier@msj.edu. Matthew Noe is the lead collection and knowledge management librarian at the Harvard Medical School Countway Library, email: matthew_noe@hms.harvard.edu. Ella Gibson is the online learning and instruction librarian and assistant teaching professor at the University of Colorado-Colorado Springs, email: egibson3@uccs.edu.

Academic Library Workers in Conversation is a C&RL News series focused on elevating the everyday conversations of library professionals. The wisdom of the watercooler has long been heralded, but this series hopes to go further by minimizing barriers to traditional publishing with an accessible format. Each of the topics in the series was proposed by the authors, who were given space to explore them. This issue’s conversation is the second of two parts discussing generative AI and the many concerns that the authors already see playing out in their organizations.—Dustin Fife, series editor

Ruth Monnier: We have talked about the most urgent concerns around generative AI and its adoption. How can individuals push back, encourage slow, intentional adoption, or completely resist generative AI? Do you agree with Violet Fox’s recommendations in her zine titled A Librarian Against AI; or, I Think AI Should Leave?1

Matthew Noe: I love Violet’s zine so much; I’m currently giving it out to students during our monthly zine workshops as an example! I think all of her recommendations have merit, but the ones I’ve been most engaged with are “opt out,” “ask questions,” and “harsh the buzz.” Opting out is pretty straightforward. As of this interview, I have still never knowingly used ChatGPT, and I avoid engaging, as much as possible, with the built-in AI tools showing up everywhere. This tactic is going to become increasingly difficult unless and until the world is convinced that embedding environmentally destructive nonsense machines into everything is a bad idea, but refusing to use it is a key step along the way, I think.

Ruth: Yes! It is so hard to avoid using it, even when you don’t want to. I have been adding “-ai” to all my search engine queries. What an annoying extra step for a product change that no one asked for!
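
For anyone who wants to automate that extra step, here is a minimal sketch in Python. It assumes a search engine that reads its query from the standard q parameter (as Google does); the helper names are mine, just for illustration:

    from urllib.parse import urlencode

    def add_ai_exclusion(query: str, exclusion: str = "-ai") -> str:
        # Append an exclusion term like "-ai" to the query text.
        return f"{query} {exclusion}"

    def search_url(query: str) -> str:
        # Build a search URL; "q" is the standard query parameter.
        return "https://www.google.com/search?" + urlencode({"q": add_ai_exclusion(query)})

    print(search_url("open access publishing"))
    # https://www.google.com/search?q=open+access+publishing+-ai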

Matthew: As for asking questions and harshing the buzz, my approach kind of combines the two. Every opportunity I have, I am asking why I should want to use generative AI, reminding people about the environmental cost (and asking how it squares with our sustainability commitments), and generally being skeptical of the idea that if only we learn to use this technology, we’ll “save ourselves” from obsolescence. US Census data suggests we’ve lost nearly 80,000 librarian positions since 2006, and the trend has been almost universally downward over that period.2 Over that same period, we’ve been chasing the hype cycle from one thing to another, and I’d hardly call doing so a success for the profession. Let’s learn from past mistakes and focus on our core values and our core missions, and not give our jobs over to technology that tries, and fails, to do what we do.

Ruth: Librarians are so important to creating communities and providing human connection—in the research process and beyond. Technology cannot create a relationship; it may facilitate one, but it does not create one.

Ella Gibson: Matthew and Ruth, in my opinion, fully opting out or even avoiding it is really hard at this point. It’s integrated into so many things that sometimes I don’t even realize it. Or I’m seeing faculty across campus encourage students to use it as part of their assignments, and then students come to me asking questions. If it’s from their professor, I can’t really say no to following the instructions. I’m also not sure faculty realize that students don’t understand the risks of using AI and then come to others who are equally lost.

Ruth: Ask questions and harsh the buzz are great options for those who don’t want to be known as AI resisters or luddites, unlike you, Matthew, and me! One way to harsh the buzz is refusing to interact with generative AI media (images, videos, etc.) or things that appear to be created with generative AI. Engagement feeds the algorithms, so withholding engagement could be slightly helpful. Another thing I tried during Giving Tuesday was to ask organizations which funds I could donate to that did not intentionally use or support generative AI. Asking questions of vendors and organizations is a way to push back and show that individuals are not interested in generative AI being a part of everything. We should be advocates for policies that promote clarity about informed consent and data usage, from course syllabi and class assignments to IRB projects and employers’ surveys. Power dynamics can be hard to navigate, and clear policies help the less powerful. It should be made clear whether any provided information (data) will be used as generative AI inputs or for training.

Matthew: We’ve talked about our concerns and a little about where our institutions are with this technology. What would you all like to see from our professional organizations on this topic? So far, I just keep seeing a lot of webinars on adopting it as quickly as possible… surely we can ask for better?

Ruth: From librarian organizations, I have seen webinars, conferences, and even task forces dedicated to supporting generative AI usage. Any mention of the ethics or concerns (including FERPA compliance) is brief and frequently at the end, to the point where it feels like the expectation is that busy and overworked individuals must investigate harms and ethics on their own time. I agree 100 percent with you, Matthew; we can and should ask for better. Critical thinking is a part of our job, and it does not seem that we are critically adopting this technology. Generative AI is a collective problem for society, so at a bare minimum, organizations must provide space and oxygen for resisters of generative AI.

Ella: I’m seeing a lot of the same things as you, Ruth, and for me, I’d like to see more thought put into why we’re having some of these conversations on supporting generative AI and what its adoption means for users. When I was a teacher, I felt like we were still having conversations about the pros and cons of having learners adopt and use different educational technologies. Many of those tools had been around longer than generative AI, and the conversations about the implications of student use paid far more attention to potential harms. It just seems like the excitement for something new is superseding common sense, and professional organizations and others are essentially promoting that.

Ruth: When I obtained my bachelor’s in education, the importance of building a classroom community and personal relationships with students was emphasized. Yet, as you mentioned, Ella, there are still ongoing discussions about current educational technologies and tools, even before we add in the generative AI integrations to those tools and the standalone generative AI tools. The constant, invasive surveillance via technology3 in education detracts from creating authentic relationships. And this is on top of the decline in critical thinking and the cognitive offloading being discussed in the profession right now.4

Matthew: Right! And this isn’t a problem limited to just generative AI technology. We’ve seen evidence that overreliance on things like GPS can have negative impacts on spatial awareness, wayfinding, and multiple types of memory.5

Ruth: Technology in general, and generative AI especially, is helping society lose touchpoints of human connection and, more broadly, thinking skills. Why should I need to use generative AI to figure out what I want to eat for the week?

Ella: Obviously, Ruth, you need AI to make your grocery list because you just can’t do it yourself. In all seriousness, though, along that line of thinking, at what point is this data being recorded or saved? Who needs to know what I’m eating or how I want to write a letter? Why is this being tracked? What is it eventually going to be used for?

Ruth: As a resister, luddite, and lamplighter on generative AI, I am frequently asked: Would you ever (knowingly) use generative AI? Personally, I feel that when this question is asked, it is a reframing of the webinars and topics of “How to ethically use generative AI [product name],” and that those who ask might believe there are ways to ethically use an unethical technology. How would you answer that question, Matthew and Ella?

Matthew: I get this kind of “Well, what about xyz” response constantly, Ruth! Depending on the setting, and the asker, I take one of two approaches to answering. If I think they are asking in good faith, I’m willing to entertain any and all scenarios and discuss potential good uses of generative AI. For example, right now I can’t think of a scenario in which I would want to use this technology in librarianship, but if I switch lenses to drug development, I can imagine its potential value. “Potential” is a key word here, though, since many of the proclaimed victories of this technology have been hyped up and/or outright misinformation. If, as is more often the case, I’m being asked these questions by someone who either just wants to win an argument or is trying to force adoption on me (two approaches we might call bad faith), I stick to my high-level objections: notably, that the environmental and labor costs of this technology, as we stare down climate change, mean I am not interested.

Ella: Ruth and Matthew, I’m not sure if I’ve ever been asked this, and honestly, I’ve tried to engage as much as possible because I want to know as much as possible. I have used Copilot at work, but it’s led to too many questions about security, privacy, and other concerns for me to want to use it expansively. It’s a weird conundrum. It’s not in any of my workflows, and I definitely don’t see myself doing anything actively with AI, especially in my instruction practices. I know I’m interacting with it as a byproduct of others’ use and in connection to other work, though.

Ruth: Ella, I understand it isn’t currently in your workflows and that we sometimes interact with it through others’ workflows, but I think there are too many ways that we end up perpetuating the harms and biases built into the training data, and the lack of consent throughout these processes.6 Because of the lack of consent in gathering the data, and the deliberate disregard of people’s intentions and copyright in scraping the open web, I really worry about another Henrietta Lacks situation, where the harm continues in unknown ways and the impact lasts.7

Matthew: Now, the possible value of generative AI as an adaptive technology is one thing that gives me pause. For my own part, I’m not convinced that the benefits outweigh the harms right now, but there are others in the disabled communities who think the benefits are worth it. I hope to see more discussion about this in the future—we need it desperately—but I caution anyone against making broad statements about how disabled folks feel about generative AI. There isn’t one view, and stereotyped thinking isn’t going to help anyone.

Ruth: How do you feel when you see things like Zoom’s “This meeting is being transcribed for AI Companion,” where the only option is to click “Ok”? There is no way to know who is using it or to opt out of it. Just like when I saw, on a report from a DigitalCommons repository, that a referral to my published work came from perplexity.ai: negative to zero excitement.

Ella: I mean, either you can’t opt out, or you’re automatically opted in and you don’t even know it. I think it was LinkedIn that automatically enrolled me in their “Data for Generative AI Improvement”? Why would I want you to collect even more data from me than I already know you collect?

Matthew: So true, Ella, and if I had one final thought, or plea, for readers, it would be this: seek consent in professional spaces before you enable AI tools. While we might disagree about the value of generative AI, or how it interacts with many of our profession’s core values, I don’t think anyone can deny that these tools pose major privacy risks, and it should be up to each individual whether to take that risk or not. So, before you enable whatever AI companion you’ve got in mind for virtual meetings, for editing a paper, or for summarizing a conversation, ask for and receive consent from every person involved. And respect their answer.

Notes

  1. Violet Fox, A Librarian Against AI; or, I Think AI Should Leave [zine], November 2024, https://violetbfox.info/against-ai/.
  2. DPE Research Department, “Library Professionals: Facts & Figures,” Department for Professional Employees (DPE) AFL-CIO, 2024, https://www.dpeaflcio.org/factsheets/library-professionals-facts-and-figures#_ftn1.
  3. “Studying Under Surveillance: The Securitisation of Learning,” Privacy International (PI), November 7, 2024, https://privacyinternational.org/long-read/5463/studying-under-surveillance-securitisation-learning.
  4. Michael Gerlich, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” Societies 15, no. 1 (2025), https://doi.org/10.3390/soc15010006.
  5. Louisa Dahmani and Veronique D. Bohbot, “Habitual Use of GPS Negatively Impacts Spatial Memory During Self-Guided Navigation,” Scientific Reports 10, no. 6310 (2020): 1–14, https://doi.org/10.1038/s41598-020-62877-0; Rebecca Solnit, “I told you it was bad. Paper maps are good. Winging it is good. Getting mildly lost is also good,” Facebook post, July 6, 2022, https://www.facebook.com/rebecca.solnit/posts/pfbid02DMWkjpbvYJt3V6pC71zvMonHDoxLEJ5NpzvZP2YSWPV6C2LR7Ac55jSczVVC7sfHl.
  6. Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018).
  7. Rebecca Skloot, The Immortal Life of Henrietta Lacks (Crown, 2011).

Copyright © Ruth Monnier, Matthew Noe, Ella Gibson
