Envisioning AI’s Role in Libraries

Perspectives from an LIS Student, a Library Director, and a University Librarian

Russell Michalak is director of the library and archives at Goldey-Beacom College, email: michalr@gbc.edu. Trevor A. Dawes, a librarian, educator, and consultant, is the co-founder of Inclusive Knowledge Solutions, email: tadawes@udel.edu. Ava Wallace is an MLIS Student at San José State University, email: ava.wallace@sjsu.edu.

Generative artificial intelligence (GenAI) is no longer a distant horizon for libraries—it is already reshaping how we teach, preserve, and connect communities with knowledge. The question is not whether AI belongs in libraries but rather how we integrate it in ways that are ethical, equitable, and aligned with our missions. This article is framed as a dialogue among colleagues at three different career stages, offering distinct but complementary perspectives. As a current LIS student, Ava Wallace brings the urgency of an emerging professional grounded in social justice and cultural preservation. At the mid-career stage, Russell Michalak contributes the perspective of a library director, emphasizing ethical integration and scalable strategies that position libraries as leaders in AI literacy and institutional change. Trevor A. Dawes, a senior leader in the profession, offers national insights on how library organizations can shape responsible adoption. Together, our perspectives—emerging, mid-career, and senior leadership—highlight both opportunities and challenges, pointing toward a shared vision for how libraries can remain human-centered in an AI-driven world.

The Promise and Peril of GenAI

GenAI presents both promise and peril. On one hand, Russell has witnessed students use it to unlock new forms of learning—revising narrative essays with greater confidence, summarizing difficult articles in accessible ways, or even visualizing concepts that help them grasp ideas more fully. These are real moments of empowerment in which GenAI serves as a tool to extend creativity and deepen understanding.

For Russell, the promise is real, but so is the peril. AI can extend creativity and deepen learning, but without equity at the center—without closing the gap between “free” and “full”—it risks becoming another barrier rather than a bridge. One example came during a first-year writing session where a student used Grammarly’s chatbot to “make the essay more juicy.” The output was polished, energetic, and—at first glance—impressive. But when the class compared the revision with the student’s original draft, the room grew quiet. The student admitted that though GenAI had dressed up the prose, it no longer sounded like them. Their voice, their humor, their perspective had been flattened into something generic. This realization sparked a powerful discussion about authorship, authenticity, and what it means to develop a personal academic voice in the age of GenAI.

For Ava, these questions are powerful reminders not to get caught up in “AI hype” or let the technical capabilities of AI overshadow what is lost through automation. She also highlights the tension from another angle—underscoring both opportunity and risk for the profession. She sees the potential for GenAI to accelerate cataloging and clear long-standing backlogs, freeing professionals to focus on community programming and direct patron support. Yet she also voices unease at what is risked in this tradeoff: “In the worst case, increased automation depersonalizes services, alienates users, and reduces entry-level LIS positions.” For Ava, AI is not just about technical efficiencies; it also carries implications for labor, identity, and the professional pathways that sustain librarianship. Her reflections remind us that in fields already strained by funding cuts, we must be vigilant about whether GenAI strengthens or erodes our collective capacity to serve.

Trevor, in turn, emphasizes a different dimension of promise—one centered on access and discovery. He highlights AI’s ability to make the invisible visible: transcribing handwritten diaries, translating materials across languages, and generating metadata that surfaces collections once hidden behind poor indexing. These tools could democratize access to knowledge, making it possible for a student searching a library catalog to discover a family history letter or a rare archival document that would otherwise remain buried. For Trevor, AI is an accelerant to discovery, extending the reach of libraries to patrons who may not otherwise benefit from specialized knowledge or language expertise.

Together, these perspectives are a reminder that GenAI is neither inherently progressive nor inherently harmful—it is a tool whose value is determined by how we choose to use it. Russell’s concerns about equity, Ava’s cautions about labor and depersonalization, and Trevor’s vision for expanded access converge on a single point: Libraries must steer adoption with intention. If we allow efficiency or market pressures to dictate use, we risk undermining our mission. But if we ground implementation in our enduring values—equity, preservation, and service—GenAI can be shaped into a resource that strengthens, rather than weakens, the human core of librarianship.

Equitable Access as the Central Challenge

The defining challenge of AI in libraries is equitable access. Tools are only as powerful as the communities that can use them, and when access is limited to those who can afford personal subscriptions, inequity deepens. Free versions exist, but free is not the same as full—“freemium” models often restrict functionality or monetize student data, leaving those without means at a disadvantage. At Goldey-Beacom College, a Hispanic-Serving Institution where many students are first generation and balancing work with school, equity is a lived necessity. This is why Russell prioritized securing campuswide licenses: They ensure that every student—whether first-generation, working-class, or international—has the same opportunity to engage with GenAI. Equitable licensing is not just a technical choice; it is a justice issue, one that prevents GenAI literacy from becoming stratified along economic lines. It also reflects CALM leadership in action: Communication to explain clearly what students can expect from these tools, Adaptability to adjust licensing and access as needs evolve, Learning to ensure both students and staff build capacity alongside technology, and Management to embed ethics and accountability into every decision.

Ava extends Russell’s point by situating access challenges within broader structural realities of funding and staffing. Libraries and archives, she notes, routinely face budget cuts, staff shortages, and mounting workloads. Against this backdrop, AI can look like a lifeline. But Ava is clear eyed about the danger of seeing it as a substitute for sustainable investment: “Given that public services are in a constant state of funding crisis, I don’t blame people for seeking band-aids. But that is how we should discuss them—as emergency measures while we advocate for the resources communities truly need.” For her, equitable access means not just distributing tools but also ensuring that communities are not forced to settle for technology in place of people, care, and long-term support.

From Ava’s focus on local and structural challenges, Trevor broadens the conversation to the national stage, pointing to the role of professional organizations in shaping equitable frameworks at scale. He argues that associations like ARL and ACRL must build strategies that support all institutions, not only the best-resourced. Cooperative purchasing agreements modeled on database consortia could keep smaller libraries from being priced out of AI adoption. Just as importantly, Trevor emphasizes nonnegotiable privacy standards so that students and patrons at every institution can trust that their data is secure. Shared infrastructure and training, he suggests, would also help libraries avoid duplicating costs and reinventing the wheel. His vision is of a professional community that ensures equity not just within institutions but also across the ecosystem of libraries.

The lesson from these perspectives is clear: Equity must remain at the center of AI adoption. Russell’s work shows how campus-level licensing can level the playing field, Ava highlights the local risks of treating AI as a replacement for funding, and Trevor outlines national strategies to prevent inequities from widening. Whether through institutional policies, collaborative consortia, or professional standards, libraries must ensure that AI access is not stratified along lines of privilege. Only then can AI serve as a tool of democratization rather than division.

Teaching Literacy, Preserving Human Judgment

The greatest risk of GenAI in libraries is not the technology itself but rather the erosion of the critical thinking skills our profession has always nurtured. Students can already generate text, summaries, and even citations at the click of a button—but do they understand what makes an argument credible or a source trustworthy? Russell has embedded AI literacy into the first-year curriculum precisely because he views it as inseparable from information literacy. When introducing students to AI tools, he doesn’t frame them as shortcuts but rather as opportunities to ask deeper questions: Where did this information come from? What biases might be embedded here? What data were excluded? Students learn that GenAI is not a neutral oracle but rather a constructed system whose outputs must be tested, interrogated, and sometimes challenged.

Trevor reinforces this approach, warning that without intentional guidance, libraries risk losing their central role in cultivating critical thinking. He observes that AI’s capacity to produce polished, plausible-sounding responses makes it especially dangerous in educational contexts. “The erosion of information literacy and critical thinking skills is my deepest concern,” he explains. “When AI can instantly generate plausible-sounding answers, many patrons may lose the ability to evaluate sources or understand the difference between synthesis and analysis.” The challenge is not just about helping students recognize bad information; it is also about preserving the deeper habits of mind that make scholarly inquiry possible. His warning reminds us that libraries must step into an even more active role as educators, reinforcing the difference between knowledge and noise.

Responding to Trevor’s concern about the erosion of critical thinking, Ava adds another layer, expanding the definition of literacy to include ethical discernment as well as functional use. As she puts it, drawing on her perspective as an archival intern: “Literacy does not only mean how to use something; it also means how to think critically about it, how to assess its accuracy, and how to determine when it is or is not an appropriate tool to turn to.” Her words highlight an often-overlooked dimension: Literacy is not adoption for its own sake but rather the cultivation of judgment. Ava’s reminder resonates with Russell because it recenters the human role in determining when speed, automation, or convenience should yield to slower, more intentional practices of learning and preservation.

Libraries are uniquely positioned to preserve human judgment in an AI-saturated world. We have long been trusted as guides who help patrons navigate evolving forms of information. Embedding AI literacy early and consistently in our teaching ensures that students gain not just technical competence but also the discernment to know when and why to use AI—and when to resist it. Russell’s emphasis on embedding AI literacy through the CALM framework, Trevor’s concern about losing critical thinking, and Ava’s insistence on critical discernment converge here: Literacy must be our anchor. By holding fast to this mission, libraries can ensure that the rapid adoption of AI does not erode our most essential human capacities.

Visions for 2030

By 2030, Russell sees libraries stepping forward as national anchors of AI literacy, shaping how communities across the country learn, question, and use these tools responsibly. Our role will not simply be to adopt AI but also to ensure its integration reflects equity, transparency, and care. Just as libraries once bridged the digital divide, we will bridge the AI divide—securing access that is not determined by income, geography, or institutional size. This future will require leadership grounded in the CALM framework. In this vision, campuswide licenses, shared training models, and national standards become not just technical fixes but also expressions of CALM leadership. Libraries become the places where technology is democratized and where the public learns not only how to use AI but also how to preserve judgment, voice, and cultural memory in an AI-driven society.

Russell’s vision emphasizes equitable access and the CALM framework, and Trevor shares a similar aspiration but frames the future as a cultural transformation of libraries into “vibrant community learning laboratories where AI handles routine tasks and librarians focus on critical thinking, cultural preservation, and human connection.” For Trevor, the promise of AI is not simply in what it can do but also in what it frees us to do—create more space for librarians to serve as teachers, mentors, and cultural stewards. His vision situates libraries as both technologically advanced and profoundly human, places where efficiency enables deeper engagement with the communities they serve.

Ava, however, challenges both of these visions with a reminder that even as we look ahead, efficiency cannot be our only measure of progress. For her, the future of libraries must also be grounded in justice, care, and preservation. She imagines institutions that adopt AI thoughtfully, always balancing technological innovation with commitments to social justice and cultural preservation. Efficiency, she warns, must never eclipse care. For Ava, the library of 2030 should not be measured by how quickly it can process collections or deliver answers but rather by how well it preserves relationships, histories, and communities. Her vision is a challenge to remember that progress is not just about speed but also about values.

Together, these perspectives form a collective aspiration: libraries that embrace technological change while staying firmly grounded in the principles that have always defined our work—equity, preservation, and human connection. By 2030, Russell hopes libraries will stand out as centers of AI engagement, where librarians actively partner with students and faculty to ensure critical thinking guides every interaction with AI, supported by ongoing training for librarians themselves. With Russell’s focus on equitable access and literacy, Trevor’s vision of community learning laboratories, and Ava’s insistence on justice and care, we outline a shared future where libraries remain indispensable guides through a rapidly evolving information landscape.

These viewpoints highlight both the opportunities and challenges of AI in libraries. In a companion article in the June 2026 issue of C&RL News, we extend this dialogue by examining how these perspectives intersect around innovation, equity, and responsibility across the profession.

Copyright Russell Michalak, Trevor A. Dawes, Ava Wallace
