The Way I See It

It’s Not Easy Staying Human

Generative AI, Cognition, and Reflection

Maxwell Gray is digital scholarship librarian in Raynor Library at Marquette University, email: maxwell.gray@marquette.edu.

It’s not easy being an academic librarian in this moment of generative artificial intelligence (AI). For librarians in research, teaching, and learning, it’s like being a tour guide in a city whose critical infrastructure is in the middle of being rebuilt.

AI overviews and research assistants are changing how we search and think about research.1 At the same time, ChatGPT and Copilot are changing how we think about the difference between research and writing.2 Librarians in higher education are trying to teach students and faculty new technology and answer new questions about authorship and academic integrity.3

It’s even harder if you’re a librarian whose values don’t align with the values of this new technology and its creators. It’s like being a tour guide in a city whose critical infrastructure is being redesigned and rebuilt by technocratic oligarchs outside any democratic process.4 We are trying to teach patrons how to use new technology and how to think critically and responsibly about this new technology.5

Last year, I collaborated with colleagues at Marquette’s Center for Teaching and Learning to facilitate a community of practice and lead a workshop series about generative AI. This year, I’m collaborating with colleagues at the Center for Teaching and Learning again to curate a conversation series about generative AI and Ignatian pedagogy.

Teaching at a Catholic, Jesuit university, we prioritize Ignatian pedagogy to orient our pedagogies toward personal trans/formation and ground them in intellectual, spiritual, and social inquiry.6 Approaching generative AI from the tradition of Ignatian pedagogy feels especially critical when this technology tempts us to think less, feel less, and care less—in other words, to be less human (or to embrace less of our humanity).7

These collaborations at the Center for Teaching and Learning have sought to teach faculty and other educators how to approach generative AI critically and ethically and how to teach students to do the same (rather than how to use any specific tool or tools). In and around these projects, I have been part of many conversations (often long) with faculty and other educators about this technology and how it is impacting teaching and learning (often not for the better).

These experiences have taught me that we need to do better at speaking and connecting with our colleagues and our students about this technology and how it is impacting how all of us think and learn. In 2025, researchers at MIT published a study that found significant cognitive costs to using a large language model (LLM) tool for essay writing.8 Young people who used an LLM tool for essay writing showed weaker neural connectivity and underengaged neural networks compared with those who did not. Students and lifelong learners alike should learn and reflect on how using generative AI may impact how we do and don't think and learn. Young people especially should reflect on how they want to think and act in the world in response.

In the meantime, I believe this new technology will afford new forms and modes of thinking. Dialogue between theory and practice will be critical for building these new forms and modes. The popular terminology for these new forms and modes is “AI literacy.”

Last year, Leo S. Lo published a guide for academic libraries about AI literacy in which he proposed core components and themes for AI literacy in academic libraries.9 ACRL's new "AI Competencies for Academic Library Workers" reflects many of the core components and themes proposed by Lo.10 One core component I believe is missing from both documents is the human, cognitive impact of generative AI.

In academic libraries, I think we should learn and teach how to reflect on the personal, mental, and cognitive dimensions of AI. The personal impact of AI happens at a more individual scale than its societal impact; it calls us to reflect on how AI—especially generative AI—is reshaping how we think, feel, and relate to one another, in terms of both cognitive science and the humanities and social sciences. This technology will strengthen some ways of thinking, feeling, and caring, and it will weaken others. How we use this technology will also build new, emergent ways of thinking, feeling, and belonging that we don't yet recognize in this moment. What new ways do we and our students want to build together?

For many of us, this kind of reflection may be frightening, but it should also be exciting, especially for academic librarians in teaching, learning, and pedagogy. It should also be exciting for young people who need to create a hope-filled future for themselves. What values, dreams, and challenges will ground and inspire young people in this moment of technological, social, and personal trans/formation? Where and how will our students find data, information, knowledge, and, ultimately, wisdom? How will our students create these for themselves and for others?

In conclusion, Lo proposes that thinking about human–AI collaboration "focuses on using AI to enhance, not replace, human abilities. It promotes a mindset where individuals see AI as a supportive partner in tasks like decision-making, creativity, and problem-solving, rather than as a substitute for human judgment and skills."11 Thinking about cognition and reflection may refocus us on practices of thinking and feeling, and on reflecting on how we do, don't, and desire to think, feel, and care as unique human persons. It should promote a mindset where individuals and society recognize that the most human skill is thinking critically, responsibly, and ethically about our own thinking and acting, in community and solidarity together.

Notes

1. Tessa Withorn, "Google AI Overviews Are Here to Stay: A Call to Teach AI Literacy," College & Research Libraries News 86, no. 5 (2025): 214–17, https://doi.org/10.5860/crln.86.5.214; Jay Singley, "We Couldn't Generate an Answer for Your Question," ACRLog (Association of College and Research Libraries), July 21, 2025, https://acrlog.org/2025/07/21/we-couldnt-generate-an-answer-for-your-question/.

2. Amy B. James and Ellen Hampton Filgo, "Where Does ChatGPT Fit into the Framework for Information Literacy?: The Possibilities and Problems of AI in Library Instruction," College & Research Libraries News 84, no. 9 (2023): 334–41, https://doi.org/10.5860/crln.84.9.334; Matthew Pierce, "Academic Librarians, Information Literacy, and ChatGPT: Sounding the Alarm on a New Type of Misinformation," College & Research Libraries News 86, no. 2 (2025): 68–70, https://doi.org/10.5860/crln.86.2.68.

3. Zoë (Abbie) Teel, Ting Wang, and Brady Lund, "ChatGPT Conundrums: Probing Plagiarism and Parroting Problems in Higher Education Practices," College & Research Libraries News 84, no. 6 (2023): 205–8, https://doi.org/10.5860/crln.84.6.205.

4. Ruth Monnier, Matthew Noe, and Ella Gibson, "AI in Academic Libraries, Part One: Concerns and Commodification," College & Research Libraries News 86, no. 4 (2025): 173–77, https://doi.org/10.5860/crln.86.4.173; Ruth Monnier, Matthew Noe, and Ella Gibson, "AI in Academic Libraries, Part Two: Resistance and the Search for Ethical Uses," College & Research Libraries News 86, no. 5 (2025): 218–23, https://doi.org/10.5860/crln.86.5.218.

5. Shoshana Frank, "The Fall of Creativity: A Librarian's Role in the World of AI," College & Research Libraries News 84, no. 11 (2023): 439–41, https://doi.org/10.5860/crln.84.11.439.

6. Rebecca S. Nowacek and Susan M. Mountin, “Reflection in Action: A Signature Ignatian Pedagogy for the 21st Century,” in Exploring More Signature Pedagogies: Approaches to Teaching Disciplinary Habits of Mind, eds. Nancy L. Chick, Aeron Haynie, and Regan A. R. Gurung (Stylus Publishing, 2012), 129–42. Also available online and open access at Theology Faculty Research and Publications (e-Publications@Marquette): https://epublications.marquette.edu/theo_fac/177.

7. Jacob Riyeff, "Journeying with Youth by Affirming the Value of Their Limits," Conversations on Jesuit Higher Education (National Seminar on Jesuit Higher Education), February 4, 2024, https://conversationsmagazine.org/journeying-with-youth-by-affirming-the-value-of-their-limits-98fb1a4f5ea7. See also Matthew J. Gaudet, Noreen L. Herzfeld, Paul J. Scherz, and Jordan Joseph Wales, eds., Encountering Artificial Intelligence: Ethical and Anthropological Investigations, AI Research Group, Centre for Digital Culture, Dicastery for Culture and Education (Pickwick Publications, 2024), https://digitalcommons.csbsju.edu/sot_books/115/.

8. Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes, "Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task," arXiv preprint arXiv:2506.08872, June 10, 2025, https://doi.org/10.48550/arXiv.2506.08872.

9. Leo S. Lo, "AI Literacy: A Guide for Academic Libraries," College & Research Libraries News 86, no. 3 (2025): 120–22, https://doi.org/10.5860/crln.86.3.120.

10. Association of College and Research Libraries, "AI Competencies for Academic Library Workers," approved by the ACRL Board of Directors, October 2025, https://www.ala.org/acrl/standards/ai.

11. Lo, "AI Literacy," 122.

Copyright Maxwell Gray