
A Thousand Threats, a Thousand Opportunities

ACRL 2023 STS Hot Topics

Mohammad Hosseini is an assistant professor in the Department of Preventive Medicine at the Northwestern University Feinberg School of Medicine and Galter Health Sciences Library, email: mohammad.hosseini@northwestern.edu. Mark Chalmers is science and engineering librarian at the University of Cincinnati, email: chalmemk@ucmail.uc.edu. Thomas James Ferrill is head of Creative Spaces at the University of Utah J. Willard Marriott Library, email: Thomas.Ferrill@utah.edu.

The ACRL Science and Technology Section (STS) Hot Topics Committee panel discussion held at the ACRL 2023 Conference in Pittsburgh was dedicated to generative artificial intelligence (AI) and large language models (LLMs). Mohammad Hosseini, Thomas James Ferrill, and Mark Chalmers were invited as panelists to discuss the impact of LLMs and generative AI in libraries. The event was organized by members of the committee, co-chaired by Isabella Baxter and Sandy Avila, and brought together attendees from a broad spectrum of expertise and roles. The recorded version can be accessed on YouTube.1 In this article, we present a summary of the discussed topics along with additional remarks and updates.

The Application of Generative AI for Education

In the session, Mark Chalmers, science and engineering librarian at the University of Cincinnati in Ohio, highlighted how generative AI can aid the process of learning to code. He discussed the potential of language models like ChatGPT to improve learning experiences, especially when used as a personal tutor that simplifies and explains the new concepts anyone setting out to learn to code inevitably confronts. Over the years, Chalmers has integrated different technologies into the library’s free Python workshops, and he was quick to incorporate best practices for using generative AI into them. He teaches that intentional use lowers entry barriers, creates on-demand individualized support, and simplifies the learning process, greatly reducing the challenges of learning a new programming language. Chalmers noted:

My perspective on the potential impact of generative AI profoundly shifted after watching the GPT-4 developer livestream in March 2023. During the presentation, OpenAI co-founder Greg Brockman prompted GPT-4 to generate code for a Discord chatbot on the fly. I was struck by this, as Discord’s API changes had previously forced me to overhaul workshop materials on this exact topic. Yet GPT-4 accurately parsed updated documentation and produced working code. Clearly, this technology could be leveraged to overcome common learning roadblocks and support new coders.
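
For readers who did not watch the livestream: a Discord chatbot of the kind Brockman demonstrated can be written in a few lines of Python. The sketch below is not GPT-4’s actual output; it is a minimal, hypothetical echo bot built with the discord.py library, including the explicit Intents opt-in that newer versions of the API require (one well-known breaking change of the kind Chalmers describes).

    # Minimal, illustrative Discord echo bot (not GPT-4's generated code).
    # Requires the discord.py package and a real bot token to run.
    import discord

    intents = discord.Intents.default()
    intents.message_content = True  # explicit opt-in required since discord.py 2.0
    client = discord.Client(intents=intents)

    @client.event
    async def on_message(message):
        if message.author == client.user:  # ignore the bot's own messages
            return
        await message.channel.send(f"You said: {message.content}")

    client.run("YOUR_BOT_TOKEN")  # placeholder; obtain a token from Discord's developer portal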

Inspired by this new capability, Chalmers created a new workshop titled “A Practical Guide to Learning to Code with ChatGPT.” The workshop focuses on using ChatGPT as a personal AI tutor for Python and demonstrates ways ChatGPT can aid coders, such as explaining new concepts at different levels of complexity, explaining different categories of errors, generating examples and test problems, and explaining documentation, such as the documentation for an unfamiliar function (a sketch of this tutoring pattern follows). He has also integrated ChatGPT tips and scaffolding into the core Python workshops. Other learning objectives are for students to understand how generative AI works, get a sense of its capabilities and limitations, and cultivate a healthy skepticism toward its output.
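
As a hypothetical illustration of this tutoring pattern (not an excerpt from the workshop materials), consider a classic beginner error and the kind of prompt a learner might pair with it:

    # input() always returns a string, so the line below fails with
    # "TypeError: can only concatenate str (not 'int') to str".
    age = input("How old are you? ")
    print("Next year you will be " + age + 1)

    # A learner can paste the snippet and its traceback into ChatGPT with a
    # prompt such as: "Explain this error as if I am new to Python, then
    # suggest a fix." A typical fix converts the string before the arithmetic:
    # print("Next year you will be", int(age) + 1)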

Navigating the Hype

Thomas James Ferrill, head of Creative Spaces at the University of Utah J. Willard Marriott Library in Salt Lake City, reminded attendees of recent developments in which AI models made unprecedented advances on tasks previously thought to pose challenges for them. Combined with significant media amplification, the sense that more work could be completed and more tasks accomplished has produced a hype that pervaded the first half of 2023. Amid the din, finding practical applications remains challenging. Even in cases where there is immediate and obvious utility in using generative AI to navigate information systems and help process archival data, there are hurdles to adoption that may take years to overcome. Ferrill added:

There are risks in adopting new technology, just as there are risks in avoiding it—the cautious approach of wait-and-see may prove more costly if research processes begin to bypass library systems in favor of emerging techniques that use AI models. If libraries can participate in creating positive impacts, it will be through the marketplace of ideas, which has long trended towards technology adoption.

Despite the oscillation of attention between hype and challenge, one cannot deny that better tools could lead to better outcomes. It no longer seems outrageous to claim that AI will affect knowledge workers across many industries and economic sectors. This prospect puts libraries in a position to respond: not only are knowledge processes in libraries subject to immediate impacts (from LLMs specifically), but libraries also serve broad community needs through educational programming and by helping patrons navigate the information ecosystem. From a public service perspective, the libraries, archives, and museums community is already invested in addressing literacy needs, including the urgent need for AI literacy. The path to developing relevant and helpful content will depend on considerations specific to individual organizations, as well as on externalities such as regulatory guidance, existing user demands, and funding for development. Developing content to support AI literacy is nonetheless urgent.

Regardless of the strategies chosen at the organizational level, libraries will continue to provide access to resources in keeping with their institutional values. For better or worse, dispelling the hype and offering training on generative AI applications require the same approach libraries have always taken: helping individuals navigate their information environments through competent tool use. A pragmatic approach to AI adoption in libraries neither rejects participation nor blindly adopts the latest fad.

Why Do Researchers Need Generative AI, and How Is Research Benefiting from This Technology?

Mohammad Hosseini, an assistant professor in the Department of Preventive Medicine at the Northwestern University Feinberg School of Medicine and Galter Health Sciences Library in Chicago, offered a researcher’s perspective on why generative AI is needed. He highlighted that for a long time access to information did not keep pace with the increasing production of knowledge, but that the internet and the open science movement significantly changed this landscape. Hosseini added:

Access to more information does not extend the day for researchers to read more or analyze more information. In fact, while it is true that access has increased and search engines find information quicker than before, because information is fragmented, finding useful pieces and then comprehending, analyzing, and employing information are more complicated than ever.

Technical solutions—such as web scraping, integrating data, standardizing storage and retrieval, and implementing knowledge management systems—do not help researchers comprehend, analyze, and consolidate information. Hosseini added that in the current research landscape, researchers and their limitations could be considered among the bottlenecks: available information is so abundant that researchers cannot use it effectively and efficiently, and this is where generative AI and LLMs could help in the future. Hosseini offered three examples in which using this technology has benefited research:

  1. Medimate Assistant, developed by David M. Liebovitz at Northwestern University, uses ChatGPT as a baseline language-generation model that is fine-tuned on a small, specialized medical library. In its most recent version, Medimate offers various options to help medical students and research assistants search the literature efficiently.
  2. A recent collaborative project between the National Human Genome Research Institute and Northwestern University used fine-tuned generative AI to explore the History of Genomics and Human Genome Project archives to understand how the Human Genome Project developed collaboratively and how its scientific goals were formulated and evolved. This project used LLMs for content extraction and for masking sensitive information.
  3. Another example was a recent predictive model2 developed by researchers at New York University to improve the accuracy of clinical predictions related to readmission, mortality, comorbidity, length of stay, and insurance denial. Researchers trained an LLM for medical language (NYUTron) and subsequently fine-tuned it across predictive tasks. The LLM read notes written by physicians, providing access to a comprehensive description of each patient’s medical state (a minimal, illustrative sketch of this pretrain-then-fine-tune pattern appears after this list).
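
The NYUTron example follows a now-common pattern: start from a pretrained language model, then fine-tune it on labeled clinical notes. The sketch below is not the NYUTron code; it is a minimal, hypothetical illustration of that pattern using the open-source Hugging Face Transformers library, a generic stand-in model (bert-base-uncased), and two invented, de-identified notes.

    # Hypothetical sketch of fine-tuning a pretrained language model for a
    # binary readmission-prediction task. NOT the NYUTron pipeline: the model
    # name, notes, and labels are placeholders for illustration only.
    import torch
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    notes = [
        "Patient discharged in stable condition with follow-up in two weeks.",
        "Second admission this month for decompensated heart failure.",
    ]
    labels = [0, 1]  # invented labels: 1 = readmitted within 30 days

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in model
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )
    encodings = tokenizer(notes, truncation=True, padding=True, return_tensors="pt")

    class NoteDataset(torch.utils.data.Dataset):
        """Wraps tokenized notes and labels in the format Trainer expects."""
        def __init__(self, encodings, labels):
            self.encodings, self.labels = encodings, labels
        def __getitem__(self, idx):
            item = {key: val[idx] for key, val in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item
        def __len__(self):
            return len(self.labels)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=NoteDataset(encodings, labels),
    )
    trainer.train()  # adapts the general model to the clinical prediction task

In the published work, both the pretraining corpus and the labeled tasks operate at health-system scale, but the underlying pattern is the same: a general language model is specialized to read physicians’ notes and predict outcomes.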

Conclusion

The 2023 STS Hot Topics event offered an opportunity for rich discussion of intersecting perspectives on the applications, challenges, and ethical considerations of using generative AI and LLMs in libraries. While these technologies can positively affect how libraries support learning, research, and knowledge dissemination, their use must be steered by professional values. Libraries are hubs of learning and knowledge sharing and, amid the threats and the opportunities, have a critical role to play in shaping the use of AI: ensuring that it is employed in ways that align with their mission and with standards of accessibility, fairness, transparency, and inclusivity. As existing platforms are updated and new platforms are released, libraries have a responsibility to continuously educate patrons and raise awareness of the ethical implications of using generative AI. Librarians have a duty to promote responsible AI use and best practices surrounding attribution, fairness and transparency, data security and confidentiality, bias, misinformation, cybersecurity, copyright, and intellectual property, among other areas.

Notes

  1. ACRL Science and Technology Section, “STS Hot Topics Event 2023: A Thousand Threats, A Thousand Opportunities—A Look at Generative AI,” YouTube video, June 20, 2023, 29:27, https://www.youtube.com/watch?v=HWauVLZZpE4.
  2. Lavender Yao Jiang, Xujin Chris Liu, Nima Pour Nejatian, Mustafa Nasir-Moin, Duo Wang, Anas Abidin, Kevin Eaton, Howard Antony Riina, Ilya Laufer, Paawan Punjabi, et al., “Health System-Scale Language Models Are All-Purpose Prediction Engines,” Nature 619 (2023): 357–62.
Copyright Mohammad Hosseini, Mark Chalmers, Thomas James Ferrill
