Explainable AI: An Agenda for Explainability Activism

Michael Ridley

Abstract

If artificial intelligence (AI), particularly generative AI, is an opaque “black box,” how can we trust it and hold the technology accountable? Academic libraries are evaluating, providing, using, and increasingly building AI-based information tools and services. Typically, the underlying models for these intelligent systems are large language models (LLMs) based on generative AI techniques. While many of these systems have shown remarkable advances and advantages, their risks and deficiencies are also widely known and easily demonstrated.

References

Peter Slattery, Alexander Saeri, Emily A. C. Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush Pour, Stephen Casper, and Neil Thompson, “The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence,” ResearchGate, August 2024, https://doi.org/10.13140/RG.2.2.28850.00968.

Tania Lombrozo, “The Structure and Function of Explanations,” Trends in Cognitive Sciences 10, no. 10 (2006): 464, https://doi.org/10.1016/j.tics.2006.08.004.

Michael Ridley, “Explainable AI (XAI): Adoption and Advocacy,” Information Technology & Libraries 41, no. 2 (2022): 1–17, https://doi.org/10.6017/ital.v41i2.14683.

Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, and Kevin Baum, “What Do We Want from Explainable Artificial Intelligence (XAI)?—A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research,” Artificial Intelligence 296 (July 2021): 1, https://doi.org/10.1016/j.artint.2021.103473.

Edward H. Shortliffe, Stanton G. Axline, Bruce G. Buchanan, Thomas C. Merigan, and Stanley N. Cohen, “An Artificial Intelligence Program to Advise Physicians Regarding Antimicrobial Therapy,” Computers and Biomedical Research 6, no. 6 (1973): 544–60, https://doi.org/10.1016/0010-4809(73)90029-3.

Sajid Ali, Tamer Abuhmed, Shaker El-Sappagh, Khan Muhammad, Jose M. Alonso-Moral, Roberto Confalonieri, Riccardo Guidotti, Javier Del Ser, Natalia Díaz-Rodríguez, and Francisco Herrera, “Explainable Artificial Intelligence (XAI): What We Know and What Is Left to Attain Trustworthy Artificial Intelligence,” Information Fusion 99 (2023), https://doi.org/10.1016/j.inffus.2023.101805.

Tim Miller, Piers Howe, and Liz Sonenberg, “Explainable AI: Beware of Inmates Running the Asylum,” paper, International Joint Conference on Artificial Intelligence, Workshop on Explainable Artificial Intelligence (XAI), Melbourne, 2017, https://doi.org/10.48550/arXiv.1712.00547.

Upol Ehsan, Elizabeth Anne Watkins, Philipp Wintersberger, Carina Manger, Sunnie S. Kim, Niels van Berkel, Andreas Riener, and Mark O. Riedl, “Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs),” in Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, CHI EA ’24 (Association for Computing Machinery, 2024), 2, https://doi.org/10.1145/3613905.3636311.

Upol Ehsan and Mark O. Riedl, “Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach,” in HCI International 2020—Late Breaking Papers: Multimodality and Intelligence, ed. Constantine Stephanidis et al., Lecture Notes in Computer Science (Springer International, 2020), 464, https://doi.org/10.1007/978-3-030-60117-1_33.

Michael Ridley, “Human-Centered Explainable Artificial Intelligence: An Annual Review of Information Science and Technology (ARIST) Paper,” Journal of the Association for Information Science and Technology (2024): 1–23, https://doi.org/10.1002/asi.24889.

Robert R. Hoffman, Timothy Miller, Gary Klein, Shane T. Mueller, and William J. Clancey, “Increasing the Value of XAI for Users: A Psychological Perspective,” KI—Künstliche Intelligenz (2023), https://doi.org/10.1007/s13218-023-00806-9.

Michael Ridley, “Protocols Not Platforms: The Case for Human-Centered Explainable AI (HCXAI),” 2023, https://cais2023.ca/talk/10.ridley/10.Ridley.pdf.

Mike Masnick, “Protocols, Not Platforms: A Technological Approach to Free Speech,” Knight First Amendment Institute at Columbia University, August 21, 2019, 6, https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech.

Luca Nannini, Agathe Balayn, and Adam Leon Smith, “Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK,” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’23 (Association for Computing Machinery, 2023), 1198–1212, https://doi.org/10.1145/3593013.3594074.

Latanya Sweeney, “How to Save Democracy and the World,” paper, ACM Conference on Fairness, Accountability, and Transparency, New York University, 2018.

Gary Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us (MIT Press, 2024).

Copyright Michael Ridley
