Teaching Students to Think Critically About AI

Practical Approaches for Academic Librarians in Designing Literacy Instruction

J. M. Shalani Dilinika is a PhD candidate in the University of Pittsburgh School of Computing and Information and a lecturer at the Department of Library and Information Science, University of Kelaniya, Sri Lanka, email: shj91@pitt.edu.

The widespread use of generative artificial intelligence (AI) applications has reshaped how people interact with and interpret information. Within higher education, these technologies are already influencing core academic practices, from research and writing to teaching and learning, raising important questions about authenticity, critical thinking, and responsible use. Generative AI use among students is now commonplace, and restricting access is not a practical option, as AI is already embedded in their daily digital environments. Because education is an integral part of those same lives, we can no longer compartmentalize AI as an external tool; instead, we must thoughtfully embrace this evolving reality and cultivate students’ capacity for responsible AI use. As these technologies become ubiquitous, a pressing question arises: How can students be equipped not just to use AI but also to understand, question, and engage with it critically?

Academic libraries, as long-standing stewards of information literacy within academic communities, are transitioning from information intermediaries to critical mediators of AI literacy.1 Critical AI literacy can be positioned as an extension of information literacy in the age of generative AI. I am of the view that bringing together constructivist learning theory and the principles of data justice gives academic librarians a powerful way to design literacy instruction that encourages constructive use of generative AI and reflects on the ethical and social impacts of these technologies. In this discussion, I unpack a range of practical approaches and methods that librarians can weave into their teaching and collaborative work to help students navigate AI technologies in a more thoughtful and responsible way.

Why Critical AI Literacy Matters

Critical AI literacy has grown out of traditional digital literacy, pulling together key aspects of information literacy, media literacy, and critical thinking into one framework for understanding and working with AI systems and applications. Technical skill alone is no longer enough when it comes to using AI-based tools. Although students can generate content with AI applications with minimal effort, many lack understanding of AI’s limitations, biases, and potential for misinformation. Recent research indicates that issues such as AI hallucination (when systems generate plausible but incorrect information), overreliance on AI outputs, and inherited biases are increasingly common. In this context, critical AI literacy extends beyond technical proficiency to critically assessing generated output and applying it responsibly with an awareness of socioethical considerations. Duri Long and Brian Magerko, in defining AI literacy, highlighted critical literacy as individuals’ capacity to critically evaluate AI technologies.2 Hence, technical proficiency with generative AI is not, by itself, sufficient for effective use; questioning the output, understanding limitations, and recognizing broader socioethical implications are essential for individuals to engage with these technologies in a constructive and responsible way.

Designing AI Literacy: Where Constructivism Meets Data Justice

The intersection of critical AI literacy with constructivist theory and data justice principles provides a robust theoretical foundation for literacy instruction that promotes both the constructive use of AI technologies and thoughtful consideration of their broader implications.

Constructivist learning stresses building knowledge through active participation rather than passive consumption, pushing students to analyze and question AI outputs. In literacy education, this means encouraging active learning through critical perspectives.3 A key part of this is the idea of “productive struggle,” where students wrestle with difficult concepts, work through confusion, and gain understanding through guided discovery instead of direct instruction.

Data justice principles complement this active learning approach by helping students understand the human impact of data practices. This framework examines how data collection, storage, analysis, and distribution affect communities, with particular attention to fairness and equity. Linnet Taylor defined data justice as “fairness in the way people are made visible, represented, and treated as a result of their production of digital data.”4 When applied to AI literacy, data justice principles help students recognize that AI systems are not neutral; they reflect the values, biases, and limitations embedded in their training data and design processes. Supporting this, several studies have highlighted the potential of employing data justice principles as a foundation for designing literacy activities across multiple disciplines.5,6

Four Practical Instructional Approaches for Academic Librarians

Building on constructivist learning and data justice, the following four approaches can help librarians design critical AI literacy instruction. Each approach is flexible enough to fit different settings, from a single session to a full workshop series.

1. Design-Based Learning Activities

Design-based learning activities engage students in open-ended, inquiry-driven projects where they identify problems, empathize with users, and create solutions. These approaches work particularly well for critical AI literacy instruction because they provide natural opportunities for students to develop critical thinking skills and to understand the social and ethical impacts of generative AI.

Design-based approaches such as Value-Sensitive Design, Speculative Design, and User-Centered Design give librarians practical ways to help students think critically about AI. For example, librarians can use case studies to guide students in spotting problems or examining issues like bias and privacy. From there, students can prototype simple solutions, even paper-based ones, to think through how AI could be made fairer and more responsible. Reflection prompts along the way encourage them to ask not only how their design works but also whose values it reflects and what impacts it might have.

Speculative design adds another dimension by encouraging students to imagine future scenarios and the societal impacts of emerging technologies. Design activities can serve as a medium to spark discussion about the social, cultural, and ethical implications of these technologies. Many researchers have identified speculative design as a critical design domain and a tool for exploring “What if?” questions and the implications of different scenarios.7 In critical AI literacy instruction, librarians can encourage learners to envision hypothetical AI-based scenarios—for example, imagining a future where AI systems determine the distribution of social benefits based on a predictive algorithm. Learners are encouraged to speculate about future possibilities and engage through digital storytelling and narration throughout the process, questioning and critically examining issues of fairness, bias, and social justice in AI-driven decision-making.

2. Metacognitive Activities and Ethical Reflection

Metacognition can be defined simply as “thinking about thinking,” a form of self-regulated behavior: it enables individuals to monitor their actions, reflect on them, gain control over their behavior, and modify it based on that reflection.8,9 In critical AI literacy instruction, librarians can incorporate metacognitive activities in different ways. Activities such as guided reflection on AI-generated texts (AI output analysis), metacognitive journals, think-aloud exercises, source evaluation checklists, and peer discussions prompt learners to monitor their reasoning, identify biases, and consider the ethical implications of AI-generated content. Students can be prompted with questions such as the following:

  • How might this content influence decisions or perceptions if shared without verification?
  • In what ways could your personal viewpoints or prior experiences shape how you interpret this information?
  • What information might be missing or overlooked in this AI-generated text?

These output analyses add an important layer of critical thinking to literacy instruction while supporting the constructive use of AI. This connects with the idea of information discernment, which highlights how human judgment, bias, and decision-making shape how we evaluate information.10 In the age of generative AI, it is especially important to verify AI outputs and apply human judgment to make informed decisions. These perspectives also align with the ACRL Framework for Information Literacy, which emphasizes metaliteracy and metacognition, or critical self-reflection, as key to becoming self-directed learners in today’s fast-changing information ecosystem.11

3. Maker Activities

Maker activities are hands-on learning experiences in which people build knowledge by designing or creating a product or artifact. The idea of making, and the maker movement more broadly, comes from constructivism, which stresses learning through experience, exploration, and reflection. In critical AI literacy instruction, maker activities can help learners build both practical skills and critical awareness for using emerging technologies responsibly. For example, Clifford H. Lee, Nimah Gobir, Alex Gurn, and Elisabeth Soep introduced students to the hidden elements of recommender systems and the surveillance risks of facial recognition. In response, students worked together to create multimedia artifacts and prototypes that expressed their critical standpoint.12

4. Dialogical Activities

Based on the theories of Lev Vygotsky and other social constructivists, dialogical learning emphasizes that learners actively build knowledge through social interaction and cultural context rather than passively absorbing information. In the context of critical AI literacy, dialogical activities help learners critically evaluate AI-generated content or work with case studies and scenarios to explore the socioethical implications of AI outputs. These activities can include role play, peer discussions, data storytelling, data mapping, and online or in-person forums, enabling learners to engage, share perspectives, and collaboratively reflect on complex issues.13

As discussed earlier, these literacy activities can be adapted across disciplines and levels in planning AI literacy instruction. They work in different formats, from one-shot sessions to full classroom instruction or tailored approaches. Librarians can also use generative AI to design prompts and create scenarios. Activities may be scaled up or simplified depending on context, resources, and learner level.

Assessing the Success of Instructional Approaches

Learning goals should be set before instruction, and evaluation methods such as observations, exit surveys, self-reflections, or quizzes are important for measuring progress and gathering feedback based on context (e.g., student level, discipline). These assessments must remain low-stakes and nonjudgmental, with the goal of supporting students. Most importantly, critical AI literacy instruction needs to be flexible, encouraging learners to be innovative, creative, and open to making mistakes and learning from them. Engaging in constructive and reflective practices in this way is essential for developing critical AI literacy skills through real-world experiences.

Conclusion

As society is increasingly shaped by emerging technologies, critical AI literacy has become a civic imperative. Students need support to move beyond technical skills and critically engage with AI by analyzing outputs, recognizing biases, and reflecting on ethical and social implications. Approaches such as design-based, dialogical, metacognitive, and maker activities provide opportunities for learners to question and explore the socioethical dimensions of AI through real-world experiences.

Notes

1. Nuno Miguel Teixeira Sousa, “Academic Libraries as Hubs of Artificial Intelligence Competency,” Discover Artificial Intelligence 5, no. 1 (2025), doi:10.1007/s44163-025-00490-8.

2. Duri Long and Brian Magerko, “What Is AI Literacy? Competencies and Design Considerations” (in CHI ’20: CHI Conference on Human Factors in Computing Systems, New York, 2020), doi:10.1145/3313831.3376727.

3. Aayushi Dangol and Sayamindu Dasgupta, “Constructionist Approaches to Critical Data Literacy: A Review” (in IDC ’23: Interaction Design and Children, New York, 2023), doi:10.1145/3585088.3589367.

4. Linnet Taylor, “What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally,” Big Data & Society 4, no. 2 (2017), doi:10.1177/2053951717736335.

5. Javiera Atenas, Leo Havemann, and Chrissi Nerantzi, “Critical and Creative Pedagogies for Artificial Intelligence and Data Literacy: An Epistemic Data Justice Approach for Academic Practice,” Research in Learning Technology 32 (2025), doi:10.25304/rlt.v32.3296.

6. Federica Picasso, Javiera Atenas, Leo Havemann, and Anna Serbati, “Advancing Critical Data and AI Literacies through Authentic and Real-World Assessment Design Using a Data Justice Approach,” Open Praxis 16, no. 3 (2024): 291–310, doi:10.55982/openpraxis.16.3.667.

7. Annemiek Veldhuis, Priscilla Y. Lo, Sadhbh Kenny, and Alissa N. Antle, “Critical Artificial Intelligence Literacy: A Scoping Review and Framework Synthesis,” International Journal of Child-Computer Interaction 43 (2025): 100708, doi:10.1016/j.ijcci.2024.100708.

8. John H. Flavell, “Metacognition and Cognitive Monitoring: A New Area of Cognitive-Developmental Inquiry,” American Psychologist 34, no. 10 (1979): 906–11, doi:10.1037/0003-066x.34.10.906.

9. Sidra Sidra and Claire Mason, “Reconceptualizing AI Literacy: The Importance of Metacognitive Thinking in an Artificial Intelligence (AI)-Enabled Workforce” (in 2024 IEEE Conference on Artificial Intelligence [CAI], 2024), doi:10.1109/cai59869.2024.00211.

10. Chen-Chen Liu, Dan Wang, Gwo-Jen Hwang, Yun-Fang Tu, Ning-Yu Li, and Youmei Wang, “Improving Information Discernment Skills: Through a Concept Mapping-Based Information Evaluating Framework in a Gamified Learning Context,” Interactive Learning Environments 32, no. 9 (2023): 4766–88, doi:10.1080/10494820.2023.2205900.

11. Association of College & Research Libraries, “Framework for Information Literacy for Higher Education” (2015).

12. Clifford H. Lee, Nimah Gobir, Alex Gurn, and Elisabeth Soep, “In the Black Mirror: Youth Investigations into Artificial Intelligence,” ACM Transactions on Computing Education 22, no. 3 (2022): 1–25, doi:10.1145/3484495.

13. Atenas, Havemann, and Nerantzi, “Critical and Creative Pedagogies for Artificial Intelligence and Data Literacy.”

Copyright J. M. Shalani Dilinika
