Seven questions for assessment planning: A discussion starter

Mary O’Kelly


Do a quick Google search for “assessment cycle” or “evaluation cycle” and you’ll find thousands of variations. It’s easy for a newly emerging culture of assessment to stall as the participants agonize over which is the right way, which is the most thorough way, which is the perfect way to evaluate an instruction program.

I’ve been through many assessment processes and have experienced those long pauses firsthand. I have come to realize that the first and most important step is simply to have a conversation. Yes, there are rigorous assessment projects that require exceptionally detailed methods and close involvement with the institutional review board, and there are myriad models that use language similar to these questions and to each other. Yet so much of building and measuring an instruction program starts with everyone on the team, regardless of their level of assessment expertise, knowing what we’re doing and why, and being able to articulate it clearly.

Instruction and assessment scholars have written about the critical importance of collaboration in building a culture of assessment, with a common emphasis on collegial, transparent processes.1 Whether leading a team of experienced evaluators or building a new assessment project from the ground up, careful reflection up front can facilitate smoother communication down the road.

Seven questions

The more assessment projects I participated in, the more I sought out examples of cycles, guiding questions, processes, and best practices. I found myself returning to the fundamental questions of who, what, when, where, why, how, and how well, and eventually those morphed into these seven questions.

These seven questions can be a discussion starter or a thought exercise.2 The intent is to walk through the questions and document the answers. The resulting statement can then be used in multiple ways: as a management update, in a self-study report, in an assessment report, or as a small part of a larger initiative.

  1. Responsibility: Who is taking responsibility and why?
  2. Questions: What questions do we have about our own program and why?
  3. Data: What information do we need to answer those questions and why?
  4. Method: How will we get it and why?
  5. Results: Who will write the answers and why?
  6. Communication: Who needs to see the results and why?
  7. Cycle: What is our timeline for changes and why?

The following answers demonstrate a short form of the process I used when planning a three-year assessment of student retention and library instruction at Grand Valley State University (GVSU). The answers are simple and focused.

Who is taking responsibility and why?

At GVSU, the head of instructional services has primary responsibility for evaluating the instruction program, although many others are involved.

What questions do we have about our own program and why?

For this project we want to know whether the library is a factor in student retention because retention is a primary focus at our institution.

What information do we need to answer those questions and why?

In order to answer that question, we need a list of all the classes that have had library instruction this year so that we know which students we reached.

How will we get it and why?

With the cooperation of all instruction librarians, who have the task of logging every instruction session, we will collect data using LibAnalytics.

Who will write the answers and why?

We don’t have access to Banner3 data and we don’t have any library statisticians, so we have built a relationship with experts in our Institutional Analysis department, who will analyze the results for us.

Who needs to see the results and why?

We will share the analysis with the entire Research and Instruction team, plus the Library Council (which includes the dean), the entire library staff, university faculty, and finally the library community in order to communicate our contribution (if any) to student retention.

What is our timeline for changes and why?

We will repeat this cycle annually at the end of the academic year so that we can show trends over time and so that we can track changes to our program.

Combine those answers into one cohesive statement, however, and we have a powerful summary of our actions and intent.

At GVSU, the head of instructional services has primary responsibility for evaluating the instruction program, although many others are involved. For this project we want to know whether the library is a factor in student retention because retention is a primary focus at our institution. In order to answer that question, we need a list of all the classes that have had library instruction this year so that we know which students we reached. With the cooperation of all instruction librarians, who have the task of logging every instruction session, we will collect data using LibAnalytics. We don’t have access to Banner data and we don’t have any library statisticians, so we have built a relationship with experts in our Institutional Analysis department, who will analyze the results for us. We will share the analysis with the entire Research and Instruction team, plus the Library Council (which includes the dean), the entire library staff, university faculty, and finally the library community in order to communicate our contribution (if any) to student retention. We will repeat this cycle annually at the end of the academic year so that we can show trends over time, and so that we can track changes to our program.

With a statement like that at the ready for each project, large or small, an assessment team would have shared language and nonassessment staff would have a clear understanding of the purpose and process.

The seven questions in action

These questions are easy to adapt to micro-projects and large-scale assessment projects alike. Every year I ask our Institutional Analysis department 23 questions about instruction, ranging from simple (How many students did we reach in direct face-to-face library instruction?) to complex (Is there an intensity effect on GPA and retention of students who saw a librarian in class multiple times?).

When I first started planning the annual assessment of library instruction, I used an early variation of these seven questions in conversations with my colleagues. I was new in the role and needed a manageable way to get started, so I focused on what we knew, what we wanted to know, and what we hadn’t yet learned. What did we want to know about our instruction program? And what assumptions needed to be challenged?

We had assumed, for example, that the library reached most freshmen through Writing 150 (an introductory composition class). The class, or its equivalent, is required, so it seemed natural to believe that it was the library’s main point of contact for freshmen. Shortly after I started as head of instructional services, I had an informal conversation with our first-year initiatives coordinator (also a new position at the time) about Writing 150. We both wondered just how many of those students met a librarian in class. We identified our question, figured out what data we needed to answer the question, and listed the people who needed to know what we found. We were shocked to learn that we reached only 33% of freshmen via Writing 150, due to transfer credit, students testing out of the class, and alternatives offered by the honors college and other specialty programs. With just a few guiding questions we were able to articulate a plan, solicit help, and communicate what we learned.

I also have used these questions as a tool for myself. Sometimes, as I start a project, especially if it’s in unfamiliar territory, I like to sit quietly and write out a starter plan. Others may join me later, but my initial priming helps me stay focused on the outcome. Recently we administered SAILS (the Standardized Assessment of Information Literacy Skills) on our campus. SAILS is a well-established instrument, so rather than focus on methods I instead focused on the reasons for using SAILS and how our results would be communicated. After putting together a team of volunteers to work on implementation and evaluation of the results, we answered the seven questions as a group. Our conversation was fluid and the outcome wasn’t as linear as the question list implies, but we still ended up with a summary of our process, a plan, and a timeline. It was a good way to share language and expectations about a big project.

Those examples also illustrate how discussion starters can be used for gathering relatively simple descriptive data, the kind that can be valuable when making decisions about a program but might not necessarily fit into the category of “assessing student learning.” Planning and communication are important regardless of the project’s magnitude.

I participated in the first cohort of ACRL’s Assessment in Action program4 and have been involved in many assessment activities at our university since then. Resources such as Megan Oakleaf and Neal Kaske’s “Guiding Questions for Assessing Information Literacy in Higher Education”5 provide similar question-based options and have helped me expand my assessment vocabulary with more nuanced language and deeper investigation of process. However, I still find myself going back to these seven questions as a flexible way to lay a foundation for just about any data-gathering project.

As libraries feel increasing pressure to carefully document and communicate their value using sophisticated measures, having a ready-to-use process that is easily accessible to all staff can contribute to the development of a healthy culture of assessment.


Notes
1. Two articles that detail these kinds of processes are M. G. Farkas and L. J. Hinchliffe, “Library Faculty and Instructional Assessment: Creating a Culture of Assessment through the High Performance Programming Model of Organizational Transformation,” Collaborative Librarianship 5, no. 3 (2013): 177-88, and D. Gilchrist, “A Twenty Year Path: Learning About Assessment; Learning From Assessment,” Communications in Information Literacy 3, no. 2 (2009): 70-79.

2. The seven questions were first presented at the Michigan Library Association Academic Libraries 2014 Conference, May 2014.

3. Banner is an enterprise resource planning system used in higher education to manage student information, such as course registration, grades, major, transcripts, and advising.

4. “Assessment in Action: Academic Libraries and Student Success” is undertaken by ACRL in partnership with the Association for Institutional Research and the Association of Public and Land-grant Universities. The program, a cornerstone of ACRL’s Value of Academic Libraries initiative, is made possible by the Institute of Museum and Library Services. For more information see http://www.ala.org/acrl/AiA.

5. Megan Oakleaf and Neal Kaske, “Guiding Questions for Assessing Information Literacy in Higher Education,” portal: Libraries and the Academy 9, no. 2 (2009): 273-86.
Copyright © 2015 Mary O’Kelly
