Making sense of what you have
Developing a collection assessment program
© 2023 Alice Pearman
For the second half of 2022, I had the privilege of taking a six-month sabbatical to develop a collection assessment plan for the Lamson Library at Plymouth State University. Plymouth State educates an average of 3,800 undergraduate and 900 graduate students in approximately 45 major disciplines. The Lamson Library is currently staffed by five faculty librarians, five full-time staff, and several student workers.
Like most academic libraries, our materials budget has suffered from either flat or reduced funding during the past few years. Unfortunately, many of the cuts were last-minute emergencies that forced us to make decisions quickly, relying largely on cost-per-use data. Calculating cost per use is a valid assessment method discussed at length by Jacqueline Borin and Hua Yi.1 But without any other assessment, I feared the balance of subjects represented by our collection was becoming lopsided. We needed to assess our collection in other ways to determine if we were still meeting the needs of our students and faculty in their chosen disciplines.
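For readers who have not run the numbers themselves, cost per use is simply a resource's annual cost divided by its recorded uses over the same period. Below is a minimal sketch of that arithmetic in Python; the dollar amount and download count are hypothetical figures, not our actual data.

```python
# Minimal cost-per-use sketch. In practice, cost would come from
# acquisitions records and uses from COUNTER reports or circulation data.

def cost_per_use(annual_cost: float, annual_uses: int) -> float:
    """Return cost per use, or infinity if the resource was never used."""
    return annual_cost / annual_uses if annual_uses else float("inf")

# Hypothetical journal package: $4,500/year with 1,200 full-text downloads.
print(f"${cost_per_use(4500, 1200):.2f} per use")  # prints $3.75 per use
```

A title with a high cost per use relative to its peers becomes a cancellation candidate, which is exactly why I worried about relying on this measure in isolation.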
The Complete Collections Assessment Manual: A Holistic Approach by Madeline M. Kelly was immensely helpful in getting my project started.2 As I read the text, cited works, and recommended readings, I concluded there is one question that must form the foundation of all assessment efforts: “What do we have?” It is important to define parameters. I focused on the formats that make up the bulk of our expenses: physical books, electronic monographs, and electronic journals. Audio-visual media, the K-12 Curriculum Collection, the K-12 book collection, government documents, and special collections were excluded.
To answer that foundational assessment question, an inventory must be taken. In this context, an inventory is more than a tally, although a tally is a good place to start. Our ILS provided a count of print and ebooks, while data from EBSCO and other database providers supplied the journal title counts. But this simple inventory is only a starting point. Defining an inventory by subject immediately opens a new can of worms: How do you define a subject area?

After much wrangling, foot stamping, online discussion with other librarians, and long walks, I decided to undertake the rather complex process of developing our own conspectus for our print monograph collection. While this will likely be one of the most time-consuming processes, creating our own conspectus will help us understand whether books in broad subject areas, such as history, really are relevant to our history program. To test how this would work, I reviewed the courses offered in the history discipline and made a note for the corresponding classification: a course in medieval studies meant a tick in that section of the history classification (a rough sketch of this tallying appears at the end of this section). This won’t always be neat and tidy; some of our multidisciplinary programs, such as Adventure Education, will be challenging to categorize. But I believe our local judgment will be better than any freely available alternative.

For journals, I depended on subject categories defined by our subscription agent or the database publisher, as applicable. Full-text journals in databases were included in the inventory. Once everything is broken down by subject, further examination of the recency of the works and whether any titles are embargoed can shed more light on what is actually available.

After completing an inventory, it will be clear whether there are disciplines not well represented in the collection. Even for those disciplines with excellent representation, however, we still won’t know if those resources are useful. This brings us closer to the original question: Are the resources available to our patrons meeting their disciplinary needs? To fully answer this question, measures of quality need to be taken.

Quality can be defined in many ways, depending on your perspective. Kelly suggests several questions that could be asked of a collection, such as “Where are we not meeting demand?”; “Is the impact of the collection consistent across user groups?”; and “Are there user groups, voices, or perspectives not represented in our collections?” and provides corresponding methods to find the answers.3 It is helpful to brainstorm questions with colleagues as well. All these questions will help assess the quality of a collection.

At this point in the planning process, it is important to ensure the assessment program includes a mix of qualitative and quantitative data. It is also important to balance collection-based data (such as bibliographic analysis or brief tests) with user-based data (such as interlibrary loan transactions or survey results). Peggy Johnson provides an excellent table of data mapped to data types.4 Scott Nicholson argues that “the first evaluation viewpoint that should be taken into account is the user evaluation.”5 Any assessment program should include both collection and use data.
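To make the course-to-classification tallying concrete, here is a minimal sketch of the approach, assuming the conspectus is kept as a simple mapping from courses to Library of Congress class ranges. The course titles and ranges below are hypothetical illustrations, not our actual catalog.

```python
from collections import Counter

# Hypothetical mapping of history courses to the LC classification
# ranges they draw on. A real conspectus would cover the full course
# catalog with much finer-grained ranges.
course_to_lc = {
    "HI 2100 World History I": ["D111-D203", "DS501-DS937"],
    "HI 3333 Medieval Europe": ["D111-D203"],
    "HI 3420 Colonial America": ["E186-E199"],
    "HI 4050 Modern East Asia": ["DS501-DS937"],
}

# One "tick" per LC range for every course that touches it.
ticks = Counter(
    lc_range
    for lc_ranges in course_to_lc.values()
    for lc_range in lc_ranges
)

for lc_range, count in ticks.most_common():
    print(f"{lc_range}: supported by {count} course(s)")
```

Classification ranges with no ticks at all become candidates for lighter collecting, while heavily ticked ranges deserve a closer look at depth and recency.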
Before continuing, it’s also important to acknowledge a point made by Sonia Bodi and Katie Maier-O’Shea: “It would be simplistic to assume that there is one, set assessment formula that applies to all disciplines and their print and electronic resources equally.”6 Multiple assessment methods must be employed, and not all will be relevant to every resource or subject area. It took some time to select a list of methods that would not be too time-consuming while still ensuring the appropriate mix of data.

With a list of methods determined (see table 1), it was time to run a pilot assessment on a subject area. A pilot could help me estimate how much time would be required to run each assessment method, what pitfalls might be encountered, and how useful the data might be at the end.

Some methods turned out to be more challenging than imagined. The modified brief test depends on a sample of titles selected according to the number of WorldCat holdings.7 This should have been easy but proved difficult, simply because the tools available to me made it hard to pull a random sample of titles from a single classification. I ended up using the antique FirstSearch service, which has its own limitations but did what I needed it to do (a rough sketch of the sampling step appears below). That experience demonstrated that the pilot would also help me document best practices for capturing the data required for each method, saving a lot of time in the future.

Table 1. Assessment Methods

Question | Method | Borin & Yi Indicator | Method Type | Data Source
How many book, journal, and database titles do we have? | Inventory | Capacity | Quantitative | Collection-Based
Do we have a strong collection in this discipline? | Modified brief test | Capacity | Quantitative | Collection-Based
| Reputable Bibliographies | Subject Standards | Qualitative | Collection-Based
| E-Resource Environmental Scan | Environmental Factors | Qualitative | Collection-Based
Where are we not meeting demand? | Turnaway analysis | Usage | Quantitative | Collection-Based
| ILL analysis | Usage | Quantitative | User-Based
| Citation analysis | Usage | Quantitative | User-Based
| User surveys | Users | Qualitative | User-Based

The pilot was limited (I did not attempt to distribute user surveys or examine any disciplines beyond criminal justice), but it helped me comprehend the scope of the program I was contemplating. Kelly encourages would-be assessors to limit assessment projects to two or three points of data at a time.8 At first, I found this frustratingly limited, but after the pilot I could see the point. The end purpose of answering my question is to tell a meaningful, persuasive story about our collections to stakeholders. Whether those stakeholders are other librarians or the decision-makers who determine library funding, the message needs to be succinct. Too little data would be simplistic, but too much will complicate the story. While it is tempting to gather all the data at once, an ongoing assessment program will eventually answer every aspect of a question. As our assessment program becomes established, we can begin to put results together to see how well our collection serves our patrons.

There are shortcomings to my proposal. Education at Plymouth State is highly interdisciplinary, and students often require materials outside the boundaries of course definitions, particularly senior capstone and graduate students. The program will also need to be flexible, anticipating and responding to issues that arise in our profession, as the recent uptick in diversity, equity, and inclusion assessments attests.
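Since the brief-test sampling step caused the most trouble, a sketch may clarify what the pilot needed to do. In a brief test, titles are ranked by how many WorldCat libraries hold them, and a small sample is drawn from each holdings level for checking against the local catalog. The titles, holdings counts, and level thresholds below are hypothetical placeholders for whatever your discovery tools can actually export.

```python
import random

# Hypothetical export for one classification: (title, WorldCat holdings).
titles = [
    ("Title A", 4200), ("Title B", 950), ("Title C", 310),
    ("Title D", 95), ("Title E", 18), ("Title F", 2700),
]

# Bucket titles into collecting levels by holdings count. These
# thresholds are illustrative, not published brief-test cut-offs.
def level(holdings: int) -> str:
    if holdings >= 2000:
        return "minimal"        # very widely held: core titles
    if holdings >= 500:
        return "basic"
    if holdings >= 100:
        return "instructional"
    return "research"           # rarely held: research-level titles

buckets: dict[str, list[str]] = {}
for title, holdings in titles:
    buckets.setdefault(level(holdings), []).append(title)

random.seed(42)  # reproducible sample for documentation purposes
for lvl, pool in buckets.items():
    sample = random.sample(pool, min(2, len(pool)))
    print(f"{lvl}: {sample}")
```

The hard part in practice was not this logic but obtaining a random list of titles within a single classification in the first place, which is where FirstSearch came in.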
However, I believe answering our original questions (Is our collection balanced? Is it serving the needs of our major areas of study?) will lay the groundwork for other questions that we have yet to consider.
Notes