
College & Research Libraries News

Output or performance measures: The making of a manual

By Virginia Tiefel

Chair

ACRL Ad Hoc Committee on Performance Measures

The genesis of the ACRL output measures manual.

Accountability has been one of our society’s major concerns in the 1980s, with higher education—and, within higher education, academic libraries—being scrutinized as perhaps never before. Librarians should not be surprised at this development, as libraries generally have much larger budgets than most individual academic units, though most libraries feel they must still plead “hard times.” Librarians are well aware that libraries are often seen by academic administrators as veritable bottomless pits whose existence has unquestionable value but whose impact on the quality of education is difficult to assess. On another level, some state governments are demanding accountability from their educational institutions, with one result being that librarians in these states face the real possibility of “library measurement” being established by legislators or governmental agencies beyond the parent college or university.

A survey of the professional literature and programs of recent professional meetings provides ample evidence of librarians’ response to this challenge. Taking the lead in addressing this issue, the Association of College and Research Libraries has undertaken a variety of related activities, among which are plans to publish a manual on library output (performance) measures, scheduled for release in 1990. To coincide with the publication of the manual and publicize this important event, a two-part program is being planned for the ALA Annual Conference in 1990. The program will be jointly sponsored by the University Libraries, College Libraries, and Community and Junior College Libraries sections of ACRL, in collaboration with ACRL’s ad hoc Committee on Performance Measures. This article will describe the manual, how it came into being, and some of the challenges and issues encountered in the process.

The Task Force

Recognizing the importance of the issues associated with accountability, the ACRL leadership initiated action in 1983 by appointing a Task Force on Performance Measures for Academic Libraries. Chaired by Robert W. Burns Jr.,1 the task force was charged to determine whether ACRL should issue a manual on performance measures for academic libraries, and, if so, to recommend a plan of action to develop such a manual. Two important goals which ACRL wanted to accomplish were to stimulate librarians’ interest in performance measures and then, if needed, provide practical assistance so that they could conduct meaningful measurement of their libraries’ performance. Librarians could then use the measurement results in planning, internal decision-making, and communicating with institutional administrators. In accomplishing its charge, the task force was asked to evaluate existing performance measures manuals for their applicability to academic libraries and recommend what performance measures-related activities, such as conferences, workshops, and research projects, should be considered by ACRL.

In its work, the task force used the definition of performance measures found in the Library Data Collection Handbook: “Counts and combinations of counts which enable a library to assess the degree to which a program meets its objectives….”2 The task force further specified that they viewed performance measures as quantitative in nature and applicable to the description of library services (output), resources (input), and internal operation (throughput). The task force noted that the term “performance measure” is frequently misunderstood, and that “activity measures” or “service measures” might more accurately denote what the task force had in mind. The task force emphasized the need to differentiate between “performance measures” and “standards” to promote wider understanding that measures are meant to provide objective data that can assist those responsible for planning, day-to-day management, and communication. Some examples of specific performance measures given were: number of requests for information, time required to fill requests, number of requests filled, number of people waiting in lines, and number of documents delivered.
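The arithmetic behind such “combinations of counts” is straightforward. As a purely hypothetical illustration (the function names and the sample figures below are invented for this sketch, not drawn from the manual), two of the example measures—requests filled and time required to fill requests—can be combined into a fill rate and an average fill time:

```python
# Hypothetical sketch of "combinations of counts" used as output measures.
# All names and figures are invented for illustration, not from the manual.

def fill_rate(requests_filled: int, requests_received: int) -> float:
    """Proportion of information requests that were successfully filled."""
    if requests_received == 0:
        return 0.0
    return requests_filled / requests_received

def average_fill_time(total_minutes: float, requests_filled: int) -> float:
    """Average time (in minutes) required to fill one request."""
    if requests_filled == 0:
        return 0.0
    return total_minutes / requests_filled

# One sample week of reference-desk counts (invented data):
filled, received = 412, 500
print(f"Fill rate: {fill_rate(filled, received):.1%}")
print(f"Average fill time: {average_fill_time(1854, filled):.1f} min")
```

The raw counts alone say little; it is the combined figures, tracked over time, that support the planning and communication uses the task force describes.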

After examining the issues, the task force recommended that ACRL sponsor the development of a manual on performance measures for academic libraries. The task force reported a longstanding need for such a tool, heightened by tighter budgets of the 1980s, to aid academic libraries in describing their activities quantitatively. Although the task force identified a vast amount of literature on performance measures, ranging in content from the very simple to the highly sophisticated, it emphasized the need for a manual on performance measures specific to academic libraries. The report recommended that the measures selected for inclusion in a manual meet certain criteria: they should be decision-related; focus on outputs and service to library clients; be easy to apply, use, and understand; inexpensive to administer; appropriate to all types and sizes of academic libraries; and, generally, be judged “useful” to library managers. What follows is a chronological review of how the manual on performance measures was developed.

The Committee

Following the recommendation of the task force, the ACRL Board appointed the ad hoc Committee on Performance Measures in 1984, with a two-part charge based on the task force report.3 In brief, the committee was to define, describe, and monitor the writing of a manual on performance measures for academic libraries. The committee was also to work with other ALA divisions and committees to promote and consult on any other work or programs that might be related to performance measures. The ad hoc committee was to complete its task by the 1989 summer conference.

The committee decided to give first attention to developing a manual, and chose to focus on selection of approximately 12 performance measures (the number specified in the committee’s charge) to be included. The committee’s second responsibility, promoting interest in and use of performance measures, would receive attention after the development of the manual was underway. In all of its work the committee attempted to involve many other people. Anyone who attended a committee meeting or asked to be placed on its mailing list received all committee-generated papers, including agendas, minutes, drafts of documents, etc. By the end of its work, more than 30 additional people were receiving all mailings sent to the committee members. In addition, as the committee worked on the manual, members specifically sought out other ACRL committees whose work might be related to performance measures and established communication with them.

The committee began its work with extensive reading of the literature and consideration of a number of performance measures for possible inclusion in the manual. Special attention was given to two handbooks: Kantor’s Objective Performance Measures for Academic and Research Libraries (1984) and Zweizig’s Output Measures for Public Libraries (1982).4 The committee came to agree that the ACRL manual should occupy a middle ground (in complexity) between these two valuable works and that measures be written from the perspective of the library user and be termed “output measures.”

The committee identified as goals for ACRL performance measures the following:

•To measure the impact, efficiency, and effectiveness of library activities.

•To emphasize that measures, not standards, were at issue.

•To demonstrate/explain library performance in meaningful ways to university administrators.

•To provide measures that can be used by heads of units to demonstrate performance levels and resource needs to library administrators.

•Generally to provide data useful for library planning.

The committee then identified goals specific to the manual:

•To present measures that are useful for and replicable in all types and sizes of academic and research libraries.

•To present measures that are decision-related.

•To present measures that are easy to apply and use, inexpensive to administer, and user-oriented.

•To present measures that are linked to a library’s goals and objectives.

•To facilitate use of the measures for historical comparisons within a library unit or institution.

Each member then took a topical area, e.g., user skills, technical services, and reference, for more concentrated reading and study. After examining their topics, members recommended possible output measures for each, keeping in mind the committee’s resolve that the measures be focused on users. The committee ranked the recommended output measures in priority order and selected twelve as the most critical for this first version of the manual. Those measures not included were recommended as “related measures for possible consideration,” if additions were possible.

In considering criteria for the manual, the committee repeatedly stressed that the manual must be applicable to academic libraries of all sizes and that it should stress measurement of what a library does, not what librarians do. The manual was to be user-oriented and should not imply standards, but explain how one can measure organizational performance. The manual should prescribe the methodology for application of specific measures and describe how statistical data are used in each measure. References to relevant selected literature were to be associated with each measure, but the committee felt strongly that care be taken that the manual not duplicate existing publications. The manual should emphasize that it would not be a comprehensive planning guide, that readers would need to consult other sources (cited) to learn about establishing goals and objectives, details of cost analysis, and the like. In the end, the manual, to be accepted and used, should provide encouragement to librarians and give practical suggestions about how measures might be applied; for example, how a library administrator might use information on output measures to communicate with college/university administrators. Finally, the committee stressed that the manual would represent only a first step by which a library can measure its performance.

The shaping of a manual

In January 1987 the committee presented the ACRL Board with a document describing the 12 output measures recommended for the manual. The measures were defined in fairly broad, conceptual terms. The committee concurred with the Board’s subsequent recommendation that the measures should be described in more specific terms. The committee also recommended that, as this was done, the measures be set more explicitly in the context of existing literature on output measures (to avoid duplication and redundancy). A contract for a specialist who would refine the measures and place them in the context of other manuals was then put out for bid.

The proposal submitted by Nancy Van House, associate professor at the School of Library and Information Studies, University of California, Berkeley, was selected. Van House’s report was submitted to the committee in June 1987 and was found by all committee members to have achieved its purpose very well. The committee, working with the report, refined and made final the manual’s description. The manual was to include the following: 1) a description of its goals and objectives; 2) a bibliographic essay to provide a framework for the measures included; 3) a clear description of each measure, to include information about how to use it, how to obtain data, what to do with results, and what skills would be needed to administer the measures; 4) an extended bibliography; 5) a glossary; and 6) an index.

Based on the committee’s description of the manual, ACRL issued a Request for Proposals in Fall 1987. By the end of 1987 the committee had reviewed responses and selected Nancy Van House’s proposal.5 At the 1988 ALA Midwinter Meeting the committee recommended that the ACRL Board authorize the ACRL office to enter into negotiations with Van House. Based on a positive Board response, the ACRL office negotiated with Van House and came to an agreement on terms in early spring. Van House then began work on the first draft of the manual.

The committee and Van House agreed there should be two tests for the manual: first, to test the proposed measures themselves in selected academic libraries, and, then, when the measures were made final, a testing of the manual itself to determine whether it conveyed what was intended. The committee reviewed Van House’s outline of the manual and a summary of the measures in summer 1988. In the fall, testing of the measures began. The committee reviewed the measures again at the 1989 Midwinter Meeting. Testing of the measures continued, with the intent to test the manual itself in late spring 1989. The committee will review these results at the 1989 Annual Conference. Any needed revisions will be completed by the end of summer and the finished manual delivered to the ACRL Executive Committee in October 1989 for action at the Committee’s fall meeting.

The promotion of output measures

In late 1986, the committee turned some of its attention to the second part of its charge—“to recommend programs, policy, and projects related to performance measures for academic libraries”—as well as to work with ALA divisional committees to identify and promote activities on topics related to output measures, such as statistical techniques, data collection, and tools useful for implementing measures. To help in this promotion, the committee sent letters in Spring 1987 to the presidents of all ALA divisions, with copies to staff liaisons/executive directors. The letter asked for identification of any performance measure-related activities within each division. The response to these calls for information indicated less activity than had been anticipated, but showed that there was some work being done in such areas as standards—and much interest in output measures. Individual committee members were assigned liaison roles to any group which had reported possible future activity related to performance measures.

To bring more visibility to the issue of performance measures, the committee undertook other approaches as well. An incisive article written by committee member Beverlee French on the work of the committee and its review of the performance measures literature appeared in C&RL News in 1987.6 Committee members worked with the University Libraries Section’s Current Topics Discussion Group to offer a program at the 1988 Midwinter Meeting.7 Committee members also gave individual presentations at other ALA meetings and to other interested groups. Also, as mentioned at the outset, committee members are collaborating with committees from three ACRL sections in planning a two-part program on output/performance measures at the 1990 Annual Conference. An article on performance measures in the Chronicle of Higher Education8 mentioned the committee and brought several inquiries from librarians across the country who were eager to use such a manual to help address increasing pressure for measurement from administrators and state governments.

Conclusions

What has been learned from the almost five-year work of the committee? From my perspective as chair of the committee, there is a greater awareness of the ever-growing demand for accountability, which increases the pressure for more measurement in libraries. Almost everyone with whom I have talked sees measurement itself as an increasingly important and prominent issue. One response to this increased scrutiny is apprehension that if librarians do not seize the initiative, “measurement” will be done by others far less knowledgeable about libraries, with possibly very adverse consequences. There also seems to be a lingering concern on the part of many librarians about how to measure and how much it will cost, and even uncertainty about the purposes and possible results of measurement. There is clearly growing interest in how well libraries are meeting users’ needs and in the role and image of the library, along with (in many institutions) an apparent dwindling of traditional administrative understanding and support.

One of the continuing challenges the committee faced was to ensure that output measures are understood correctly, especially to be clear that they are not “standards.” Early on, the committee members became aware that close association with the issue of standards could become (in one member’s description) “the kiss of death” for the manual. Any perception that such a manual might have the prescriptive, judgmental connotation of standards could highly prejudice any decision to use it. The Task Force Report had strongly cautioned ACRL about this issue—and was right on target.

The committee and the author encountered the usual problems of communication across distances, especially with the restrictions of meeting only at ALA Midwinter and Annual conferences (no special meetings were ever called). Focusing on a complex topic to which all members brought different expertise and perspectives, the committee was able to reach consensus on all major issues necessary to produce the manual. The spirit of common purpose and belief in the project, combined with a flexible and reasonable attitude on the part of all involved, enabled the committee to fulfill its charge and complete a project that truly reflects the best of all members’ efforts.

The help of the ACRL staff, especially Mary Ellen Davis, program officer, and JoAn Segal, executive director of ACRL, was invaluable. The opportunity to work with a scholar-writer of the caliber of Nancy Van House—who truly captured the thinking of the committee—was exciting and professionally rewarding. The committee is excited about the manual and its focus on the user and the user’s perspective. The committee is hopeful that the manual will be well received and widely used. Watch for the manual’s publication in 1990 and the programs on output measures at the 1990 ALA Annual Conference.

Notes

  1. Other members of the task force were Joan C. Durrance (University of Michigan SLIS), Ruth A. Fraley (New York Office of Court Administration), Willis M. Hubbard (Gettysburg College), Charles R. McClure (Syracuse University SIS), L. Yvonne Wulff (University of Michigan), and Douglas L. Zweizig (University of Wisconsin).
  2. Mary Jo Lynch, Library Data Collection Handbook (Chicago: ALA, 1981), 178.
  3. The eight members of the committee are: Mignon Adams (Philadelphia College of Pharmacy and Science), Beverlee French (University of California, Davis), David Kaser (Indiana University SLIS), Patricia M. Kelley (George Washington University), Lynn Marko (University of Michigan), Jacquelyn Morris (Occidental College), Jerome Yavarkovsky (New York State Library), and Virginia Tiefel (Ohio State University).
  4. Paul B. Kantor, Objective Performance Measures for Academic and Research Libraries (Washington, D.C.: Association of Research Libraries, 1984); Douglas Zweizig and Eleanor Jo Rodger, Output Measures for Public Libraries (Chicago: ALA, 1982).
  5. Nancy Van House is the primary author of the manual. Charles McClure and Beth Weil are coauthors.
  6. Beverlee French, “Library Performance Measures,” C&RL News 48 (February 1987): 72-74.
  7. “Why Measure Performance?” ALA Midwinter Conference, January 9, 1987.
  8. Judith A. Turner, “Academic Libraries Urged to Study Needs of Users and Set Performance Measures,” Chronicle of Higher Education, January 27, 1988, 3.
Copyright © American Library Association
