Two universities in one place: Reflections on seasonal variations in the use of library resources
Freshmen and other migratory birds—How differentiated are our patrons?
It is commonplace that academic libraries serve multiple communities: taken as groups, faculty, students, and members of the general community have different wants and expectations. We all understand that some of the resources we offer are intended to help some of these groups much more than others.
We expect that some resources—Middle English dictionaries, databases for searching chemical structures, or investment spreadsheets full of betas and other arcane measures—will appeal only to sophisticated specialists, while other resources such as InfoTrac’s OneFile or EBSCO’s Academic Search Complete will have “something for everybody,” especially if that “everybody” is an undergraduate.
It makes sense, then, that as the population of our campuses changes over the course of the year, the mix of library resources most in demand will change. That demand for the most basic, general databases falls when our students are away on vacation surely qualifies as the sort of unsurprising finding that journalists derisively label “dog bites man.”
The point of this small study is briefly to demonstrate just how differently the library is used over the course of the year, presumably due to changes in campus subpopulations, and to reflect on what this means for library services. Although the existence of these differences will come as no surprise, their extent may be unexpected and may justify some thought about how best to serve our highly disparate subpopulations of users.
The data
We have tracked the use of our online databases at Virginia Tech since 2001, and currently record the university’s usage of 199 resources. Depending on vendors’ “push” or “pull” technologies, we receive statistics via e-mail or visit the administrative modules of vendors’ Web sites. Naturally the technologies involved in this work have changed, and we have seen other changes such as the evolution of the COUNTER standards and the appearance of third-party vendors (such as Scholarly Stats or SerialsSolutions) that gather and present statistics for a variety of our resources. Since the beginning, we have translated our vendor or publisher reports into “documents” (downloads), “searches,” or “sessions,” and tracked only the single most appropriate metric among those available. All of our tracking is of monthly data.
While this account of Virginia Tech’s practices may make it sound as if we possess nearly a decade’s worth of impeccable time series data ripe for any kind of analysis, those who are involved in tracking their own institution’s data will understand that this is not the case. We cancel some databases. Others combine, split, or change their scope. Vendors forget to report, Web sites change and lose their ability to report data for months at a time, publishers switch from one metric to another, and in general Murphy keeps himself busy.
Despite these difficulties, we have ample data to show the seasonality of use for many of our resources and to demonstrate how widely this varies from one resource to the next. In order to analyze these data in a manageable manner, the following rules of thumb were adopted (a rough sketch of the resulting calculation appears after the list):
- The analysis was restricted to FY 06, 07, and 08.
- For each year, the peak month of total activity (always March or April) was identified and compared to activity in July, which was invariably the slowest month—note that peaks and valleys were defined for the sum of all activity, and so would not always match the peak or trough month for a specific resource.
- The analysis was restricted to resources that had been maintained throughout the three-year period and had no missing data for the relevant months: about a third of the resources were not susceptible to the analysis because of this.
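These rules boil down to a simple ratio calculation. As a rough illustration only, the sketch below (written in Python purely for concreteness; the resource names and figures are invented, not Virginia Tech data) finds the month of highest total activity in each year, divides each resource’s use in that peak month by its July use, and flags the resources whose ratio reaches at least 3:1 in every year, as in Table 1.

```python
# Hypothetical sketch of the peak-month vs. July ratio described above.
# 'usage' maps fiscal year -> month -> {resource: monthly count}; the
# resource names and figures are invented for illustration.
usage = {
    "FY06": {
        "Jul": {"DatabaseA": 100, "DatabaseB": 400},
        "Mar": {"DatabaseA": 450, "DatabaseB": 420},
        "Apr": {"DatabaseA": 430, "DatabaseB": 390},
    },
    # ... FY07 and FY08 would follow the same shape ...
}

def peak_month(year_data):
    """Return the month whose total activity, summed over all resources, is highest."""
    return max(year_data, key=lambda month: sum(year_data[month].values()))

def peak_to_july_ratios(year_data):
    """Ratio of each resource's use in the overall peak month to its July use."""
    peak = peak_month(year_data)
    return {
        resource: year_data[peak][resource] / july_use
        for resource, july_use in year_data["Jul"].items()
        if july_use > 0  # a resource with no July use has no meaningful ratio
    }

ratios_by_year = {fy: peak_to_july_ratios(months) for fy, months in usage.items()}

# A resource qualifies for "Table 1" if its ratio is at least 3:1 in every sampled year.
highly_seasonal = [
    resource
    for resource in next(iter(ratios_by_year.values()))
    if all(resource in year and year[resource] >= 3 for year in ratios_by_year.values())
]
print(highly_seasonal)  # -> ['DatabaseA'] with the sample figures above
```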
The data show strong seasonality for the overall use of our online resources, with use in the peak month being about 215 percent of July use in each of the years in question (the summer-to-spring differences are dramatic throughout all years of our data, not simply the three fiscal years involved in the analysis). A number of our resources are above this ratio in some years and below it in others. But there are some resources whose use is always dramatically more seasonal and other resources whose use hardly varies from month to month.
In order to illustrate these differences, I have selected only the most extreme examples. Table 1 illustrates the databases whose use is so seasonal that use in the peak month exceeded July use by a factor of at least 3:1 in each of the sampled years.
More than half the databases with highly seasonal use provide full text, allowing for “one-stop shopping” on a topic of interest. Several others are hybrids: not completely full-text, but with many articles either resident in the database or readily accessible through outbound OpenURL links.
The general picture the data suggest is that students are looking for databases that will answer their needs with a single search. The presence of JSTOR on the list of highly seasonal databases is hard to understand in any other light: JSTOR is not nearly comprehensive in any discipline, but it will reliably bring up high-quality, full-text articles on almost any subject.
At the other end of the scale, Table 2 shows those databases in which Virginia Tech use in July actually exceeded use in the overall peak month for at least two of the three years in question.
It’s notable that the resources whose use endures into the summer include some of our most specialized, and to some degree intimidating, resources such as the Journal Citation Reports, Project Euclid journals in mathematics, and Digital Sanborn Maps.
Other specialized resources that did not quite qualify for appearance in Table 2 but for which use is essentially flat throughout the year include the Web of Science, ISI Proceedings, Inspec, NTIS, and most of the FirstSearch resources that are bibliographic but without full-text. High specialization (Early English Books), the provision of data that will be meaningful only to sophisticates (Journal Citation Reports), and purely bibliographic content about library holdings (RLG, OCLC) characterize the few databases for whose use the academic calendar appears to be largely irrelevant.
Experienced methodologists will recognize the risk of committing the ecological fallacy in making inferences about the behavior of individuals based on group differences (in this case, the population of active Virginia Tech users in the summer as opposed to that in the spring). However, political scientists, marketers, and others routinely draw fairly confident, if hedged, conclusions about individuals from group data. And it would be very difficult to build an argument for the differential patterns we see in the use of these library resources that did not rest on the well-known distinctions in purpose, niche, and intended audience that differentiate many of the online resources libraries offer. No academic librarian knowledgeable about the offerings of his or her own institution would find it difficult to identify local resources that vary widely along these same dimensions.
So what?
It’s natural to ask whether these data suggest any changes in library practice. One policy implication is that although it makes sense to reduce reference staffing during slow times, there are always people working with our more complex and specialized resources (perhaps even faculty thinking, “When the students leave, I can get some real work done on my research”). Some of our most dedicated users do much of their work when we tend to be lightly staffed, so we should at least make sure that online help is available all the time.
Individual libraries may profit from looking at these ratios in their own cases, and perhaps in making comparisons across institutions. If, for example, a library were to find that a resource it acquired to support general undergraduate instruction is not seeing highly seasonal use, it may indicate that the resource has not been sufficiently promoted to its intended clientele.
The extreme degree of these differences is a useful reminder of just how differentiated our user populations are. We offer a wide range of resources to all, but they are used in highly different ways and to highly different degrees. The data underscore the point that each academic library has user populations, not a user population. And it’s not just the parking situation that changes as the year progresses.