ACRL

College & Research Libraries News

Money, sex, and reputation in doctoral programs in library science

By Keith Swigger

Professor, School of Library and Information Studies, Texas Woman's University

The correlates of rankings may not surprise you.

In the last 20 years, various attempts have been made to rank library schools in terms of quality. There are good reasons for doing so. Students need information about the quality of the schools they might select. The deans and faculty of each school need to know how they are doing, and, more importantly, what they might do to improve the quality of their programs. The recent movement for accountability in education has led legislatures and state agencies to scrutinize degree programs in general, but particularly those that are small and costly. Universities, faced with restrictions imposed by limited resources, are engaging in internal planning and review processes to help them choose academic directions. Several library science programs, including programs at the University of Chicago and Columbia University that were considered prestigious by librarians, have been closed after institutional reviews.

In librarianship, more attention has been paid to the reputation of programs than to attempts to define quality in library education, as if we could not define quality but would know it when we saw it. The purpose of the present study is to relate reputation rankings of doctoral programs in library science to objective quantitative variables. If reputation were accepted as a valid indicator of quality, any school that wished to improve the quality of its program would need to know the objective correlates of a good reputation. This paper does not address the quality of master’s degree programs, although many of the variables discussed below that relate to doctoral programs also relate to master’s programs, and the presence of a doctoral program is sometimes given as evidence of the quality of a master’s program.

Reputation and quality

The most well-known efforts to rank schools are studies of perceptions of quality. Reviewing these studies, Danton observed that there are quantitative measures of library school quality but argued that “the composite perception of a sufficient number of authorities will result in a fairly reliable ranking of relative quality.”1 Most of the perception studies involved questionnaires that asked library school faculty and/or library directors for their subjective assessments of the schools.2 Bookstein and Biggs cast doubt on the validity of those studies by showing that respondents were unable to explain why they ranked schools as they did, confessed to ignorance of the nature of most programs, and were not confident of their assessments.3 Biggs and Bookstein conducted a survey of library science faculty to determine the criteria important in assessing the quality of a program. The criteria cited more often by respondents at doctoral-granting schools were: (1) rich library resources; (2) adequate financial support; (3) integration of library science and information science; (4) faculty research and publishing; (5) high intellectual quality of graduates; (6) adequate physical facilities; (7) adjacency to a strong employment market; (8) professional competence of graduates. Biggs and Bookstein did not attempt to link any of these attributes with the reputations of particular schools, so whether these criteria are the ones that professors actually use in ranking schools remains unknown.

Reputation rankings have also been criticized by Webster, who makes the following objections: ratings are subject to the halo effect, in which the overall reputation of an institution affects the ranking of individual departments; ratings lag several years behind reality, so the reputation of a school may be based on events, persons, or activities that are no longer a significant part of a program; scholars from the nation’s leading universities serve in disproportionately large numbers as raters, and they tend to rank highly departments of the same type and with the same emphases as their own universities have.4 Herubel demonstrates there is considerable overlap between the list of schools of high repute and the list of schools from which the faculties of library science programs (who rate the schools) received their degrees.5

There have been several attempts to relate quantitative variables to reputation. Hayes ranked schools according to how frequently the publications of each school’s faculty members were cited; the validity of this work, however, is limited by Hayes’s reservations about basing citation and publication rates on the data he used. “Citation counting,” he observed, “is a rather tenuous basis for evaluating faculty and schools.”6 After comparing the “top 10” to the other Ph.D.-granting schools, using Blau and Margulies’s 1974 rankings, Hayes wrote, “This author’s qualitative impression is that there is little difference between these two groups of schools with respect to publication and citation. One must conclude that reputation is based on aspects qualitatively different from just rates of publication and citation” (164-65).

Wang and Layne investigated the relationship between reputation and number of graduates, using White’s 1981 ranking of schools, but found no highly significant correlation.7 Garland and Rike, using White’s 1981 rankings, did find a significant correlation between faculty publication and the prestige of the library science program.8 However, a recent study shows that faculty at schools with higher reputation rankings do not publish more articles based on funded research.9 Similarly, Wallace compared publication rates of faculty at programs housed in research universities to rates of faculty at other institutions and found no significant differences.10

Methodology

The reputation rankings used in this study were those reported by White in 1987, based on data he collected in 1986. White reported prestige rank numbers for only 15 of the 24 schools then offering doctoral degrees in library science. Those ranks were used where appropriate, but on the basis of those data schools were also divided into two groups, the “top 10” and all the others. Quantitative data were drawn from the 1988 Library and Information Science Education Statistical Report, hereinafter referred to as the ALISE Report.11 The 1988 edition primarily reports numbers for the 1986-1987 academic year, data which are contemporaneous with White’s survey. The ALISE Report is an annual compilation of statistical information about graduate library and information science education programs at schools that are members of ALISE. The data are self-reported, and in this study they are assumed to be honest and accurate; schools were not contacted to verify their figures, which may be a limitation of the study.

The ALISE Report presents data in five sections, related to faculty, students, curriculum, income and expenditures, and continuing education. That division has been followed here. The criterion for selecting variables from the report was that data on individual schools be presented. It was possible to identify a variable, “Faculty salary increase,” because data were reported by school, but it was not possible to analyze the variable “Faculty salary” because the ALISE Report presents the data only by academic rank, not by institution.

The ALISE Report presents data for 23 schools offering the doctoral degree. Data are missing for some variables at some schools. This study considers 69 variables, listed in Table 1, including 13 variables on gender and ethnicity proportions, which were computed from data in the report. For some variables in the ALISE Report, data do not distinguish between doctoral, master’s, and other programs. In reporting numbers of courses and income and expenditures, for example, the data relating to all degree and certificate programs are lumped together. Some variables are difficult to study for separate degree programs because the programs are not budgeted separately, and only a few schools offer courses exclusively for doctoral students.

TABLE 1: QUANTITATIVE VARIABLES STUDIED

Faculty variables:
  Number of full-time faculty
  Number of part-time faculty
  Average salary improvement 1986-87
  Number of unfilled, funded full-time faculty positions
  Number of faculty receiving travel funds
  Average amount of travel funds per faculty member
  Total funding for faculty travel
  Number of faculty sabbaticals

Student enrollment variables:
  Full-time doctoral enrollment 1986-87
  Part-time doctoral enrollment 1986-87
  Total doctoral enrollment 1986-87
  Enrollment FTEs 1986-87
  Degrees awarded to men 1986-87
  Degrees awarded to women 1986-87
  Total degrees awarded 1986-87
  Foreign students
  Scholarship aid amount
  Number of scholarships
  Scholarships to men
  Scholarships to women
  Assistantships to men
  Assistantships to women
  Assistantships: $ amount to men
  Assistantships: $ amount to women
  Tuition, doctoral, full-time
  Tuition, out of state
  Tuition per credit, in state
  Tuition per credit, out of state

Curriculum variables:
  Weeks in academic calendar
  Number of courses offered
  Number of courses taught
  Percent of courses taught

Income and expenditure variables:
  Financial support from institution
  Total grants and contracts
  Total funds, all sources
  Total expenditure for salaries
  Total expenditure for teaching
  Total expenditure for research
  Total expenditure for student aid
  Total expenditure for library
  Total expenditure

Continuing education variable:
  Total expenditure for continuing education

Student gender and ethnicity variables (headcounts):
  Male American Indian
  Male Asian Pacific Islander
  Male Black
  Male Hispanic
  Male White
  Number of males
  Female American Indian
  Female Asian Pacific Islander
  Female Black
  Female Hispanic
  Female White
  Number of females
  Total student FTE

Computed gender/ethnicity variables:
  Percent males, American Indian
  Percent males, Asian Pacific Islander
  Percent males, Black
  Percent males, Hispanic
  Percent males, White
  Percent males
  Percent females, American Indian
  Percent females, Asian Pacific Islander
  Percent females, Black
  Percent females, Hispanic
  Percent females, White
  Percent females
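The computed gender/ethnicity variables above are straightforward derivations: each percentage is a headcount divided by the school's total student headcount. As a minimal sketch of that derivation, here is how it might look in Python with pandas; the column names are hypothetical stand-ins for the ALISE headcount categories, and the original computations were of course done in SPSSX, not Python:

    import pandas as pd

    # Hypothetical column names standing in for the ALISE headcount categories.
    ETHNICITIES = ["amer_indian", "asian_pacific", "black", "hispanic", "white"]

    def add_percent_variables(schools: pd.DataFrame) -> pd.DataFrame:
        """Derive the computed gender/ethnicity percentages from raw headcounts."""
        out = schools.copy()
        # Total student headcount per school: all males plus all females.
        total = out["n_males"] + out["n_females"]
        for sex in ("male", "female"):
            for eth in ETHNICITIES:
                out[f"pct_{sex}_{eth}"] = out[f"{sex}_{eth}"] / total * 100
        out["pct_males"] = out["n_males"] / total * 100
        out["pct_females"] = out["n_females"] / total * 100
        return out

    # One invented school, to show the shape of the data (not ALISE figures):
    example = pd.DataFrame([{
        "male_amer_indian": 0, "male_asian_pacific": 1, "male_black": 1,
        "male_hispanic": 0, "male_white": 9, "n_males": 11,
        "female_amer_indian": 0, "female_asian_pacific": 2, "female_black": 3,
        "female_hispanic": 1, "female_white": 14, "n_females": 20,
    }])
    print(add_percent_variables(example).filter(like="pct_").round(1))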

TABLE 2: VARIABLES WITH SIGNIFICANTLY DIFFERENT MEANS FOR “TOP 10” AND “OTHER” SCHOOLS, ADMINISTRATOR RATINGS

Variable              Top 10 mean/S.D.      Other schools mean/S.D.   p
Total expenditure     $1,709,955/863,577    $1,013,494/442,698        .05
Part-time faculty     12.2/9.1              5.2/4.0                   .05
Male white students   9.8/6.0               2.9/1.2                   .01
Male students         12.1/8.5              3.4/1.6                   .01

In White’s 1986 study, rankings of schools by library directors and by library science faculty are reported separately. Schools were grouped according to whether they were in the “top 10,” and t-tests were run to determine whether the mean values of all the variables for the two groups differed significantly. Following the t-tests, a rank correlation study was conducted to determine whether the rankings of schools on each variable correlated with the rankings of schools as given, first, by administrators and, second, by library science faculty. All analyses used the SPSSX computer program running on a VAX 8700.
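The two procedures are easy to restate with modern tools. The following is a minimal sketch in Python with scipy rather than SPSSX, using invented illustrative numbers (not the ALISE data) to show the shape of both analyses:

    import numpy as np
    from scipy import stats

    # Invented illustrative data (NOT the ALISE figures): a per-school value
    # for one variable, a "top 10" membership flag, and reputation ranks.
    values = np.array([1.9, 1.2, 2.4, 0.8, 1.1, 0.9, 2.1, 1.0, 0.7, 1.3])
    is_top10 = np.array([True, False, True, False, False,
                         False, True, False, False, True])

    # t-test: do the "top 10" schools differ from the others on this variable?
    t, p = stats.ttest_ind(values[is_top10], values[~is_top10])
    print(f"t = {t:.3f}, p = {p:.3f}")

    # Rank correlation: does ranking schools by the variable agree with the
    # reputation ranking? Rank 1 is the most prestigious school, so a variable
    # associated with prestige yields a NEGATIVE coefficient (large values pair
    # with small rank numbers), which is why most entries in Table 4 below are
    # negative.
    reputation_rank = np.array([2, 9, 1, 7, 8, 10, 3, 6, 5, 4])
    rho, p_rho = stats.spearmanr(reputation_rank, values)
    print(f"Spearman rho = {rho:.3f}, p = {p_rho:.3f}")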

Results

Means for variables, administrators: There are significant differences on the t-test (p < 0.05) between the “top 10” doctoral programs and the other doctoral programs for only four variables when the rankings given by library administrators are used (see Table 2). The “top 10” have more part-time faculty, higher total expenditures, more males, and more white males. New variables were calculated to test for the importance of males as a proportion of all students. When the t-test was run for the variables “white males as a fraction of all males” and “males as a fraction of all students,” there was no significant difference. The numbers of males and of white males, not their proportions of the student population, are significantly higher on average at the “top 10” schools.
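The count-versus-proportion distinction can be made concrete with the same tools. A brief sketch with invented headcounts (again, not the ALISE figures): the raw counts differ significantly between the two groups even though the derived fractions do not.

    import numpy as np
    from scipy import stats

    # Invented headcounts per school (NOT the ALISE figures).
    white_males = np.array([9, 2, 10, 3, 2, 3, 8, 4, 2, 9])
    all_males = np.array([12, 3, 13, 4, 3, 4, 10, 5, 3, 12])
    all_students = np.array([34, 9, 37, 11, 8, 12, 29, 14, 9, 35])
    is_top10 = np.array([True, False, True, False, False,
                         False, True, False, False, True])

    # Counts: with these numbers the "top 10" have more males outright...
    print(stats.ttest_ind(all_males[is_top10], all_males[~is_top10]))

    # ...but the derived proportions are statistically indistinguishable,
    # matching the study's finding for the two computed fractions.
    frac_white_of_males = white_males / all_males
    frac_males_of_all = all_males / all_students
    print(stats.ttest_ind(frac_white_of_males[is_top10],
                          frac_white_of_males[~is_top10]))
    print(stats.ttest_ind(frac_males_of_all[is_top10],
                          frac_males_of_all[~is_top10]))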

Means for variables, faculty: As ranked by library science faculty, the only significant difference (p < 0.05) between the “top 10” and the other doctoral programs is “Total expenditure for continuing education,” which is higher at the “top 10” schools (see Table 3).

Rank correlations: Table 4 shows the variables for which there was a significant correlation between the rank of each school as given by administrators or faculty and the school’s rank on its reported value for the variable. These results are consistent with the t-tests. Administrators’ rankings are negatively correlated with the extent to which schools provide assistantships to women; their high rankings correlate positively with high out-of-state tuition, the number of sabbaticals awarded, and average salary increases. High faculty rankings are associated with the number of part-time faculty, sabbaticals, and the cost of out-of-state tuition.

Conclusion

A statistical difference doesn’t tell us about cause and effect. Do male students go to the “top 10” programs because they are of high quality, or do library administrators (mostly male) rate programs higher if they have larger numbers of male students? Do higher quality programs get more financial support because of their quality, or does the visibility that goes with having financial support build a school’s reputation? The scale of a continuing education program, the cost of tuition, and the number of part-time faculty are related to income; schools with larger incomes hire more part-time faculty, who often teach in continuing education.

TABLE 3: VARIABLES WITH SIGNIFICANTLY DIFFERENT MEANS FOR “TOP 10” AND “OTHER” SCHOOLS, FACULTY RATINGS

Variable                                     Top 10 mean/S.D.    Other schools mean/S.D.   p
Total expenditure for continuing education   $162,707/107,726    $32,437/32,779            .01

TABLE 4: VARIABLES CORRELATED WITH RANKINGS OF SCHOOLS*

Variable                               Correlation with      Correlation with
                                       faculty rankings      administrator rankings
Expenditure for continuing education   -.5063 (sig=.013)     -.4414 (sig=.029)
Total expenditures                     -.2089 (sig=.188)     -.3974 (sig=.041)
Number of part-time faculty            -.4549 (sig=.022)     -.3136 (sig=.089)
Average salary increase                -.2977 (sig=.095)     -.4046 (sig=.034)
Number of sabbaticals                  -.5367 (sig=.020)     -.4742 (sig=.037)
Number of assistantships to women       .1286 (sig=.318)      .4400 (sig=.044)
$ amount of assistantships to women     .0529 (sig=.423)      .6060 (sig=.006)
Out-of-state tuition                   -.4466 (sig=.048)     -.1878 (sig=.251)
Out-of-state tuition, per credit       -.7360 (sig=.001)     -.8185 (sig=.000)

*Variables with the strongest association with high rankings have negative coefficients here because the highest-ranked schools have the smallest rank numbers; i.e., the top-ranked school is ranked 1, the second 2, and so on.

This study shows that a higher reputation is related to the presence of males and higher income and expenditures. Money and sex matter in doctoral education in library science, just as they do in most other aspects of American life. The finding is somewhat depressing, given the numerical predominance of females in librarianship. But it is predictable, given that females in the profession are paid less and do not hold a share of management positions proportionate to their numbers. (White did not report the gender breakdown of respondents to his survey.)

An important finding is that most of the variables studied are not associated with reputation; however, some of these are the variables that university administrators attend to most. Such variables as enrollment size, degrees awarded, numbers of courses listed and numbers actually taught, and ethnic mix of student population apparently are not related to reputation. This finding suggests that those librarians and faculty who did the ranking may be out of touch with the variables that matter to those who do program reviews, a possibility reinforced by the fact that programs considered prestigious by librarians often have not been able to persuade those outside the library community of their quality.

Even though this study found only a few variables in the ALISE Report that are associated with reputation rankings, it is quite likely there are objective correlates of quality and of reputation. While the reputation studies may not be serving us well, still more research on what constitutes quality is in order. Other studies are certainly needed, particularly if library education programs are to convince reviewers external to the profession of the merit of library science. A study similar to this one, but focusing on master’s programs, is in progress. ALISE ought to review its statistical report to determine whether all the appropriate data are being collected. In particular, we need studies of graduates. Assessment of the quality of a program ought to include accounts of the competences, successes, and failures of those whom the program attempted to educate. We also need studies of the quality of the scholarship contributed by faculty. The discussion of what constitutes quality in library education is by no means complete.

Author's note: Dr. Karen Ruddy provided valuable assistance in the preparation and analysis of data for this study.

Mentors and protégés needed for pilot

The ACRL Research Committee is seeking volunteers to serve as mentors for beginning researchers as well as seeking to identify beginning researchers who wish to be mentored. As part of a pilot project, the mentoring process will be conducted by means of BITNET electronic conferencing. A program will be held at the Atlanta conference on Sunday, June 30 (2:00-5:30 p.m.) to introduce both the beginning researchers and the mentors to BITNET conferencing and to each other.

Six groups will be formed around research areas: bibliographic control, understanding the user, collection management, scholarly communication, expert systems, and library effectiveness. The pilot project will include 60 beginning researchers and 12-18 mentors, divided among the six research areas listed above. It will last a year, from the Atlanta program until the ALA annual conference in San Francisco. At the San Francisco meeting, the project will be evaluated. A continuing mentoring program will be organized at that time if the evaluation is positive.

To participate in the pilot project as either a mentor or a protégé, you must have access to BITNET and be willing to attend the Atlanta program in order to obtain the necessary password and conferencing manual, which will be distributed at that time.

If you are interested in serving as a mentor, please send a letter containing information about your research interest and background to: Vicki L. Gregory, School of Library and Information Science, HMS 301, University of South Florida, 4202 E. Fowler Avenue, Tampa, FL 33620. Please respond by April 2, 1991, at the latest.

If you are a beginning researcher, or want to do research in an area which is new to you, and you wish to be mentored, please send your name, address, and daytime phone number by May 1, 1991, to: Michael Sullivan, Head, Physical Science and Technology Libraries, 8251 Boelter Hall, UCLA, Los Angeles, CA 90024. Indicate your first and second choices among the following areas in which you might be mentored: bibliographic control, collection management, expert systems, library effectiveness, scholarly communication, or understanding the user.

Assignments will be based upon the order in which complete information is received. By late May or early June, both mentors and protégés will be notified of their group assignment and of the names of the other members of the group.

Notes

  1. J. Periam Danton, “Notes on the Evaluation of Library Schools,” Journal of Education for Library and Information Science 24 (Fall 1983): 106-16.
  2. Peter M. Blau and Rebecca Zames Margulies, “The Reputations of American Professional Schools,” Change 6 (Winter 1974/75): 42-47; Ray L. Carpenter and Patricia A. Carpenter, “The Doctorate in Librarianship and an Assessment of Graduate Library Education,” Journal of Education for Librarianship 11 (Summer 1970): 3-45; Herbert S. White, “Perceptions by Educators and Administrators of the Rankings of Library School Programs,” College & Research Libraries 42 (May 1981): 191-202; Herbert S. White, “Perceptions by Educators and Administrators of the Rankings of Library School Programs: An Update and Analysis,” Library Quarterly 57 (July 1987): 262-68.
  3. Abraham Bookstein and Mary Biggs, “Rating Higher Education Programs: The Case of the 1986 White Survey,” Library Quarterly 57 (October 1987): 351-99.
  4. David S. Webster, “Methods of Assessing Quality,” Change 13 (October 1981): 20-24.
  5. Jean-Pierre Herubel, “Elitism and Library Faculty,” College & Research Libraries News 51 (May 1990): 398-401.
  6. Robert M. Hayes, “Citation Statistics as a Measure of Faculty Research Productivity,” Journal of Education for Librarianship 23 (Winter 1983): 151-72.
  7. Chih Wang and Benjamin H. Layne, “Relationship Between Perception Ranking and Number of Graduates: An Analysis of the White Survey,” Journal of Education for Library and Information Science 28 (Fall 1987): 116-22.
  8. Kathleen Garland and Galen Rike, “Scholarly Productivity of Faculty at ALA-Accredited Programs of Library and Information Science,” Journal of Education for Library and Information Science 28 (Fall 1987): 87-98.
  9. Richard Hart, Timothy Carsten, Michael Lacroix, and K. Randall May, “Funded and Non-Funded Research: Characteristics of Authorship and Patterns of Collaboration in the 1986 Library and Information Science Literature,” Library and Information Science Research 12 (1990): 71-86.
  10. Danny P. Wallace, “The Most Productive Faculty,” Library Journal 115 (May 1, 1990): 61-63.
  11. Association for Library and Information Science Education, Library and Information Science Education Statistical Report (State College, PA: ALISE, 1988).