DORA: San Francisco Declaration on Research Assessment (May 2013)

Marta Bladek


Academic institutions increasingly rely on bibliometric measures to assess the quality of their faculty’s research output. Tenure and promotion deliberations, as well as funding decisions, often invoke the Journal Impact Factor (JIF), the h-index, citation counts, and a growing array of newly developed performance metrics. Similarly, funding agencies use available bibliometric measures to identify grant-worthy research proposals and ongoing projects. While proponents of metrics insist that these measures impartially capture the quality of research, their opponents point out that they are not reliable tools for evaluating scholarly output.

In the past few years, the ongoing debate on metrics-driven research assessment has gained momentum. In particular, the JIF—the most influential metric by far—has come under fire in the scientific community. Perhaps the most vocal and cross-disciplinary critique was formulated during the December 2012 meeting of the American Society for Cell Biology (ASCB). The critique and manifesto have become known as DORA: San Francisco Declaration on Research Assessment.1 The DORA declaration calls for placing less emphasis on publication metrics and becoming more inclusive of non-article outputs.

The 82 original organizational signatories of DORA included ASCB and other scientific societies from around the world. Editorial boards of well-known journals, prestigious research institutes and foundations, and providers of new metrics (Altmetric LLP and Impactstory, both promoting the use of altmetrics) lent their support as well. As of late January 2014, DORA had more than 400 organizational supporters, and more than 10,000 individuals had signed the declaration.2 Since DORA was issued, its critique and recommendations have been discussed in scientific journals3 and blogs,4 and on academic portals such as The Chronicle of Higher Education.5 The debates around research assessment have even been brought to the general public’s attention in The Guardian6 and, most recently, The Atlantic.7

DORA’s call for new and improved research assessment tools singles out the JIF as the deeply flawed—yet disproportionately important—journal-based metric that has come to dominate decisions about hiring, promotion, and funding. Eugene Garfield, the scientist who formulated the algorithm for calculating the JIF, started to explore the idea in 1955.8 The formula was applied to determine which journals should be included in the first Science Citation Index, published by the Institute for Scientific Information for the year 1961.9 Thomson Reuters, the company that has issued the Journal Citation Reports (an annual ranking of journals) since 1975, calculates the JIF by dividing the number of citations made in a given year to items a journal published in the previous two years by the total number of articles and reviews the journal published in those two years.10 The formula, then, measures how many times, on average, a recent article from the journal was cited in a given year.
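Stated compactly, the calculation described above takes the following form (a sketch only; the choice of 2012 as the example year is illustrative and not specified in the text):

% JIF for an example year (here 2012): citations received in 2012 to items the journal
% published in 2010 and 2011, divided by the number of articles and reviews it
% published in 2010 and 2011.
\[
\mathrm{JIF}_{2012} =
  \frac{\text{citations in 2012 to items published in 2010 and 2011}}
       {\text{number of articles and reviews published in 2010 and 2011}}
\]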

Although the impact factor was originally meant to identify influential journals only, with time, it has come to be interpreted as a measure of author and article impact, as well. A researcher’s tenure and promotion often depend on his or her publication metrics. Similarly, grant applicants are under pressure to demonstrate their scientific productivity by publishing their work in high-impact journals.

Critiquing the unwarranted reliance on the JIF as an indicator of an article’s or researcher’s importance, the proponents of alternative methods of research assessment point to further characteristics that make the JIF an inadequate evaluation tool. DORA briefly lists a few of them, but they merit a closer look.

To start, it has been established that the distribution of citations is deeply skewed, a phenomenon reflected in the 80/20 rule: just 20% of articles receive 80% of the citations.11 In other words, the JIF is not representative of the impact of individual articles; an article published in a high-impact journal shouldn’t automatically be assumed to be of great importance or quality.
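To make the skew concrete, consider a hypothetical journal (the figures below are illustrative and do not come from the declaration or the cited studies): suppose it has a JIF of 5 based on 200 citable items, and therefore roughly 1,000 citations in the JIF window. Under an 80/20 split, the most-cited 40 articles average 20 citations each, while the remaining 160 average only 1.25, far below the headline figure of 5.

% Illustrative arithmetic only (hypothetical journal: 200 citable items, roughly 1,000 citations).
\[
\frac{0.8 \times 1000}{0.2 \times 200} = 20
\qquad \text{versus} \qquad
\frac{0.2 \times 1000}{0.8 \times 200} = 1.25
\]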

Moreover, the JIF differs greatly from one field to another, a fact that makes cross-disciplinary comparisons misleading.12 For example, the 2004 weighted impact factor for mathematics journals was 0.56; for molecular and cell biology it was eight times as high, 4.76.13 These differences have to do with varied citation practices across fields, discipline-dependent lag times between publication and citation, and the discipline-specific number of citations an average article includes.14 Furthermore, the JIF lends itself to manipulation, a weakness of which numerous journals have taken advantage. To inflate their rankings, journals may resort to practices such as coercive self-citation, where authors are pressured to include citations to the journal in which their article is to be published.15 The release of the Journal Citation Reports for the year 2012 was accompanied by a list of 65 titles suppressed for “anomalous citation patterns resulting in a significant distortion of the Journal Impact Factor, so that the rank does not accurately reflect the journal’s citation performance in the literature.”16 The gaming of the JIF may be monitored but not prevented.

In light of the above limitations, DORA puts forward a set of recommendations for the scientific community. To decrease reliance on journal-based metrics, DORA asks that members of the scientific community commit to reformulating their definitions of research quality. Criteria for hiring, tenure, and promotion set by academic institutions should stress the content rather than the venue of publication. Additionally, institutions and funding agencies alike are urged to consider research outputs other than articles; if varied forms of research output were considered, the measurement of research impact would no longer be confined to publication and citation metrics. Publishers, in turn, must take action to minimize the prominence of the JIF: it should not be emphasized in journal marketing or, if it is used, it should be presented as merely one of many available journal-based metrics. Moreover, articles should not be subject to limits on the number of references, and authors should be required to cite primary research rather than reviews.

Metrics providers have a role to play as well. They should make their data and methods transparent and available to the public, and they should be vocal in discouraging the abuse and manipulation of metrics. Institutional and organizational efforts to move away from the reliance on publication-level metrics are not sufficient, DORA argues. As members of groups involved in hiring, tenure, promotion, and funding decisions, scholars should expose the limitations of journal-based metrics and advocate alternative methods of research assessment. As candidates for tenure or promotion and as applicants for funding, researchers should represent the quality of their work through a range of metrics rather than rely on publication-level metrics alone. Furthermore, as authors, scholars should cite primary research over reviews to promote original scholarship.

Given these limitations, DORA’s message only gains urgency. If “scientific output is [to be] measured accurately and evaluated wisely,” current assessment practices must be modified and supplanted by new tools that account for—rather than overlook—the complexity and variety of research outputs.17 The overdependence on the JIF and other publication metrics, DORA signatories well realize, can be effectively challenged only through a concerted effort of the entire scientific community, including researchers, institutions, and funding agencies. The petition identifies the pitfalls of an uncritical reliance on existing assessment criteria and outlines steps that should be taken to lessen it; ultimately, however, a shift in research evaluation methods will take place only if the scientific community takes action and adopts tools other than journal-based metrics.

Academic librarians are well positioned to promote DORA’s call to expand research assessment beyond the JIF. First, it is crucial that faculty and personnel committees be well informed about the limitations of bibliometrics. Accordingly, it is not enough that many libraries provide access to Thomson Reuters products, such as the Web of Science and Journal Citation Reports. To encourage a judicious use of the metrics these and other databases collect, librarians should ensure that information about their strengths and weaknesses is easily available.

The University of Michigan Library offers a useful example of how such a task can be accomplished. A group of librarians put together a comprehensive and well-organized Citation Analysis Guide18 discussing the JIF and other measures in depth. My colleague Kathleen Collins and I created a similar guide for the John Jay College community.19

As DORA points out, however, being knowledgeable about the limitations of the JIF and other metrics is not enough. If assessment practices are to change, new tools need to be promoted. To that end, librarians may also endeavor to keep abreast of new developments in the field. For example, we continue to update our guide with emerging assessment trends. Accordingly, our guide invites faculty to consider altmetrics and alerts them to groundbreaking initiatives, such as the Faculty Media Impact Project.20 In addition to the online guide, we have disseminated information about these assessment tools through a number of venues on campus. We offered workshops in the library, at the Center for the Advancement of Teaching, and in partnership with the Office of Institutional Research. All were well attended, and participants assured us that the information presented was useful. With these and related initiatives, academic librarians can actively contribute to the debates around research assessment and further the cause of DORA.


Contact series editors Zach Coble, digital scholarship specialist at New York University, and Adrian Ho, director of digital scholarship at the University of Kentucky Libraries, with article ideas.

Notes
1. American Society for Cell Biology, “San Francisco Declaration on Research Assessment,” accessed January 31, 2014.
2. Ibid.
3. Bruce Alberts, “Impact Factor Distortions”; Colin Macilwain, “Halt the Avalanche of Performance Metrics,” Nature 500, no. 7462 (2013): 255.
4. B. Fister, “Library Babel Fish: End Robo-Research Assessment,” accessed January 31, 2014.
5. P. Basken, “Researchers and Scientific Groups Make New Push against Impact Factors,” accessed January 31, 2014.
6. R. Schekman, “How Journals like Nature, Cell and Science Are Damaging Science,” accessed January 31, 2014.
7. H. J. Warraich, “Impact Factor and the Future of Medical Journals,” accessed January 31, 2014.
8. E. Garfield, “The History and Meaning of the Journal Impact Factor,” JAMA: The Journal of the American Medical Association 295, no. 1 (2006): 90–93.
9. Ibid.
10. Thomson Reuters, “The Thomson Reuters Impact Factor,” n.d., accessed January 31, 2014.
11. Garfield, “The History and Meaning of the Journal Impact Factor”; David A. Pendlebury, “The Use and Misuse of Journal Metrics and Other Citation Indicators,” Archivum Immunologiae et Therapiae Experimentalis 57, no. 1 (2009): 1–11.
12. S. D. Jarwal, A. M. Brion, and M. L. King, “Measuring Research Quality Using the Journal Impact Factor, Citations and ‘Ranked Journals’: Blunt Instruments or Inspired Metrics?” Journal of Higher Education Policy and Management 31, no. 4 (2009): 289–300.
13. B. M. Althouse, J. D. West, C. T. Bergstrom, and T. Bergstrom, “Differences in Impact Factor across Fields and over Time,” Journal of the American Society for Information Science and Technology 60, no. 1 (2009): 27–34.
14. Ibid.; P. O. Seglen, “Why the Impact Factor of Journals Should Not Be Used for Evaluating Research,” BMJ: British Medical Journal 314, no. 7079 (1997): 498.
15. A. W. Wilhite and E. A. Fong, “Coercive Citation in Academic Publishing,” Science 335, no. 6068 (2012): 542–543.
16. Thomson Reuters, “Journal Citation Reports Notices®,” last modified September 27, 2013, accessed January 31, 2014.
17. American Society for Cell Biology, “San Francisco Declaration on Research Assessment.”
18. University of Michigan Library, “Citation Analysis Guide,” last modified February 6, 2014, accessed February 13, 2014.
19. Lloyd Sealy Library, “Faculty Scholarship Resources,” last modified November 26, 2013, accessed February 13, 2014.
20. Center for a Public Anthropology, “Faculty Media Impact Project,” n.d., accessed February 13, 2014.
Copyright © 2014 Marta Bladek
