By James King, Information Architect, NIH Library (Washington DC Chapter, Government Information Division)
Best Practices for Government Libraries is a collaborative document published annually on a specific topic of interest to government libraries, with content submitted by government librarians and community leaders with an interest in government libraries. The 2011 edition includes over 70 articles and other submissions from more than 60 contributors, including librarians in government agencies, courts, and the military, as well as professional association leaders. Best Practices is edited by Marie Kaddell, Senior Information Professional Consultant and SLA DGI Chair. If you did not write for this year’s Best Practices, Marie invites you to submit a guest post for the Government Info Pro (email@example.com).
The information in this article does not necessarily reflect the opinions of the National Institutes of Health. Any mention of a product or company name is for clarification and does not constitute an endorsement by NIH or the NIH Library.
When Eugene Garfield envisioned the citation index in 1955, he wanted to improve information retrieval by showing relationships between articles based upon their citation and reference history. A potential side benefit of the index was to monitor the growth and structure of scientific knowledge, but neither the corpus of published data nor sufficient computing power was readily available to effectively do so.
This benefit is now within our grasp, due primarily to the work of large-scale indexes like Thomson Reuters’ Web of Science, Elsevier’s Scopus, and the National Library of Medicine’s (NLM) PubMed. In addition, two factors have converged to create a strong need for bibliometrics. First, scientific knowledge has continued to grow and become more specialized, making it harder for a small group of experts to review research proposals without objective measures and forcing an even greater reliance on computerized methodologies. Second, at the same time that science has become solidly global and collaborative in nature, the pools of research funding around the world have been shrinking. This has increased competition for scarce funds and put additional pressure on funding organizations to show the value of their research expenditures.
A recent large-scale example of how bibliometrics affected science was the 2005 Department of Defense (DOD) Base Realignment and Closure (BRAC) review process, which required all DOD research groups to submit aggregated publication and citation counts for articles written by their researchers during a two-year period. These counts were then included in deliberations about which military bases and research labs to close, which to combine, and which to move. U.S. military libraries around the world scrambled to help their military labs respond to these critical data analyses, demonstrating how information professionals could play a role in defining and defending the value of the research organizations in which they serve.
I believe information professionals are in an ideal position to develop a set of valuable services that define and defend the organization’s value. Doing this effectively requires an understanding of the scientific and business needs of the organization, agreement on the organization’s preferred measures of success, a clear understanding of the strengths and weaknesses of the various measures available (including the algorithms that underlie them), and a clear understanding of how those metrics are best applied.
An example of how libraries can utilize existing tools to create useful evaluative reports for stakeholders comes from the Walter Reed Army Institute of Research (WRAIR) Library. Library staff compiled the number of publications produced and the number of high-impact papers (from the top 50 percent to the top 0.01 percent) published by each WRAIR researcher, plus the number of citations of each researcher’s works. These measures were entered into WRAIR’s balanced scorecard, a strategic planning and management system that provides a framework of financial and performance-based measurements tied to the vision and strategy of the organization. By comparing the output and average citation count of Army research publications in a discipline or on a topic, such as malaria vaccine or drug research, to the total output in that discipline, the library is also able to show the impact of its researchers on areas of interest to stakeholders.
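To make that kind of per-researcher rollup concrete, here is a minimal sketch. It assumes publication records have already been exported from a citation index; the record fields, author names, and percentile values are hypothetical, not an actual Web of Science or Scopus schema.

```python
from collections import defaultdict

# Hypothetical publication records as they might be exported from a
# citation index; field names are illustrative, not a real schema.
publications = [
    {"author": "Smith", "citations": 120, "field_percentile": 0.5},
    {"author": "Smith", "citations": 8, "field_percentile": 42.0},
    {"author": "Jones", "citations": 310, "field_percentile": 0.01},
]

# Percentile bands from the WRAIR example: top 50 percent down to the
# top 0.01 percent of papers in a field (lower percentile = higher impact).
BANDS = [50.0, 10.0, 1.0, 0.1, 0.01]

report = defaultdict(lambda: {"papers": 0, "citations": 0,
                              "high_impact": {b: 0 for b in BANDS}})

for pub in publications:
    row = report[pub["author"]]
    row["papers"] += 1
    row["citations"] += pub["citations"]
    # A paper counts toward every band whose threshold it meets.
    for band in BANDS:
        if pub["field_percentile"] <= band:
            row["high_impact"][band] += 1

for author, row in sorted(report.items()):
    print(author, row["papers"], row["citations"], row["high_impact"])
```

The same per-researcher totals could then be entered into a balanced scorecard or compared against whole-discipline output pulled from the index.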
Over the past several years, the Naval Research Laboratory (NRL) has been working on another approach to creating useful metrics. By identifying and capturing the metadata of all journal articles, conference proceedings, book chapters, U.S. patents, and technical reports written by NRL researchers and engineers, the laboratory can automatically create a number of useful reports. Examples of these reports include a “bragging list” of the 25 most frequently cited NRL papers of all time, the journals in which NRL papers are most often published, and—with some analysis by a third party—the patents that have cited NRL work. This effort came out of a mandate from the NRL director of research requiring all scientific promotion candidates to submit a publication list with citation counts generated by the research library.
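A sketch of how two of those reports might be generated from such a metadata store follows; the records and field names are illustrative stand-ins for the library’s internal publications database.

```python
from collections import Counter

# Illustrative metadata records for NRL-authored items; in practice
# these would come from the library's internal database.
records = [
    {"title": "Paper A", "journal": "Applied Optics", "citations": 412},
    {"title": "Paper B", "journal": "Physical Review B", "citations": 97},
    {"title": "Paper C", "journal": "Applied Optics", "citations": 55},
]

# "Bragging list": the 25 most frequently cited papers of all time.
top_cited = sorted(records, key=lambda r: r["citations"], reverse=True)[:25]

# Journals in which papers are most often published.
journal_counts = Counter(r["journal"] for r in records)

for rec in top_cited:
    print(f'{rec["citations"]:>5}  {rec["title"]}')
for journal, n in journal_counts.most_common(10):
    print(f"{n:>3}  {journal}")
```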
A number of other U.S. government agencies, such as the National Aeronautics and Space Administration (NASA), have also pursued the creation of internal databases of all agency-produced materials. In a similar vein, the National Institutes of Health (NIH) requires that all publications resulting from NIH grant funding be deposited into NLM’s PubMed Central database. Dr. Elias Zerhouni, the former director of NIH, pushed for this mandate specifically so that NIH could have a tool to measure research productivity.
In pursuing an effort like this, it is critical to know what to measure and which measures will be of value. The WRAIR example relied upon the institute’s balanced scorecard to tie metrics to the strategic plan, while the NRL tied its effort to a research mandate.
The Bernard Becker Medical Library at Washington University in St. Louis provides a strong model that libraries can use to assess the impact of research. Though focused on biomedical research, it can easily be applied to any research setting. The model highlights five key areas to explore when measuring research impact (a simple record sketch follows the list):
- Research output – counting how many publications were made and tracking the various outputs;
- Knowledge transfer – determining if the research was referenced or reused, including counting the number of references to those publications;
- Clinical implementation – identifying whether the research was applied to practice (e.g., used in a patent or a medical protocol);
- Community benefit – assessing whether the research made a difference in efficiency, effectiveness, or quality of life where it was applied; and
- Policy enactment – evaluating the research’s impact on laws, policies, and regulations in the pertinent sphere of influence.
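Here is the record sketch mentioned above: one way a library might structure the tracking of a single publication against these five areas. The mapping of fields to areas is one possible interpretation, not part of the Becker model itself.

```python
from dataclasses import dataclass, field

# A hypothetical record for tracking one publication against the Becker
# model's five impact areas; the field layout is an assumption.
@dataclass
class ImpactRecord:
    title: str
    outputs: list = field(default_factory=list)       # research output
    citation_count: int = 0                           # knowledge transfer
    clinical_uses: list = field(default_factory=list) # clinical implementation
    community_notes: str = ""                         # community benefit
    policy_refs: list = field(default_factory=list)   # policy enactment

record = ImpactRecord(title="Malaria vaccine trial results")
record.citation_count = 42
record.policy_refs.append("Treatment guideline, 2010 revision")
```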
Some organizations, such as NIH, have also been fortunate enough to have the resources to work with index providers to create robust, customized views of their data. One NIH-hosted service that uses customized data is Research Portfolio Online Reporting Tools (RePORT), which is designed to support the extramural research community by providing per-year data on grants as well as disease portfolios. This service allows users to search a repository of intramural and extramural NIH-funded research projects from the past 25 years and access publications (since 1985) and patents resulting from NIH funding. Search results can include the research project number, project title, contact information for the principal investigator, name of the performing organization, fiscal year of funding, NIH administering and funding Institutes and Centers (IC), and the total fiscal year funding provided by each IC.
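For illustration, a minimal sketch of filtering that kind of project data follows. It assumes a flat CSV export with columns named after the fields listed above; the file name and column names are assumptions, not the actual RePORT export schema.

```python
import csv

# Load a hypothetical export of project records; column names below
# are illustrative, not the real RePORT schema.
with open("report_projects.csv", newline="", encoding="utf-8") as f:
    projects = list(csv.DictReader(f))

# Narrow a disease portfolio to one fiscal year and topic.
malaria_fy2010 = [
    p for p in projects
    if p["fiscal_year"] == "2010" and "malaria" in p["project_title"].lower()
]

for p in malaria_fy2010:
    print(p["project_number"], p["principal_investigator"], p["organization"])
```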
A second NIH-hosted service, the Electronic Scientific Portfolio Assistant (eSPA), helps the intramural community evaluate the outcomes (including outputs and impact) of NIH funding. It is primarily focused on helping review and analyze portfolios of research projects for program planning and evaluation. By combining research funding with publications, custom portfolios of research can be created to help program managers and administrators track and evaluate their research.
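The following sketch shows the kind of join such a tool performs, combining grant funding records with publication records by project number to build a simple portfolio rollup; both record sets and their field names are hypothetical.

```python
# Hypothetical funding records keyed by project number.
funding = {
    "R01AI000001": {"pi": "Jones", "fy2010_total": 450_000},
    "R01AI000002": {"pi": "Smith", "fy2010_total": 275_000},
}
# Hypothetical publications, each tagged with its funding project.
publications = [
    {"pmid": "20000001", "project": "R01AI000001", "citations": 14},
    {"pmid": "20000002", "project": "R01AI000001", "citations": 3},
]

# Join publications to grants to build a per-project portfolio view.
portfolio = {}
for pub in publications:
    grant = funding.get(pub["project"])
    if grant is None:
        continue  # publication not linked to a tracked grant
    entry = portfolio.setdefault(
        pub["project"], {"pi": grant["pi"], "papers": 0, "citations": 0})
    entry["papers"] += 1
    entry["citations"] += pub["citations"]

for project, row in portfolio.items():
    print(project, row)
```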
The NIH Library has recently engaged the RePORT and eSPA groups, as well as other groups across NIH, to encourage the addition of bibliometric measures and more researcher-focused reporting in their tools.
Dr. Garfield’s vision was to explore the relationships and networks of scientists, so he turned to publications, which remain one of the richest sources of relationship data through co-authorships, references, and citations. A natural step in this evolution, and personally one of the most intriguing development efforts to date in this area, is an NIH-funded effort to develop a national network of scientists built upon the initial work of Cornell University. This effort, dubbed VIVO (vivoweb.org), is an open source semantic Web project being built by libraries, and it has the potential to change the way researchers collaborate by enabling the discovery of research and scholarship across disciplines.
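Because VIVO exposes its data as RDF, those relationship networks can be queried directly. The sketch below asks a VIVO instance’s SPARQL endpoint for a researcher’s co-authors; the endpoint URL is hypothetical, and the ontology properties follow the VIVO 1.x core vocabulary but vary between versions, so treat them as assumptions.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical VIVO SPARQL endpoint; substitute a real instance URL.
sparql = SPARQLWrapper("http://vivo.example.edu/sparql")
sparql.setQuery("""
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?coauthorName WHERE {
  ?author rdfs:label "Smith, Jane" .
  ?authorship vivo:linkedAuthor ?author ;
              vivo:linkedInformationResource ?paper .
  ?coauthorship vivo:linkedInformationResource ?paper ;
                vivo:linkedAuthor ?coauthor .
  ?coauthor rdfs:label ?coauthorName .
  FILTER (?coauthor != ?author)
}
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["coauthorName"]["value"])
```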
Well-placed information services and resources that specifically meet the needs of our community will continue to make the difference between success and failure, even life and death. However, as distribution costs in the digital world approach zero, we must be willing to rethink the traditional view of the library as a place and the traditional services that have been offered. Will today’s information professionals be brave enough to critically evaluate the current slate of services and reduce what is no longer of value in order to free time for new services like the ones described? I believe that exploring new roles like the one described in this article has the potential to open new doors in the organization and to apply our expertise in new ways. If we as a profession are to continue to be relevant in this era, we need to be willing to take risks.
Note: The author wishes to thank Gali Halevi, account development manager for Elsevier, who provided tremendous support in the creation of this program and in the writing of this article.
James King is an information architect at the National Institutes of Health (NIH) in Bethesda, Maryland, working for the NIH Library in the Office of Research Services. He is the immediate past president of SLA’s Washington, D.C., Chapter and now serves as the chapter’s Webmaster and as convener of the association’s Information Futurist Caucus. He recently helped Gali Halevi of Elsevier to coordinate a one-day seminar, Impact and Productivity Measurements in a Changing Research Environment, at which speakers shared their perspectives on various research metrics. The presentations from the seminar, which was hosted by Elsevier, are available free online at http://rainingdesk.elsevier.com/bibliometrics2010.