FYI: Science Policy News

Measuring Basic Research: New Academy Report

FEB 19, 1999

Since its passage in 1993, the Government Performance and Results Act (GPRA) has held out a challenge to federal science agencies: how can the performance of basic research be evaluated? In order to promote effective management and accountability, the act requires federal agencies to identify strategic goals, develop performance plans translating those goals into annual targets, and produce annual performance reports assessing whether the targets were met. The first performance reports will be due in March 2000. Yet performers of federally-funded research are still struggling to determine what constitutes an appropriate measure of progress for basic research.

Previous efforts have been made to address this issue (see FYIs #106 and #140, 1996). At a February 17 press briefing, the National Academies of Sciences and Engineering and the Institute of Medicine released a new report by their Committee on Science, Engineering, and Public Policy (COSEPUP) that contributes to the dialogue. The report, “Evaluating Federal Research Programs: Research and the Government Performance and Results Act,” recommends an evaluation method “to ensure that the basic research programs that the nation funds generate the kinds of knowledge that have given us great practical benefits in the past.” House Science Committee Chairman James Sensenbrenner (R-WI) is already on record saying “I fully support the findings in this report.”

Among scientists and policy-makers, there is general agreement that while the progress of applied research toward desired outcomes can be measured at regular intervals, such an assessment would be meaningless for basic research. Because the time span from performance of the research to any practical result can be on the order of decades, and results are not necessarily predictable in advance, many feel there is no meaningful way to evaluate the performance of fundamental research on an annual basis. However, stated COSEPUP Chairman Phillip Griffiths, the committee does not subscribe to this “widespread myth.”

After reviewing a number of possible performance indicators, COSEPUP recommends what is essentially an enhanced, more comprehensive version of the familiar practice of peer review. The committee’s suggestion is a three-pronged method it calls “Expert Review.” In this process, bodies with appropriate expertise would evaluate the quality of a research program; its relevance to the agency’s mission; and, through international benchmarking, whether it is on the leading edge of research in its field. According to Griffiths, the committee feels that the commonly-used practice of peer review, which primarily assesses research quality, is by itself insufficient “to ensure that the most effective and appropriate kinds of research are being done at levels of international excellence.” The three indicators measured by Expert Review, the report states, “are good predictors of eventual usefulness” of the research, and can be used as effective planning and management tools by the agencies.

In its report, the committee raises concerns about the level of cross-agency coordination for areas of research that are performed by several agencies. While it finds that such pluralism is one of the major strengths of the U.S. research enterprise, it urges that each research field should have a lead agency responsible for encouraging cooperation, reducing duplication, and ensuring no important questions are overlooked. COSEPUP also advocates a higher profile in agencies’ strategic planning for development of human resources in science and engineering. Additionally, the report warns that misuse of the process or development of inappropriate performance measures could lead to “strongly negative results” for research. It encourages the science community to take a more active part in developing performance assessments, noting that “as a first step, they should become familiar with agency strategic and performance plans, which are available on the agencies’ web sites.” Excerpts from the report’s conclusions and recommendations are highlighted below:

CONCLUSION 1: Both basic and applied federal research programs “can be evaluated meaningfully on a regular basis.”
CONCLUSION 2: Federal agencies must use evaluation measures that are appropriate for, and “match the character of,” their research programs.
CONCLUSION 3: “The most effective means of evaluating federally funded research programs [both basic and applied] is expert review,” incorporating assessments of quality, relevance, and world leadership.
CONCLUSION 4: In their strategic and performance goals, "[a]gencies must pay increased attention to their human-resource requirements in terms of training and educating young scientists and engineers...”
CONCLUSION 5: Current federal “mechanisms for coordinating research programs in multiple agencies whose fields or subject matters overlap are insufficient.”
CONCLUSION 6: “The development of effective methods for evaluating and reporting performance requires the participation of the scientific and engineering community, whose members will necessarily be involved in expert review.”
RECOMMENDATION 1: Because meaningful measures of progress can be developed, “research programs should be described in strategic and performance plans and evaluated in performance reports.”
RECOMMENDATION 2: “For applied research programs, agencies should measure progress toward practical outcomes. For basic research programs, agencies should measure quality, relevance, and leadership.”
RECOMMENDATION 3: “Federal agencies should use expert review to assess the quality of research they support, the relevance of that research to their mission, and the leadership of that research.”
RECOMMENDATION 4: “Both research and mission agencies should describe in their strategic and performance plans the goal of developing and maintaining adequate human resources” in the relevant fields.
RECOMMENDATION 5: “Although GPRA is conducted agency-by-agency, a formal process should be established to identify and coordinate areas of research that are supported by multiple agencies. A lead agency should be identified for each field...”
RECOMMENDATION 6: “The science and engineering community can and should play an important role in GPRA implementation.”

“Few researchers are aware of GPRA, its objectives, and its mandates,” Griffiths warned. The committee, he said, believes “the law can become a tool of great value if working scientists understand its intention, help educate more people about their work, and lend their expertise to the development of more accurate evaluations.” The COSEPUP report, which runs 41 pages not including appendices, can be found on the National Academy web site at: http://www2.nas.edu/cosepup/ under “Publications.”
