1.
Identifying benchmark units for research management and evaluation. Scientometrics 2022. [DOI: 10.1007/s11192-022-04413-7]
Abstract
While normalized bibliometric indicators are expected to resolve the subject-field differences between organizations in research evaluations, the identification of reference organizations working on similar research topics is still of importance. Research organizations, policymakers and research funders tend to use benchmark units as points of comparison for a certain research unit in order to understand and monitor its development and performance. In addition, benchmark organizations can also be used to pinpoint potential collaboration partners or competitors. Therefore, methods for identifying benchmark research units are of practical significance. Even so, few studies have further explored this problem. This study aims to propose a bibliometric approach for the identification of benchmark units. We define an appropriate benchmark as a well-connected research environment, in which researchers investigate similar topics and publish a similar number of publications compared to a given research organization during the same period. Four essential attributes for the evaluation of benchmarks are research topics, output, connectedness, and scientific impact. We apply this strategy to two research organizations in Sweden and examine the effectiveness of the proposed method. Identified benchmark units are evaluated by examining the research similarity and the robustness of various measures of connectivity.
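The abstract describes ranking candidate benchmarks by similarity of research topics and publication output. A minimal sketch of that idea, assuming cosine similarity over topic-frequency vectors and a min/max ratio for output (the weights, vectors, and organization names below are illustrative assumptions, not the authors' actual method or data):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two topic-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def benchmark_score(focal, candidate, w_topic=0.5, w_output=0.5):
    """Score a candidate benchmark against a focal organization.

    Combines topic similarity (cosine over topic vectors) with
    output similarity (ratio of the smaller to the larger
    publication count); both components lie in [0, 1].
    """
    topic_sim = cosine_similarity(focal["topics"], candidate["topics"])
    p_f, p_c = focal["pubs"], candidate["pubs"]
    output_sim = min(p_f, p_c) / max(p_f, p_c)
    return w_topic * topic_sim + w_output * output_sim

# Hypothetical focal organization and two candidates.
focal = {"topics": [10, 5, 0, 2], "pubs": 300}
candidates = {
    "Org A": {"topics": [8, 6, 1, 2], "pubs": 280},   # similar profile
    "Org B": {"topics": [0, 1, 9, 7], "pubs": 300},   # different topics
}
ranked = sorted(candidates,
                key=lambda k: benchmark_score(focal, candidates[k]),
                reverse=True)
```

A fuller implementation would also fold in the paper's other two attributes, connectedness and scientific impact, as additional weighted components.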
2.
Bi H. Benchmarking the international compulsory education performance of 65 countries and economies. Benchmarking: An International Journal 2018. [DOI: 10.1108/bij-09-2016-0144]
Abstract
Purpose
The Program for International Student Assessment (PISA) measured 15-year-olds’ performance in mathematics, reading, and science. The purpose of this paper is to use the assessment results of PISA 2006, 2009, and 2012 to benchmark the compulsory education performance of 65 countries and economies with emphasis on two benchmarking steps: identifying benchmarks and determining performance gaps.
Design/methodology/approach
The authors use a multi-criterion and multi-period performance categorization method to identify a group of best performers as benchmarks. Then, the authors use two-sample t-tests to detect whether each country or economy has significant performance gaps relative to the benchmarks on individual performance measures.
Findings
Based on the mean scores of three assessment subjects in PISA 2006, 2009, and 2012, six best performers (Top-6) are identified from 65 participating countries and economies. In comparison with Top-6’s weighted averages, performance gaps are found for most countries and economies on the mean score of each subject, the percentage of top-performing students in all three subjects, and the percentage of lowest-performing students in each subject.
Originality/value
For compulsory education systems around the world, this paper provides an original categorization of performance based on the results of three PISA cycles, and provides new insights for countries and economies to prioritize improvement efforts to increase average performance, pursue excellence, and tackle low performance. For benchmarking applications involving multi-criterion and multi-period data, this paper presents a novel method of using statistical control charts to identify benchmarks and then using two-sample t-tests to determine performance gaps on individual performance measures.
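The gap-detection step above compares each country's scores against the benchmark group with a two-sample t-test. A minimal sketch using the Welch (unequal-variance) variant, assuming per-cycle mean scores as the two samples (the abstract does not specify the exact t-test form, and the score values below are invented for illustration):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

def has_gap(country_scores, benchmark_scores, t_crit=2.0):
    """Flag a performance gap when the country mean is significantly
    below the benchmark mean (t below a rough critical value)."""
    t, _ = welch_t(country_scores, benchmark_scores)
    return t < -t_crit

# Hypothetical mean scores over three PISA cycles.
benchmark = [545, 540, 550]   # weighted averages of the benchmark group
country_low = [480, 495, 470]  # clearly below the benchmark
country_close = [543, 541, 549]  # statistically indistinguishable
```

In practice one would look up the critical value from the t-distribution at the Welch degrees of freedom rather than use a fixed threshold; the fixed `t_crit=2.0` here is a simplifying assumption.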