Validating Research Performance Metrics Against Peer Rankings


Harnad, Stevan (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics, 8, (11)

Download

HTML - Accepted Version (48Kb)
PDF - Accepted Version (487Kb)
Other (RTF) - Accepted Version (4Mb)

Description/Abstract

A rich and diverse set of potential bibliometric and scientometric predictors of research performance quality and importance is emerging today, from the classic metrics (publication counts, journal impact factors and individual article/author citation counts) to promising new online metrics such as download counts, hub/authority scores and growth/decay chronometrics. In and of themselves, however, metrics are circular: they need to be jointly tested and validated against what it is that they purport to measure and predict, with each metric weighted according to its contribution to their joint predictive power. The natural criterion against which to validate metrics is expert evaluation by peers, and a unique opportunity to do this is offered by the 2008 UK Research Assessment Exercise, in which a full spectrum of metrics can be jointly tested, field by field, against peer rankings.
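The weighting scheme described in the abstract — each metric weighted by its contribution to joint predictive power against peer rankings — amounts to a multiple regression. A minimal sketch, using entirely synthetic data and illustrative metric names (citations, downloads, hub/authority score; none of these figures come from the paper):

```python
# Sketch (not the paper's actual analysis): jointly weighting several
# metrics to predict peer rankings via ordinary least squares.
# All data are synthetic; metric names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of assessed research units

# Hypothetical metric matrix: citation count, download count, hub/authority score
metrics = rng.normal(size=(n, 3))

# Synthetic peer rankings driven mostly by the first two metrics, plus noise
peer_rank = 2.0 * metrics[:, 0] + 1.0 * metrics[:, 1] + rng.normal(scale=0.5, size=n)

# Least-squares fit assigns each metric a weight reflecting its
# contribution to the joint prediction of the peer rankings
X = np.column_stack([metrics, np.ones(n)])  # metrics plus intercept column
weights, *_ = np.linalg.lstsq(X, peer_rank, rcond=None)

# R^2 measures the joint predictive power of the weighted combination
pred = X @ weights
r2 = 1 - np.sum((peer_rank - pred) ** 2) / np.sum((peer_rank - peer_rank.mean()) ** 2)
print(weights.round(2), round(r2, 3))
```

In this toy setup the fitted weights recover the metrics that actually drive the synthetic rankings, while the uninformative third metric receives a weight near zero — the circularity-breaking step the abstract argues for, with peer judgment as the external criterion.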

Item Type: Article
Related URLs:
Keywords: bibliometrics, citations, open access, research assessment
Divisions: Faculty of Physical Sciences and Engineering > Electronics and Computer Science > Web & Internet Science
ePrint ID: 265619
Date Deposited: 27 Apr 2008 17:44
Last Modified: 27 Mar 2014 20:10
Further Information: Google Scholar
URI: http://eprints.soton.ac.uk/id/eprint/265619

