What are the implications of the HEFCE metrics review for the next REF? It is easy to forget that the REF is already all about metrics of research performance.
The metrics review has reported and made very clear recommendations regarding the next Research Excellence Framework (REF). The review urges a cautious approach, suggests some ways in which further quantitative data can be used, and supports a continued place for peer review at the heart of the exercise.
These recommendations are important as we in HEFCE, and the other HE Funding Bodies, consider the future of the exercise. Notwithstanding the cost of the exercise, the evidence does not support the notion that metrics offer a ‘silver bullet’.
We are currently discussing the future REF informally across the sector, in advance of publishing a formal consultation in the autumn. The review is an important part of the evidence picture we have assembled, and now we need everyone to contribute to the debate and offer their ideas for the future. It is clear that some of the review’s recommendations concerning the future REF will need further work, especially the suggestion that we should consider some standardisation of the way quantitative data are used in impact case studies.
The review has other implications for the REF, though. While we think about the future, it is easy to forget that the REF is already all about metrics of research performance. While there is only limited use of quantitative data as an input to the exercise, the outputs of the exercise, the quality profiles, are themselves metrics of research performance. The exercise could be characterised as the use of expert judgement to develop a quantitative assessment of performance.
As with any metrics, and as recommended by the review, we need to take great care in how we use and interpret the results of the REF. The Funding Bodies only publish the results as profiles, which capture the full nuance of the assessment. There are many ways to ‘collapse’ the profiles into a single number – the Grade Point Average, calculations of research power, and even approaches that take into account the proportion of eligible staff submitted. But all of these attempts to describe performance with a single number inevitably simplify. The same number can represent vastly different profiles, and so very different performance.
The three elements of the assessment – outputs, impact and environment – also need to be considered separately to get an even more nuanced view of performance. Overall profiles can be made up in different ways from the three elements, and the balance between different elements reveals the particular strengths of institutions and units within them.
All of this means that using the REF results to make simple ‘X is better than Y’ comparisons is not necessarily responsible (to use the language of the metrics review). And if you want to use the results to separate departments or institutions into groups based on performance, great care is needed. It is essential to first determine the purpose of the analysis, and then consider how best to use the data from the profiles to address that purpose.
As with other metrics, the power of numbers to inform and mislead is great in equal measure.
This post was originally published on the LSE Impact Blog.