20 Feb 2024
by Phil Baty

8 aspects of responsible rankings


As someone who has spent most of my career working for an organisation that ranks universities, I may surprise some by saying clearly: university rankings are inherently crude, and there can be no single, definitive or correct ranking of universities.

Ranking tables are ultimately based on the subjective decision-making of their compilers. The choice of which metrics to employ and what weight to give them has a significant impact on universities’ scores and ranking positions.

Rankings have always faced criticism, but the decision last year by Utrecht University to withdraw from Times Higher Education’s (THE) World University Rankings (in which all ranked universities volunteer to participate) has heightened the debate. Utrecht’s Rector Magnificus Henk Kummeling said: “It is almost impossible to capture the quality of an entire university with all the different courses and disciplines in one number.” I agree. Rankings will never capture all of the wonderful things universities do. They are just one tool to help better understand a university. But they can be an extremely helpful tool, as I have laid out in a previous blog contribution.

A charter for responsible rankings

To support constructive, open debate about the uses – and misuses – of rankings, I am presenting what may be called a ‘Charter for Responsible Rankings’, which I believe all organisations that rank universities should subscribe to.

First, rankers must acknowledge that there is no single model of excellence in higher education, and that diverse models of distinction must be embraced. Traditional global rankings, which tend to focus on the research mission of globally competitive universities, are not suitable for institutions with different missions and contexts. Having varied metrics and ranking systems, catering to the unique strengths of each university, is vital.

Second, addressing inherent inequalities in the world is critical. Traditional global research rankings tend to favour institutions from the Global North. Efforts to mitigate biases, such as purchasing power parity calculations, normalisation of indicators and ensuring a representative academic survey, play a role in fostering a fairer evaluation. But it is also important that ranking organisations develop metrics and indicators that do not favour institutional wealth or depend on English-language knowledge systems. They must seek to capture excellence in a range of contexts, for example in driving economic growth and promoting social mobility or in making a regional or national impact.

Third, responsible rankers must be clear: while an overall, composite ranking score and position can provide a readily accessible and helpful overview of an institution’s or a nation’s broad strengths, the headline composite score will always be inherently crude. Rankings must be published in a way that allows users to break down the composite scores and to personalise and reorder the rankings based on their preferences, across the range of different performance areas.

Fourth, rankers must be upfront and transparent about the methodology used in any ranking. An explicit presentation of the methodology helps users understand what is, and what is not, being assessed, and why.

Fifth, robust data collection, validation and quality assurance are crucial for meaningful comparisons. Responsible rankers collaborate directly with all ranked universities through the voluntary submission and sign-off of institutional data. To ensure fair and meaningful comparisons, international data definitions and clear submission guidance must be provided, backed by robust quality assurance processes.

Sixth, acknowledging statistical limitations is important. The further down a rankings list one goes, the more tightly packed the scores become, reducing the score differentials between each ranking position. Using banded groups can help avoid the risk of offering false precision or misrepresenting the relative difference between universities which achieve very similar overall scores.

Seventh, openness to external scrutiny and challenge is vital. The use of external advisory boards, audits and interactive engagements with the academic community should be standard practice, as these contribute to maintaining high standards and ensuring accountability.

Finally, fostering continuous innovation is paramount in adapting to evolving priorities in higher education. Being receptive to user feedback, incorporating new statistical techniques and introducing rankings that reflect changing dynamics all demonstrate a commitment to staying relevant.

University rankings are hugely popular and many staff, university leaders and those working in higher education policy find them helpful. Working together, interested parties can make them better and, in doing so, improve the sector even further.
