Rankings: the good, the bad and the ugly

Everybody’s talking about rankings these days. The EUA presented the findings of its second report on Global University Rankings and their Impact at its annual conference in Ghent last week. It became evident that rankings have an increasing impact on national policy making and on the attractiveness of higher education institutions, both as partners and as study destinations.

Being included or rising in the rankings is becoming an explicit or unofficial goal of more and more higher education institutions. Concrete examples of the growing importance of rankings include the ease with which graduates of top-ranked institutions are granted skilled migrant visas in some countries, and the Indian government’s decision to allow its higher education institutions to collaborate only with institutions ranked in the top 500. Given the increasing influence of rankings, it is crucial to understand what they actually tell you.

How reliable are the current rankings?

The new EUA report on rankings largely brings together what was previously known. The indicators used in rankings are far from perfect – the staff/student ratio, for example, tells you nothing about the quality of teaching. It can thus be argued that rankings do not measure what they claim to measure. Most rankings also suffer from a lack of transparency: neither the data nor information on how they were collected is always readily available, making it difficult to repeat the research and verify the findings. Rankings focus heavily on research, largely ignoring the quality of teaching and learning, not to mention the third, social mission of higher education institutions. And within the realm of research, rankings concentrate on the natural sciences and are biased towards institutions whose main language – and hence the language of their publications – is English.

Research, research, research… and no teaching

The risk of giving rankings a key role in decision making is that they divert attention from improving education and teaching and give undue weight to certain fields. They also hand power to unaccountable rankers who do not provide full information on their data collection and methodology. To combat these issues and provide a more nuanced picture of higher education institutions, the European Commission will publish its own ranking of universities, U-Multirank, in early 2014. U-Multirank will cover five dimensions: teaching and learning, research, knowledge transfer, international orientation and regional engagement. It aims to compare institutions with similar profiles rather than have every institution try to fit the large research institution model. It was recognised at the EUA conference that universities are in fact ‘multiversities’ rather than universities. How useful and valid the information supplied by U-Multirank will be remains to be seen, however. Concerns have already been raised about the Commission using indicators simply because they featured in previous rankings, and about its partial reliance on less trustworthy self-reported data.

Demand more transparency and use complementary data

Policy and decision makers should be critical of the story told by rankings and aware that they reveal only part of the picture. It is important to demand greater transparency and dialogue with rankers and the higher education community at large, and to seek out complementary data – such as student satisfaction and employment after graduation – for use in decision making and daily work. In light of the need for greater transparency and dialogue, it was surprising that no ranker or Commission representative was present at the ranking presentation at the EUA conference to debate the issues at hand.