It’s that time of year when specialised global university rankings are released. Last week Times Higher Education (THE) published their reputation rankings, and Quacquarelli Symonds (QS) have just published their world rankings by subject (academic discipline). U-Multirank released their ‘readymade’ rankings last month and will soon release their overall 2015 rankings. A university may place differently in each ranking, but why is this? It comes down to three main things: the ‘who’, the ‘what’ and the ‘why’ of global university rankings.
Simply put, the ‘who’ is the set of universities participating in a particular ranking. It is important to recognise that not all universities participate in every ranking; in many cases, a university participates in a ranking only when several other universities in its country do so as well. So when viewing a particular ranking, there is a need to identify who actually participated and provided information. A good example is U-Multirank and the League of European Research Universities (LERU), which represents many of Europe’s most prestigious research-intensive universities and has declined to take part. While LERU members do appear in the rankings, they were ranked using data that U-Multirank obtained from publicly available sources such as publications, citations, and patents. Any specialised ranking, like U-Multirank’s ‘readymade’ international orientation ranking, therefore only ranks universities that supplied information directly (237 of the 862 universities in the U-Multirank system).
The ‘what’ refers to the criteria a particular ranking measures. While THE, QS, and U-Multirank all rank universities on their international orientation, the criteria they use to actually measure and rank universities differ. Table 1 below shows how the three rankings have defined ‘what’ the international orientation of a university is, either narrowly (QS, with two measures) or broadly (U-Multirank, with multiple measures). They subsequently rank based on these differing perspectives.
Table 1: Measures used in international orientation ranking
Perhaps the most complicated, the ‘why’ is essentially how each ranking goes about measuring universities. To begin with, each ranking ‘ranks’ differently. Both QS and THE assign numeric values and rank universities accordingly from highest to lowest. U-Multirank, alternatively, assigns a letter grade to each individual measure (A–E, with A the highest and E the lowest) based on where a university sits relative to the mean (average) within that measure. It then groups universities into these letter grades according to how far they are above, near, or below the mean.
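The grouping mechanism described above can be sketched in a few lines of Python. Note this is only an illustration of the idea, not U-Multirank’s actual method: the grade thresholds (measured in standard deviations from the mean) and the example scores are hypothetical, chosen to show how scores on a single measure map to letter grades.

```python
# Illustrative sketch of letter-grade grouping on one measure:
# each score is graded A-E by its distance from the mean.
# Thresholds are hypothetical, not U-Multirank's published cut-offs.
from statistics import mean, stdev

def letter_grades(scores):
    """Map each university's score on one measure to a grade A-E."""
    mu = mean(scores.values())
    sigma = stdev(scores.values())
    grades = {}
    for uni, score in scores.items():
        z = (score - mu) / sigma  # distance from the mean, in std devs
        if z > 1.0:
            grades[uni] = "A"   # well above the mean
        elif z > 0.25:
            grades[uni] = "B"
        elif z > -0.25:
            grades[uni] = "C"   # near the mean
        elif z > -1.0:
            grades[uni] = "D"
        else:
            grades[uni] = "E"   # well below the mean
    return grades

scores = {"Uni 1": 90, "Uni 2": 80, "Uni 3": 70, "Uni 4": 60, "Uni 5": 50}
print(letter_grades(scores))
```

The point of the sketch is the contrast with QS and THE: no overall ordering is produced, only a band per measure, so two universities in band ‘B’ are presented as equivalent on that measure.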
Each ranking also defines its measures differently, even when multiple rankings use the same measure. For example, QS, THE and U-Multirank all use the proportion of international academic staff in their rankings. QS defines academic staff as individuals employed for longer than three months who contribute to teaching or research (or both) at a university, whereas THE divides academic staff into two distinct categories, teaching-focused and research-focused, with no restriction on time employed. U-Multirank asks for counts of academic staff but excludes research assistants. Furthermore, it gives each university the opportunity to declare whether its PhD students are counted as academic staff (essentially leaving it up to each university to decide how to report). Table 2 highlights the differences between the three rankings.
Table 2: Definitions of academic staff
Lastly, the rankings weight each measure differently, even when the measure is the same. While QS assigns 5% of its overall score to each of the proportion of international staff and the proportion of international students (10% of the total score combined), THE assigns exactly half as much to the same two measures (2.5% each, or 5% of the total score). U-Multirank, on the other hand, does not weight individual measures at all: it ranks within each individual measure and does not create an overall score (ranking).
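The weighting arithmetic above can be made concrete with a small sketch. The weights come from the text (QS: 5% each for international staff and students; THE: 2.5% each); the measure scores themselves are made-up numbers on a 0–100 scale, purely for illustration.

```python
# Sketch of how fixed weights feed a composite score.
# Weights are from the text; the example scores are invented.
QS_WEIGHTS = {"international_staff": 0.05, "international_students": 0.05}
THE_WEIGHTS = {"international_staff": 0.025, "international_students": 0.025}

def international_contribution(scores, weights):
    """Points the two international measures add to an overall score out of 100."""
    return sum(scores[m] * weights[m] for m in weights)

scores = {"international_staff": 80.0, "international_students": 60.0}
print(international_contribution(scores, QS_WEIGHTS))   # QS contribution
print(international_contribution(scores, THE_WEIGHTS))  # THE: exactly half as much
```

With identical underlying scores, the same two measures move a university twice as far in QS as in THE, while in U-Multirank they would never be combined at all.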
While global university rankings are still relatively young (the oldest, the Academic Ranking of World Universities (ARWU), started in 2003), they are gaining prominence and are likely here to stay. Views of rankings (in general and of specific ones) vary with respect to how to deal with them and their general usefulness. But what is clear is that rankings have quickly become a central element in discussions of higher education and have intensified cross-country comparisons. The ‘who, what and why’ of global university rankings offers a way to understand how they work and how they differ from one another. Many universities are developing strategic plans to be ranked as a top university and forming strategic partnerships with similarly profiled (teaching and research) institutions. Knowing how global rankings work and differ from one another enables universities to understand their own performance and place in higher education globally. While using global university rankings to set educational policy should not be advocated, they can provide a more thorough assessment, allowing universities to better direct resources to meet strategic objectives.
Author: Charlie Mathies, University of Jyväskylä, Finland