University rankings create a false perception of universities

The Quacquarelli Symonds (QS) and Times Higher Education ranking systems have become the de facto standards for university rankings.

These rankings, however, are not always objective, and the methods behind them have long been opaque. Take Quacquarelli Symonds, which uses metrics such as academic reputation (40 percent), employer reputation (10 percent), faculty/student ratio (20 percent), citations per faculty (20 percent), and international faculty ratio and international student ratio (5 percent each).
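The weighting scheme above amounts to a simple weighted sum. The sketch below illustrates how such a composite score could be computed from those published weights; the indicator values are made-up example numbers for a hypothetical university, not real QS data.

```python
# Illustrative sketch of a QS-style composite score: a weighted sum of
# indicator scores. Weights are the percentages quoted in the text;
# the example indicator values are invented for illustration.

WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty_ratio": 0.05,
    "international_student_ratio": 0.05,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of indicator scores, each on a 0-100 scale."""
    return sum(WEIGHTS[metric] * scores[metric] for metric in WEIGHTS)

# Hypothetical university: strong reputation, few international staff.
example = {
    "academic_reputation": 90.0,
    "employer_reputation": 80.0,
    "faculty_student_ratio": 70.0,
    "citations_per_faculty": 60.0,
    "international_faculty_ratio": 30.0,
    "international_student_ratio": 40.0,
}

print(round(composite_score(example), 1))  # prints 73.5
```

Because reputation alone carries half the total weight (40 + 10 percent), a university's score is dominated by survey opinion before any measurable teaching or research input is counted.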

Metrics used for university rankings

Some of these criteria are subjective. Take, for instance, an institution's reputation, which carries the highest weight of any metric. A survey asking employers, graduates and the general public which institutions are the most reputable will yield almost identical results year after year.

This is because, after a strong showing in the published rankings, universities and the general public treat the results as proof of which university is better. Respondents in the next survey will therefore give more weight to institutions that were already highly ranked, regardless of their academic prowess or how much they have improved their infrastructure and teaching over the year.

Such a survey also attempts to quantify people's attitudes and opinions of institutions. Although research groups such as Quacquarelli Symonds pride themselves on surveying over 100,000 experts in these fields, one cannot escape the bad science of collecting opinions, aggregating them, and then using them to rank institutions from best to worst.

Why universities compete

However, even with these shortcomings, of which many universities are aware, rankings remain popular. They reinforce the belief that highly ranked universities have better infrastructure, better teaching methodologies and more qualified staff, which should translate into a better academic experience for students.

Although this may be true for some institutions, it is not always the case, and once the list moves beyond the top 100 universities in the world, the link between rank and quality of education becomes blurred.

For instance, an institution with superb infrastructure, qualified teaching staff and high-achieving students can still be ranked poorly by these systems if it lacks medicine and engineering schools. No matter how many resources it dedicates to research, or how much effort it puts in, it will always fare badly in the rankings.

The idea that universities earn extra points for enrolling international students is also worth questioning. The presence of such a demographic does not necessarily raise the standard of learning in these institutions, yet these are the metrics used in the rankings.

Universities invest a lot of money each year trying to beat the ranking system through public relations and other methods that do not contribute to the quality of education. For them, a higher ranking means attracting more students, which translates into more resources and profit. From a business perspective this is a sound strategy, but it contributes nothing to the quality of education at these institutions.

 

Featured image by Pixabay