Why Rank?

If you follow our Twitter account, you’ll know that we’re not great fans of the rankings out there. They’re screwed up in a number of ways.

  • Methodology: Rankings are – even when supported by huge teams of staff doing statistically complex things – inherently dodgy. They pretty much never capture what they purport to (e.g. employability, international outlook, teaching quality) because the data consists of dubious proxies that may or may not relate to those things. The terms they use – excellence, quality, impact – are far from unproblematic, yet they are defined and imposed in whatever way makes them measurable. Rankings do tell us something, but not what the rankers say they do! The tables are subjectively weighted but presented as objective truth about the relative quality of universities. Rankings very rarely acknowledge their shortcomings in any depth, preferring to say ‘yes, we’re not perfect, but our complexity and effort show that we’re trying to do an impossible job as well as we can’.
  • Influence: In spite of the widespread acknowledgement that rankings are deeply flawed, they are excessively influential in how universities are run. VC salaries are tied to them (and other performance measures), and departments and universities are (re)oriented around doing well on metrics relative to other universities. What really matters is not doing the thing genuinely well, but doing it better than your rivals. This engenders a heightened competitive culture in which the collegiality and collaboration that make higher education better (and more fun) are sidelined; university leadership obsessively measures, compares – and tries to optimise – everything that moves in higher education.
  • Wankings: Universities, organisations which are perhaps above all dedicated to the production and verification of high-quality knowledge, cherry-pick rankings for any data that paints them in a favourable light. That data then gets broadcast far and wide, even though we know rankings don’t really tell us much about a university’s relative quality nationally, regionally, or globally. You can’t actually be ‘the best’ at anything in a meaningful way, but that’s what the rankings supposedly tell us. Except they don’t. It seems that the competitive desire (or desperation!) is so strong that universities will use poor-quality data to advertise what is (theoretically, at least) high-quality activity.
  • Money: Underneath it all, many rankings are largely a marketing exercise for the ranking organisation’s consultancy services, rather than a genuine service. Rankers avoid mentioning this. The primary purpose of the ranking is to generate attention, and from that attention, cold, hard cash from universities seeking help to improve their ranking position (not the quality of their teaching or research, etc.). It’s not a service to the community; it’s a sales pitch.

With all of this in mind, why on earth did we create a ranking?! In short, rankings are a gimmick, and that’s exactly why we’ve made one: to attract attention. Our ranking is sort of crap, full of holes, but we readily accept that, because rankings are crap. Also, there’s an element of mischief in all of this: it’s good fun to rattle the cage and get people talking. But we were also genuinely intrigued to see what would happen if we did a ranking based on what’s important to us. We also believe that rankings can be useful if they’re used responsibly, to promote social progress itself – not proxies that might be connected to social progress. Metrics and rankings can give us a sense of what’s going on across university sectors, but their shortcomings have to be openly acknowledged. And if university leaders are motivated by competitive comparisons, then maybe taking what we’ve done here and really doing it professionally could promote positive change…