Closing a Governance Blind Spot: Function creep, power asymmetry, and the limits of robustness debates on rankings

In discussions surrounding university rankings—whether in policy, administrative, or academic contexts—there is a persistent tendency to conflate two analytically distinct issues. On the one hand are rankings themselves, in all their variety of scope, design, orientation, and purpose. On the other is the role rankings have been allowed to assume across academic practice, organizations, policymaking, and governance.

To date, considerable attention has been devoted to debating the former, while far less scrutiny has been directed at the broader question of what is being delegated to rankings, and with what systemic consequences. In this presentation, I therefore focus on the latter.

“Why are rankings influential?”

There is no single answer to why rankings have come to wield so much influence. For present purposes, however, I will focus on three reasons in particular.

Convenience

Anyone who has given this even a moment's thought will have concluded that convenience must play a role. Rankings are influential in large part because they are easy to read and use. They are even easy to make, one could argue. They are far less influential because they are true, or even trusted, in any strong sense.

Rankings privilege what is easily counted, not necessarily what is socially valuable. They turn something that is highly complex into an easily communicable signal. For decision makers under time pressure, that signal is tempting.

Circulation

Rankings are also influential because they circulate widely. They travel fast and far. Because they strip away context, rankings strike many as a tool that is applicable across borders.

But once we get to more specific policy or stakeholder needs, the tool turns out to be quite blunt. What circulates easily often does so at the cost of precision and usefulness.

Accountability displacement

Finally, rankings offer a seemingly rational way to displace accountability, or to shield decision makers from it. When rankings inform decisions, explicitly or implicitly, responsibility shifts from political judgment to questions such as “the quality of the data,” “the soundness of the method,” or the problem of “robustness.”

This move is appealing because it depoliticizes difficult choices and presents them as technical matters. Yet we know that no amount of methodological refinement or data improvement in rankings can substitute for decisions about what should count as valuable, high-quality work and how priorities should be set.

“Are rankings sufficiently robust?”

Asking whether rankings are sufficiently robust is, therefore, not a good starting point. Debates about robustness, methodology, and data quality miss the main issues. I see them as a distraction, for two main reasons.

Function creep

Even a perfectly robust ranking would still be problematic if it were used for purposes it was never designed for. When rankings quietly shift from being tools for orientation, information, or attention-grabbing to instruments of governance, what we are witnessing is function creep.

Over time, rankings can become an end in themselves: instead of adapting evaluation tools to the realities of scientific work, policymakers begin to reshape scientific practice to fit what available and popular tools—such as rankings—are able to capture. This is dangerous.

Power asymmetry

There is a great deal of talk about transparency and openness in science, but when rankings are used as evaluation tools, this transparency is one-directional. Universities are asked to be transparent to ranking organizations, yet the ranking organizations are not transparent in any reciprocal sense to universities, to policymakers, or to the public at large. This is an unbalanced relationship.

There are many issues in how rankings are made and exploited that we, the concerned parties (academics, policymakers, taxpayers), are not privy to. For example, the same data that underpin rankings are used to generate a growing portfolio of commercial products, such as benchmarks, dashboards, and consultancy services, that universities and governments are then lured into buying. In many other domains such an arrangement would be unacceptable, or at least scrutinized and regulated by the authorities; for universities it has become normalized.

To be perfectly clear, this is not about accusing data and ranking companies of bad faith. It is about recognizing a structural imbalance in the governance of data and agenda-setting that is of public interest. We must therefore not only question this imbalance, but also look for ways to rectify it.

“How can rankings be improved?”

Discussions of rankings are often oriented toward the question of how they might be improved. This, too, is ultimately unproductive, as it distracts from what is really at stake.

A more productive question for policymakers engaged in science evaluation would be, for example: How can research assessment systems make use of comparison and data without outsourcing judgment and governance to opaque infrastructures?

Here I propose three lines of thinking.

Data and infrastructure transparency as a policy principle

Any organization collecting data for the assessment or comparison of public research organizations should be required to provide reciprocal transparency, not only with regard to the data themselves but also to the infrastructures through which data are collected, transformed, and reused. This includes auditable data-processing rules, openly documented infrastructures, reproducibility of results, and clear disclosure of secondary uses of data. This must be a policy principle.

Collective action

Individual universities, and even individual governments, have limited leverage. Collectively, through national associations and international alliances, for example, they have significant bargaining power. Data and infrastructure governance is an area where collective standards are both feasible and overdue.

History offers many examples of universities and their governments joining forces for a good cause. Platforms such as CoARA and DORA are cases in point, and they are of course not the only ones. Yet more can be done to renegotiate the relationship and level the playing field, and to get more universities truly on board with reshaping how research assessment infrastructures are governed.

Support for public and cooperative alternatives

Last but not least, if policymakers want to reduce dependence on the companies involved in rankings, they must provide regulatory and financial support to non-commercial, interoperable data infrastructures that allow plural, contextual interpretation and ensure full transparency of data processing, instead of placing blind trust in opaque analytics and their vendors.

Towards closing a governance blind spot

The problem we face is not a lack of better or more “robust” rankings, but the way reliance on them misdirects discussions of how science and its institutions are valued, compared, and evaluated.

When rankings become structurally embedded in science policymaking or institutional research strategies, this is not a matter of imperfect tools. It is a failure of governance. And as long as judgments about what counts as valuable research are outsourced to opaque, convenience-driven instruments with well-documented unintended consequences, no amount of methodological refinement will address the underlying problem.

The way forward, therefore, lies in reasserting collective, transparent, and explicitly normative responsibility over how comparison and metrics are used—and, crucially, where their limits must be drawn.

This text is based on the presentation given at the OECD Global Science Forum workshop “Rethinking research assessments and incentives in response to new expectations and demands,” held on 27–28 January 2026. The presentation was part of a session examining the influence of institutional rankings and related benchmarking practices on research policy and was structured around a selection of questions earlier provided by the organizers.

Link to the text on Zenodo