The Companies That Decide Which Universities Are 'Best'
QS, Times Higher Education, and Shanghai Ranking are not academic institutions. They are for-profit companies that sell consulting services to the same universities they rank. Here's how the system actually works — and what it means for families making six-figure education decisions.
The Short Answer
Every year, millions of families use university rankings to make one of the most consequential financial decisions of their lives. The three most cited global rankings — QS World University Rankings, Times Higher Education (THE), and the Academic Ranking of World Universities (ARWU, known as the Shanghai Ranking) — are treated as authoritative, neutral assessments of educational quality. They are not. They are products of for-profit companies that generate revenue from the same institutions they evaluate. Two of the three sell consulting services directly to ranked universities. One has been the subject of a peer-reviewed study demonstrating that paying clients rise faster in its rankings. And the entire system relies on methodologies so opaque, so circular, and so disconnected from what actually happens in a classroom that a growing number of universities — including Harvard, Yale, and Columbia — have either boycotted them or been caught fabricating data to game them.
This article explains who runs the rankings, how they make money, what their methodologies actually measure, and why the system is structurally incapable of telling you whether a university will be good for your child.
Part I: Who Runs the Rankings
QS World University Rankings
Owner: Quacquarelli Symonds Ltd, a private, for-profit company headquartered in London. Founded in 1990 by Nunzio Quacquarelli while he was an MBA student at the Wharton School of the University of Pennsylvania. QS has offices across Europe, Asia, and the Americas and employs over 800 staff.
Revenue model: QS generates revenue from multiple streams, all involving the universities it ranks. These include QS Stars — a paid rating system where universities pay to be evaluated on additional criteria and receive a star rating (one to five-plus stars). QS also sells consulting services, data analytics, advertising on its TopUniversities.com platform, and runs international student recruitment fairs. The company's own website describes its mission as connecting "students, institutions, governments and industry partners."
The conflict: QS simultaneously ranks universities and sells them services to improve their performance. In April 2021, Igor Chirikov, a senior researcher at UC Berkeley's Center for Studies in Higher Education, released a study matching the positions of 28 Russian universities in the QS World Rankings between 2016 and 2021 against the contracts those universities held with QS. His finding: universities with frequent QS-related contracts experienced significantly greater upward mobility in both overall rankings and faculty-student ratio scores. The study subsequently passed peer review and was published in Higher Education, one of the field's leading journals. QS denied that consulting relationships influence its rankings.
Times Higher Education (THE)
Owner: Times Higher Education is owned by Inflexion Private Equity, a London-based private equity firm that acquired THE in 2019. THE has since been through four ownership changes in fifteen years. Under Inflexion, THE acquired Inside Higher Ed (2022) and Poets & Quants (2023), building a portfolio of higher education media properties. In May 2026, THE appointed a new CEO.
Revenue model: THE generates revenue from data licensing, advertising, events, and — critically — THE Consultancy, which provides "strategic, data-driven guidance to universities, governments and organisations globally." THE also acquired The Knowledge Partnership in 2020, a consultancy specialising in higher education strategy. THE works with over 800 institutional clients globally across its data, consultancy, and hiring services.
The conflict: THE ranks universities and sells them consulting on how to improve their ranking performance. The company's own methodology page states that it collects data directly from institutions, who "provide and sign off their institutional data." THE is a private equity portfolio company — its primary obligation is to generate returns for Inflexion's investors, not to produce neutral academic assessments.
Academic Ranking of World Universities (ARWU / Shanghai Ranking)
Owner: ShanghaiRanking Consultancy, a private company based in Shanghai. ARWU was originally created in 2003 by the Center for World-Class Universities at Shanghai Jiao Tong University. Since 2009, it has been published and copyrighted by ShanghaiRanking Consultancy, which describes itself as "a fully independent organization on higher education intelligence and not legally subordinated to any universities or government agencies."
Revenue model: ShanghaiRanking Consultancy sells data products including the ARWU Tracker and GRAS Tracker — subscription tools for universities to monitor their ranking performance.
The distinction: ARWU is the least commercially entangled of the three. It does not run a consulting business comparable to QS or THE, and its methodology relies entirely on publicly verifiable data (Nobel Prizes, publications, citations). However, it is still a private company, not an academic institution, and its methodology has its own significant biases (see Part II).
U.S. News & World Report (Domestic, Not Global — But Instructive)
While U.S. News primarily ranks American institutions, its story is the most revealing case study of what happens when rankings become a business. U.S. News is a for-profit media company that transitioned from a print newsweekly to a rankings-and-data business. Its college rankings, first published in 1983, are described by Harvard Business School as occupying "the heart of a vibrant ecosystem" — a multi-sided platform that monetises the anxiety of families and the vanity of institutions simultaneously.
Part II: What the Methodologies Actually Measure
QS: Half the Score Is Opinion
The QS World University Rankings use nine indicators. The weightings, as of the 2026 edition:
| Indicator | Weight | What It Actually Measures |
|---|---|---|
| Academic Reputation | 30% | Survey of academics asking which universities they consider best |
| Employer Reputation | 15% | Survey of employers asking which universities produce the best graduates |
| Faculty-Student Ratio | 10% | Number of academic staff per student |
| Citations per Faculty | 20% | Research citation impact |
| International Faculty Ratio | 5% | Percentage of non-domestic academic staff |
| International Student Ratio | 5% | Percentage of non-domestic students |
| International Research Network | 5% | Breadth of international research partnerships |
| Employment Outcomes | 5% | Graduate employability metrics |
| Sustainability | 5% | UN SDG-related metrics |
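Mechanically, the overall score is just a weighted sum of the nine indicator scores. The sketch below illustrates the arithmetic with invented indicator values (real QS scores are normalised against all ranked institutions, so these numbers are purely hypothetical):

```python
# QS indicator weights as listed in the table above (they sum to 1.0).
weights = {
    "academic_reputation": 0.30,
    "employer_reputation": 0.15,
    "faculty_student_ratio": 0.10,
    "citations_per_faculty": 0.20,
    "intl_faculty_ratio": 0.05,
    "intl_student_ratio": 0.05,
    "intl_research_network": 0.05,
    "employment_outcomes": 0.05,
    "sustainability": 0.05,
}

# Hypothetical indicator scores (0-100) for an illustrative university.
scores = {
    "academic_reputation": 90,
    "employer_reputation": 85,
    "faculty_student_ratio": 60,
    "citations_per_faculty": 70,
    "intl_faculty_ratio": 50,
    "intl_student_ratio": 55,
    "intl_research_network": 65,
    "employment_outcomes": 75,
    "sustainability": 40,
}

# Overall score: weighted sum across all nine indicators.
overall = sum(weights[k] * scores[k] for k in weights)

# Share of the final score driven purely by the two opinion surveys.
survey_share = weights["academic_reputation"] + weights["employer_reputation"]

print(f"overall score: {overall:.1f}")
print(f"survey weight: {survey_share:.0%}")
```

Note that nothing in the computation touches what happens in a classroom; the two survey indicators alone control 45% of the result.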
The problem: 45% of the total score comes from reputation surveys — people's opinions about which universities are good. These surveys are self-reinforcing: academics and employers name universities they have heard of, which are the universities that already rank highly, which makes them more visible, which means more people name them next year. This is a textbook feedback loop. A university that is genuinely excellent but not widely known will score poorly on reputation regardless of its actual teaching or research quality.
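The circularity can be made concrete with a toy model (university names and starting numbers are invented): if respondents name universities in proportion to current reputation, the survey simply reproduces last year's scores, and actual quality never enters the computation.

```python
def run_survey(reputation):
    """One survey round: respondents name universities in proportion to
    current reputation, so vote share mirrors existing name recognition."""
    total = sum(reputation.values())
    return {name: 100 * rep / total for name, rep in reputation.items()}

# Two hypothetical universities: one famous, one genuinely excellent but
# little known. Quality never appears in the model, because the survey
# has no way to observe it.
reputation = {"Famous U": 80.0, "Excellent-but-Unknown U": 20.0}

for year in range(10):
    reputation = run_survey(reputation)

print(reputation)  # unchanged after a decade: the survey echoes itself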
THE: 18 Indicators, Still Reputation-Heavy
The THE World University Rankings use 18 indicators across five pillars. The 2026 methodology:
| Pillar | Weight | Key Components |
|---|---|---|
| Teaching | 29.5% | Teaching reputation (15%), staff-student ratio, doctorate ratios, institutional income |
| Research Environment | 29% | Research reputation (18%), research income, research productivity |
| Research Quality | 30% | Citation impact (15%), research strength, excellence, influence |
| International Outlook | 7.5% | International staff, students, collaboration |
| Industry | 4% | Industry income, patents |
The problem: 33% of the total score comes from reputation surveys (15% teaching reputation + 18% research reputation). THE runs its own Academic Reputation Survey annually, collecting over 108,000 responses. But the same circularity applies: respondents name institutions they already perceive as prestigious. THE's own methodology page acknowledges that it weights responses "to fully reflect the global distribution of scholars" — but weighting a biased sample does not remove the bias.
Additionally, the research-heavy weighting (59% combined across Research Environment and Research Quality) means THE effectively ranks research output, not educational quality. A university could have mediocre teaching but excellent research labs and still score well. For a family choosing where their child will spend three to four years learning, this is a significant mismatch.
ARWU: Nobel Prizes and Nothing Else
The Shanghai Ranking uses six indicators, all research-focused. The 2025 methodology:
| Indicator | Weight | What It Measures |
|---|---|---|
| Alumni winning Nobel Prizes / Fields Medals | 10% | Alumni who won major prizes (weighted by recency) |
| Staff winning Nobel Prizes / Fields Medals | 20% | Current/former staff who won major prizes |
| Highly Cited Researchers | 20% | Number of Clarivate Highly Cited Researchers |
| Papers in Nature and Science | 20% | Publications in two specific journals |
| Papers indexed in SCIE and SSCI | 20% | Total Web of Science publications |
| Per Capita Performance | 10% | Above indicators divided by staff count |
The problem: ARWU measures research prestige, not education. It contains zero indicators related to teaching quality, student experience, graduate outcomes, or learning. A Nobel Prize won by a professor in 1985 tells you nothing about whether your child will receive good supervision in 2027. The methodology also structurally favours large, old, English-speaking research universities: institutions focused on the humanities, social sciences, or applied fields are systematically disadvantaged because Nature and Science are natural-science journals.
Part III: The Scandals
Columbia University: From #2 to Unranked
In February 2022, Michael Thaddeus, a professor of mathematics at Columbia University, published a detailed investigation questioning the data Columbia had submitted to U.S. News & World Report. Columbia had risen to #2 nationally — tied with Harvard and MIT. Thaddeus demonstrated that the university's reported class sizes, faculty qualifications, and spending figures were inconsistent with publicly available data.
By September 2022, Columbia acknowledged that it had "relied on outdated and/or incorrect methodologies" in its data submissions. U.S. News pulled Columbia from its rankings entirely. When Columbia was re-ranked with corrected data, it fell from #2 to #18.
In 2025, Columbia agreed to pay $9 million to settle a class-action lawsuit brought by students who claimed they had been overcharged for their education based on artificially inflated rankings. As of early 2026, Columbia still has not formally admitted to misreporting data.
Thaddeus, in a 2022 Guardian interview, summarised the system bluntly: the rankings "are worthless."
Temple University: Criminal Fraud
The Temple University case went further than embarrassment — it ended in prison. Moshe Porat, dean of Temple's Fox School of Business from 1996 to 2018, was convicted of wire fraud in December 2021 for systematically submitting false data to U.S. News to boost the Fox School's online MBA ranking. The falsified data included GMAT scores, undergraduate GPAs, student debt levels, and graduation rates.
Porat was sentenced to 14 months in prison and fined $250,000. Temple paid a $700,000 fine to the U.S. Department of Education and a separate $5.4 million class-action settlement to students. An independent investigation by the law firm Jones Day found that Fox had misreported data for at least six programmes dating back to 2014.
The Harvard-Yale Boycott
In late 2022 and early 2023, a wave of elite institutions withdrew from U.S. News rankings. Yale Law School — ranked #1 for decades — announced in November 2022 that it would stop cooperating, stating the methodology "not only fails to advance the legal profession, but stands squarely in the way of progress." Harvard Law followed. Within weeks, more than 40 law schools had joined the boycott.
In January 2023, Harvard Medical School withdrew, with its dean writing that the rankings could not "meaningfully reflect" the school's standards. Thirteen medical schools followed within two weeks, including Stanford, Columbia, and the University of Pennsylvania.
The boycott forced U.S. News to overhaul its methodology, reducing the weight of reputational surveys. But the fundamental structure — a for-profit company ranking institutions using self-reported, unaudited data — remained unchanged.
Citation Cartels and Fake Affiliations
The gaming extends beyond self-reported data. A 2024 investigation by Science magazine documented "citation cartels" — networks of researchers who systematically cite each other's work to inflate citation metrics. Institutions in China, Saudi Arabia, and Egypt displaced established mathematics departments in Clarivate's Highly Cited Researchers list despite having little mathematical tradition.
A 2024 investigation by El País revealed that Saudi Arabian institutions had paid highly cited foreign scientists to falsely list Saudi universities as their primary affiliation in the Clarivate database — a direct manipulation of the ARWU methodology, which counts Highly Cited Researchers at 20% of its total score. Following the exposé, the number of Highly Cited Researchers affiliated with one Saudi institution dropped from 109 to 76.
In May 2025, a study published in Scientometrics introduced the Research Integrity Risk Index, identifying 21 institutions worldwide whose rapid ranking ascent coincided with bibliometric anomalies: steep declines in first-author rates, surges in STEM output from non-STEM institutions, and elevated rates of publications in subsequently delisted journals. The institutions were concentrated in India, Lebanon, Saudi Arabia, and the UAE.
Part IV: The Structural Problems
The Reputation Feedback Loop
Malcolm Gladwell, in his influential 2011 New Yorker essay "The Order of Things", argued that university rankings are fundamentally incoherent. They attempt to be both comprehensive (using many variables) and heterogeneous (comparing very different institutions) — two goals that are mathematically incompatible. His conclusion: the algorithm is a reflection of wealth and privilege, favouring rich over poor and prestige over substance.
The reputation surveys that drive QS and THE rankings are the clearest example. When you ask 100,000 academics "which universities are excellent?", they name the universities they already believe are excellent — which are the ones that ranked highly last year. Professor Ellen Hazelkorn, one of the world's leading researchers on rankings (Technological University Dublin, author of Rankings and the Reshaping of Higher Education), has documented how this creates a self-perpetuating hierarchy: "Rankings have generated a perception amongst the public, policymakers and stakeholders that only those within the top 20, 50 or 100 are worthy of being called excellent" — despite the fact that being in the top 500 globally already places an institution in the top 3% of the world's 18,000+ universities.
Goodhart's Law in Action
Goodhart's Law states: "When a measure becomes a target, it ceases to be a good measure." University rankings are a textbook case.
Once institutions understood what the rankings measured, they began optimising for the metrics rather than for educational quality:
- Rejection rates: Some universities encourage more applications specifically to reject more applicants, making themselves appear more selective.
- Class sizes: Columbia's scandal centred on misreporting the percentage of classes with fewer than 20 students — because small class size is a ranking indicator.
- Spending per student: Rankings reward high spending, which incentivises universities to spend more (on facilities, administration, amenities) rather than to deliver education efficiently. This directly drives tuition inflation.
- Citation gaming: The citation cartel phenomenon exists because citations are a ranking input. Researchers and institutions that would never have engaged in reciprocal citation schemes now do so because their institution's ranking — and therefore its funding, its ability to recruit, and its prestige — depends on it.
What Rankings Cannot Measure
No major ranking system includes indicators for:
- Teaching quality as experienced by students (THE uses a reputation proxy; QS uses faculty-student ratio)
- Student learning outcomes (what students actually know or can do after graduating)
- Graduate life satisfaction or career fulfilment
- Mental health support quality
- Accessibility for students with disabilities or from disadvantaged backgrounds
- Curriculum relevance to the student's actual goals
- Quality of supervision for individual students
- Campus culture and community
These are the things that determine whether a university will be good for your child. None of them appear in any major ranking.
Part V: What to Do Instead
Rankings are not useless — they confirm that a university has research infrastructure, international visibility, and a certain baseline of resources. But they cannot tell you whether your child will thrive there. Here is what can.
The Five Questions That Matter More Than Rank
1. "What is the teaching format for first-year students in my child's intended subject?" Large lecture halls with 300 students and no tutorials are a fundamentally different experience from small-group seminars. Rankings do not distinguish between them.
2. "What percentage of graduates in this programme are employed in a field related to their degree within 12 months?" This is more useful than any reputation survey. Ask for programme-specific data, not university-wide averages.
3. "Can I speak to a current student or recent graduate?" If a university cannot or will not connect you with someone who studied what your child wants to study, that is information.
4. "What support exists when things go wrong?" Academic difficulties, mental health crises, financial hardship, homesickness — these are not edge cases. They are the normal experience of university students. Ask specifically what happens when a student is struggling.
5. "What does a typical week look like for a second-year student in this programme?" Not the brochure version. The actual schedule: how many contact hours, how much independent study, what kind of assessment, how much feedback.
A Framework for Comparing Universities Without Rankings
| Dimension | What to Look For | How to Find It |
|---|---|---|
| Teaching intensity | Contact hours, tutorial ratios, assessment frequency | Programme handbook (ask admissions for it) |
| Graduate outcomes | Employment rates by programme, not university average | Government data (UK: Discover Uni; Australia: QILT; Singapore: GES) |
| Student satisfaction | Specific programme ratings, not overall score | National Student Survey (UK), SERU (US/international) |
| Research relevance | Whether research feeds into undergraduate teaching | Ask: "How does faculty research connect to what students learn?" |
| Financial transparency | Total cost including living, hidden fees, scholarship reality | Ask for the median scholarship amount, not the maximum |
| Fit | Culture, city, language environment, support systems | Visit. Talk to students. Read student forums, not brochures. |
When Rankings Are Useful
Rankings have legitimate uses — just not the ones most families assume:
- Filtering from 18,000 to 50: If you know nothing about universities in a country, rankings can help you identify which institutions have research infrastructure and international recognition. This is a starting point, not a conclusion.
- Employer signalling in specific industries: In investment banking, management consulting, and some law firms, university prestige (which correlates with rankings) genuinely affects hiring. If your child wants to work at Goldman Sachs, the brand matters. If they want to be a marine biologist, it does not.
- Research quality for PhD applicants: If your child is pursuing a research career, ARWU's methodology — which measures research output directly — is more relevant than QS or THE. But even then, the specific research group matters more than the university's overall rank.
The Bottom Line
The global university ranking industry is a business. QS is a private company founded by an MBA student. THE is owned by a private equity firm. ARWU is published by a consultancy. All three generate revenue from the institutions they evaluate. Their methodologies measure research output and peer opinion — not teaching quality, student experience, or whether your child will learn to think.
Columbia University paid $9 million for submitting false data. Temple University's dean went to prison. Harvard and Yale boycotted the system. Saudi institutions paid scientists to fake their affiliations. And yet, every year, millions of families treat these numbers as objective truth.
The ranking tells you how a university performs on a set of indicators chosen by a for-profit company. It does not tell you whether your child will be well taught, well supported, or well prepared for the life they want to build. For that, you need different questions — and the willingness to ask them directly.
If you want help evaluating universities based on what actually matters for your child — not what matters for a ranking algorithm — book a free 30-minute consultation.
Sources
- Quacquarelli Symonds. "About Us." QS.com.
- Chirikov, I. (2022). "Does Conflict of Interest Distort Global University Rankings?" Higher Education, 85, 1107–1123.
- UC Berkeley Center for Studies in Higher Education (2021). "Berkeley Study: Major University Rankings May Be Biased."
- Times Higher Education (2026). "World University Rankings 2026: Methodology."
- Times Higher Education (2025). "Times Higher Education Names New Chief Executive Officer."
- ShanghaiRanking Consultancy (2025). "ARWU Methodology 2025."
- Thaddeus, M. (2022). "An Investigation of the Facts Behind Columbia's U.S. News Ranking." Columbia University.
- CNN (2022). "Columbia University Acknowledges Submitting Inaccurate Data."
- Forbes (2025). "Columbia University Offers to Settle Rankings Lawsuit for $9 Million."
- The Guardian (2022). "Columbia Whistleblower on Exposing College Rankings: 'They Are Worthless'."
- U.S. Department of Justice (2022). "Former Temple Business School Dean Sentenced to Over One Year in Prison."
- FBI (2021). "Former Temple Business School Dean Convicted of Fraud."
- The Harvard Crimson (2023). "Rejecting the Rankings: Why Harvard and Yale Led a Widespread Boycott."
- Axios (2023). "Harvard Medical School Drops Out of U.S. News Rankings."
- Science (2024). "Citation Cartels Help Some Mathematicians — and Their Universities — Climb the Rankings."
- El País (2024). "Dozens of the World's Most Cited Scientists Stop Falsely Claiming to Work in Saudi Arabia."
- Hazelkorn, E. (2019). "The Dubious Practice of University Rankings." Elephant in the Lab.
- Gladwell, M. (2011). "The Order of Things." The New Yorker.
- Harvard Business School (2015). "U.S. News & World Report: Driving Value to a Multisided Market."
- Scientometrics (2025). "Gaming the Metrics: Bibliometric Anomalies in Global University Rankings."