Our Methodology
How we evaluate universities — and why we do it differently.
Why We Built This
We published an investigation into commercial rankings and found that QS, THE, and Shanghai measure what matters to universities (research output, reputation surveys, marketing spend) — not what matters to families (teaching quality, graduate outcomes, financial stability, visa pathways).
This guide is our attempt at a better alternative. It's imperfect — we acknowledge that openly — but it's transparent, independent, and focused on the factors that research shows actually predict outcomes.
Our Principles
- No payments from universities. We don't sell consulting, advertising, or data services to the institutions we evaluate.
- Public data only. Every data point comes from government statistics, published surveys, or open databases. Anyone can verify our work.
- Tier profiles, not ordinal ranks. We use S/A/B/C/D performance bands per dimension. A university ranked "47th" vs "53rd" is meaningless noise — we refuse to create that false precision.
- Missing data is never penalized. If we don't have data for a dimension, we mark it "unavailable" (⚪). We never assume absence of data means poor performance.
- Transparent limitations. We state what we can and cannot measure. No ranking system is complete.
The Six Dimensions
1. Network Strength
Does each graduate strengthen the network for all?
We assess alumni network density in key industries and geographies. A strong network creates compounding value — each successful alumnus makes the degree more valuable for future graduates.
Data sources: LinkedIn alumni data, employer hiring pipeline reports, alumni giving rates, industry placement statistics.
2. Employability
Where do graduates actually end up — and can they stay and work?
Not just "employed" but employed in what? We look at employment in skilled roles, earnings data, placement in growth sectors, post-study work visa pathways, and employer pipelines.
Data sources: HESA Graduate Outcomes Survey (UK), QILT GOS (Australia), JAUGES (Singapore), College Scorecard (US), LEO longitudinal earnings data (UK), government visa policy.
3. Teaching Quality
Are the best minds actually teaching — or just publishing?
Student-faculty ratios, student satisfaction with teaching specifically, and whether the institution rewards teaching excellence.
Data sources: National Student Survey/NSS (UK), QILT Student Experience Survey (Australia), student-faculty ratio data, teaching award records.
4. Curriculum Relevance
Does this institution prepare students for the real world — or just for exams?
We assess six indicators of curriculum quality:
- Problem-centered vs subject-centered: Does the curriculum teach through real-world problems (why and how) or purely through academic subjects (why only)?
- Assessment style: Is assessment project-based and portfolio-driven, or exam-heavy? Students with project experience can demonstrate workplace skills on their CV.
- Industry partnerships: Does the institution have professors of practice, corporate advisory boards, or co-designed programs with employers?
- Placement integration: Is work experience credit-bearing and embedded in the degree — not just an optional extra?
- Soft-skill embedding: Are communication, teamwork, and reflective practice woven into the curriculum, or treated as separate "employability" add-ons?
- Labour market mapping: Does the institution actively map curriculum content to in-demand skills (e.g., Python for Finance, AI integration across disciplines)?
Data sources: Program catalogs, curriculum structure analysis, industry partnership announcements, placement/internship integration data, course delivery methods.
5. Institutional Health
Can this institution survive a single external shock?
Financial sustainability, revenue diversification, enrollment trends, and regulatory standing. A prestigious university running a deficit may not deliver the same experience in year 3 of your child's degree.
Data sources: OfS financial reports (UK), TEQSA (Australia), published financial statements, enrollment trend data, regulatory filings.
6. Student Experience
Will your child belong here? Will they be supported?
International student support, campus culture, completion rates for international students, and overall student satisfaction.
Data sources: International student retention data, student satisfaction surveys, completion rate data, student reviews.
The Tier System
| Tier | Meaning | Approximate Distribution |
|---|---|---|
| S | Exceptional — among the very best globally in this dimension | ~5% of universities |
| A | Excellent — significantly above average | ~20% |
| B | Strong — above average, solid performance | ~35% |
| C | Good — average performance | ~25% |
| D | Developing — performance below national benchmarks in this dimension | ~15% |
Tiers are assigned per dimension, not as an overall composite. A university can be S-tier for Employability but C-tier for Curriculum Relevance. This reflects reality — universities excel in different ways.
Data Quality Flags
Every dimension on every university profile includes a data quality indicator:
- 🟢 Full data — comprehensive, recent, from authoritative sources
- 🟡 Partial data — some data available but incomplete or dated
- ⚪ Unavailable — insufficient public data to assess
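To make this structure concrete, here is a minimal sketch in Python of how a single profile is organised: one tier and one data quality flag per dimension, and no composite score anywhere. The class and field names are our own illustrative choices, not a published implementation, and a dimension with no data carries no tier at all.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Tier(Enum):
    S = "S"
    A = "A"
    B = "B"
    C = "C"
    D = "D"

class DataQuality(Enum):
    FULL = "🟢 full"
    PARTIAL = "🟡 partial"
    UNAVAILABLE = "⚪ unavailable"

@dataclass
class DimensionAssessment:
    quality: DataQuality
    tier: Optional[Tier] = None    # None when data is unavailable -- never defaulted to a low tier
    sources: tuple[str, ...] = ()  # e.g. ("HESA Graduate Outcomes 2022/23",)

@dataclass
class UniversityProfile:
    name: str
    country: str
    dimensions: dict[str, DimensionAssessment]  # one entry per dimension, no overall composite

profile = UniversityProfile(
    name="Example University",  # hypothetical institution
    country="UK",
    dimensions={
        "Employability": DimensionAssessment(DataQuality.FULL, Tier.A,
                                             ("HESA Graduate Outcomes 2022/23",)),
        "Curriculum Relevance": DimensionAssessment(DataQuality.UNAVAILABLE),
    },
)
```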
Scoring Rubrics
Below are the indicative thresholds we use when assigning tiers. These are based on national benchmarks from official data sources — not arbitrary cutoffs.
Employability
| Tier | UK (Highly Skilled Employment, 15mo) | Australia (FT Employment, 4mo) | US (Median Earnings, 10yr) |
|---|---|---|---|
| S | ≥90% | ≥85% | ≥$80,000 |
| A | 80–89% | 78–84% | $60,000–$79,999 |
| B | 72–79% | 72–77% | $45,000–$59,999 |
| C | 60–71% | 65–71% | $35,000–$44,999 |
| D | <60% | <65% | <$35,000 |
Sources: HESA Graduate Outcomes 2022/23, QILT GOS 2024, US College Scorecard 2026. UK sector average (highly skilled employment): 72%. Australia national average (full-time employment): 74%. US "Do No Harm" threshold: ~$35,000.
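To show how a raw figure maps onto a band, here is a minimal sketch using the UK column above (highly skilled employment rate, 15 months after graduation). The function and constant names are illustrative only, not a published implementation.

```python
# Indicative UK thresholds from the table above (highly skilled employment rate, 15 months).
UK_EMPLOYABILITY_BANDS = [
    ("S", 0.90),
    ("A", 0.80),
    ("B", 0.72),
    ("C", 0.60),
]

def uk_employability_tier(highly_skilled_rate: float) -> str:
    """Map a highly skilled employment rate (0-1) to a tier band."""
    for tier, floor in UK_EMPLOYABILITY_BANDS:
        if highly_skilled_rate >= floor:
            return tier
    return "D"  # below the lowest threshold

# Example: a university at 83% highly skilled employment lands in tier A.
assert uk_employability_tier(0.83) == "A"
```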
Teaching Quality
| Tier | Student Satisfaction (UK NSS / AU QILT) | Student-to-Faculty Ratio |
|---|---|---|
| S | ≥88% / ≥84% | ≤8:1 |
| A | 85–87% / 79–83% | 9:1 – 12:1 |
| B | 80–84% / 74–78% | 13:1 – 18:1 |
| C | 75–79% / 68–73% | 19:1 – 25:1 |
| D | <75% / <68% | >25:1 |
Sources: NSS 2024 (UK sector average: 85%), QILT SES 2024 (AU average: 76.5%), NCES (US average ratio: 18:1).
Student Experience
| Tier | International Student Completion Rate |
|---|---|
| S | ≥90% |
| A | 80–89% |
| B | 70–79% |
| C | 55–69% |
| D | <55% |
Note: Network Strength, Curriculum Relevance, and Institutional Health use qualitative assessment frameworks rather than single-metric thresholds. We are developing quantitative rubrics for these dimensions.
Cross-Country Normalization
Comparing universities across countries is inherently difficult. Different nations survey graduates at different times (UK: 15 months, Australia: 4 months, Japan: at graduation), use different definitions of "employed," and operate in different labour markets.
Our approach:
- Country-relative benchmarking. We compare each university to its national average first. A university scoring 85% employment in Australia (where the average is 74%) is outperforming its national peers by a wider margin than one scoring 92% in the UK (where the average is 88%) — illustrated in the sketch below.
- Exclude non-seekers. Graduates in further study, with caring responsibilities, or in military service are removed from employment calculations — they aren't "unemployed."
- Flag incomparable data. When survey timings differ by more than 6 months, we note this with a 🟡 data quality flag rather than presenting figures as directly comparable.
- No single composite score. By keeping dimensions separate, we avoid the false precision of combining incomparable metrics into one number.
We do not use PPP salary adjustments (these systematically favor low-cost countries and distort comparisons at the city level). Instead, we report salary data as a percentile position within the national graduate distribution.
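As an illustration of the two approaches described above (country-relative benchmarking and percentile reporting), here is a minimal sketch. The national averages are the sector-wide figures quoted in this section, and the function names are our own.

```python
from bisect import bisect_right

# National graduate employment averages quoted in this section (illustrative, not exhaustive).
NATIONAL_EMPLOYMENT_AVERAGE = {"UK": 0.88, "AU": 0.74}

def relative_employment_performance(country: str, university_rate: float) -> float:
    """Percentage-point margin over the national average -- the comparison we report,
    rather than the raw cross-country figure."""
    return round((university_rate - NATIONAL_EMPLOYMENT_AVERAGE[country]) * 100, 1)

def national_salary_percentile(salary: float, national_distribution: list[float]) -> float:
    """Position of a median salary within a national graduate salary distribution,
    reported instead of PPP-adjusted salaries."""
    distribution = sorted(national_distribution)
    rank = bisect_right(distribution, salary)
    return round(100 * rank / len(distribution), 1)

# Example from the bullet above: +11 points over the Australian average
# versus +4 points over the UK average.
print(relative_employment_performance("AU", 0.85))  # 11.0
print(relative_employment_performance("UK", 0.92))  # 4.0
```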
Frequently Asked Questions
Why no overall score or single ranking number?
Because combining teaching quality, employability, and student experience into one number requires arbitrary weighting. A university that's S-tier for Network Strength but C-tier for Student Experience isn't "B-tier overall" — it's excellent for some students and poor for others. We show the full profile so families can weight dimensions according to their own priorities.
How often is data updated?
Annually, within 30 days of new government data releases (HESA, QILT, College Scorecard). Our current data vintage: HESA 2022/23, QILT 2024, College Scorecard March 2026.
Can universities request changes to their tier?
Yes — but only by providing verifiable public data sources that we missed. We will never change a tier in exchange for payment, partnership, or advertising. Universities can contact us with corrections.
Why do you exclude reputation surveys?
Reputation surveys (used by QS and THE for 15–33% of their total score) measure brand recognition, not educational quality. They reward marketing spend, create self-reinforcing loops (famous universities stay famous), and penalize excellent institutions that lack global name recognition. Research shows reputation scores correlate more with institutional wealth than with student outcomes.
How do you handle universities with limited data?
We mark dimensions as ⚪ (unavailable) rather than penalizing with a low score. A university with limited public data isn't necessarily poor — it may simply operate in a country with less transparent reporting. We never infer quality from absence of data.
Why tier bands (S/A/B/C/D) instead of precise ranks?
The difference between the "47th" and "53rd" best university is statistical noise. Tier bands communicate meaningful performance differences without implying false precision. This is the same principle THE uses when banding universities outside the top 200.
Data Sources
- HESA Graduate Outcomes Survey — UK employment and earnings (15 months post-graduation)
- QILT Graduate Outcomes Survey — Australian employment (4–6 months post-graduation)
- US College Scorecard — Earnings, debt, completion rates (US federal data)
- National Student Survey (NSS) — UK student satisfaction
- QILT Student Experience Survey — Australian student satisfaction
- IPEDS — US institutional data (enrollment, ratios, completion)
- Office for Students (OfS) — UK regulatory and financial data
- TEQSA — Australian higher education quality and regulation
Graduate Outcome Data
Where available, university profiles display graduate employment rates and median salary data from official government sources. This data is separate from our tier assessments — it provides additional context for families evaluating ROI.
| Country | Source | Metrics | Timing |
|---|---|---|---|
| US | College Scorecard (Dept. of Education) | Median earnings at 6 & 10 years, completion rate, admission rate | 10 years after entry |
| UK | LEO Provider-Level Data (DfE/HMRC) | Median salary, sustained employment rate | 1 year after graduation |
| Australia | QILT Graduate Outcomes Survey | FT employment rate, median salary | 4–6 months after graduation |
| Singapore | Graduate Employment Survey (MOE) | Median monthly salary, employment rate | 6 months after graduation |
| Canada | Ontario/BC Graduate Surveys | Median salary, employment rate | 2 years after graduation |
| Hong Kong | University Graduate Employment Surveys | Median monthly salary, employment rate | 6 months after graduation |
| Japan | MEXT School Basic Survey | Employment decision rate only | At graduation |
| Europe | University career services & national surveys | Varies by institution | Varies |
Important: These figures are not directly comparable across countries due to different survey timings, definitions of "employed," and labour market structures. We display the data source and timing on each profile so families can interpret appropriately.
Universities without publicly available outcome data show a "⚪ unavailable" indicator. This does not imply poor outcomes — many excellent institutions (particularly in Germany, Switzerland, and the Netherlands) simply do not publish institution-level graduate data.
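To show how this context travels with each profile, here is a minimal sketch (field names are our own) of an outcome record that always carries its source and survey timing, so figures are read in context rather than compared directly across countries.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GraduateOutcomeData:
    country: str
    source: str                       # e.g. "QILT Graduate Outcomes Survey"
    survey_timing: str                # e.g. "4-6 months after graduation"
    employment_rate: Optional[float]  # None -> shown as "⚪ unavailable", never as poor outcomes
    median_salary: Optional[float]    # reported in local currency, never PPP-adjusted

australia_example = GraduateOutcomeData(
    country="Australia",
    source="QILT Graduate Outcomes Survey",
    survey_timing="4-6 months after graduation",
    employment_rate=0.79,             # illustrative figure, not a real university
    median_salary=70_000,
)
```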
Version History
- v1.2 — May 2026: Added graduate outcome data (salary, employment) to 156 university profiles across 8 countries.
- v1.1 — May 2026: Added scoring rubrics, cross-country normalization explanation, FAQ, linked data sources.
- v1.0 — April 2026: Initial methodology published with 200 universities across 15 countries.
What We Don't Measure
We are honest about our limitations:
- Reputation — we deliberately exclude reputation surveys (they reward marketing spend and create self-reinforcing loops)
- Research citations — these measure research output, not educational quality. A brilliant researcher may be a poor teacher.
- Campus beauty — subjective and irrelevant to outcomes
- Sports programs — not relevant to our audience's decision criteria
- Individual program quality — our current assessment is at institutional level. Program-level data is a future goal.
How to Use This Guide
- Identify which dimensions matter most to your family
- Filter universities by country, curriculum acceptance, or dimension
- Compare tier profiles across your shortlist
- Read our framework article for deeper guidance on evaluation
- Use this as a starting point — not a final answer. Visit campuses, talk to current students, and assess fit personally.
Feedback & Updates
This guide is updated annually. We welcome corrections, suggestions, and criticism. If you represent a university and believe our assessment is missing data, you can contact us with verifiable public sources.
We will never change a tier assessment in exchange for payment. Period.