Context
The Academic Reputation Index is the centerpiece of the QS World University Rankings®, carrying a weighting of 40%. It is an approach to international university evaluation that QS pioneered in 2004, and it is the component that attracts the greatest interest and scrutiny. In concert with the Employer Reputation Index, it is the aspect that sets this ranking most clearly apart from any other. It seeks to answer the powerful question of which universities are performing world-class research. The answer not only illuminates the quality of that research, but also the strength of the university in communicating it and the impact it makes across the world.
Source of Respondents
The results are based on the responses to a survey distributed to academics worldwide, drawn from a number of different sources:
- Previous Respondents
- Submitted contact lists from institutions (see Survey Nominations Procedure)
- Sign-ups on our sign-up facility (see Survey Nominations Procedure)
- IBIS database (see IBIS)
The Survey
The survey is sent to many thousands of academics around the world each year. It has largely followed the same principles since its inception, with some variation reflecting academic themes of interest over time. At the beginning of the survey, academics state their field specialism and their regional familiarity; these answers then determine the range of responses available to them in the remainder of the survey. We ask the following questions of each respondent:
Individual Characteristics
- Their name
- Their institution
- Their job
- The number of years they have been in academia
Knowledge Specification
- Which country/territory they are most familiar with, from an academic perspective. This will define the list of institutions from which the respondent can nominate domestically or internationally.
- Which region(s) they are most familiar with, from an academic perspective. Regional knowledge responses are grouped into three supersets that define the list of institutions from which the respondent can select: the Americas, APAC (Asia, Australia & New Zealand) and EMEA (Europe, Middle East & Africa).
- The faculty area in which they are most active and knowledgeable.
- The specific field (up to a maximum of two)* that they specialize in.
* Certain QS Subjects are not explicitly present in the survey form. This includes Geology, Geophysics and Petroleum Engineering. In such cases we derive their nominations and further transformations (see below) from the corresponding proxy field of study, which is available in the survey form: Geology and Geophysics are fully derived from Earth & Marine Sciences, while Petroleum Engineering is a weighted sum of Chemistry (5%), Environmental Sciences (5%), Earth & Marine Sciences (30%), Chemical Engineering (30%) and Electrical & Electronic Engineering (30%).
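To make the proxy derivation concrete, the short Python sketch below shows how such a weighted sum could be computed, using the Petroleum Engineering proxy weights stated above. The nomination counts and variable names are purely illustrative and are not QS data.

```python
# Illustrative sketch only: deriving nomination counts for a subject absent from
# the survey form (here Petroleum Engineering) as a weighted sum of proxy fields.
# The per-field nomination counts below are hypothetical.
proxy_nominations = {
    "Chemistry": 40,
    "Environmental Sciences": 20,
    "Earth & Marine Sciences": 60,
    "Chemical Engineering": 80,
    "Electrical & Electronic Engineering": 50,
}

# Proxy weights for Petroleum Engineering, as stated in the note above.
petroleum_weights = {
    "Chemistry": 0.05,
    "Environmental Sciences": 0.05,
    "Earth & Marine Sciences": 0.30,
    "Chemical Engineering": 0.30,
    "Electrical & Electronic Engineering": 0.30,
}

petroleum_nominations = sum(
    petroleum_weights[field] * count for field, count in proxy_nominations.items()
)
print(petroleum_nominations)  # 0.05*40 + 0.05*20 + 0.30*(60 + 80 + 50) = 60.0
```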
Top Domestic Institutions
- Academics are asked to nominate up to 10 institutions from their country/territory of knowledge that they think are producing the top research in their faculty area. Their own institution is not available for selection.
Top International Institutions
- Academics are asked to nominate up to 30 institutions outside of their country/territory of knowledge that they think are producing the top research in their faculty area. Their own institution is excluded. The list consists solely of institutions from the region(s) with which they have expressed familiarity.
Additional Questions
To address specific higher education insight needs, or to gather feedback on our products, additional questions may be asked. These questions necessarily vary from year to year and are not shared in advance of our survey.
If an academic respondent selects Business & Management, Accounting & Finance or Marketing as their field (narrow subject) of knowledge in the main track of the academic survey, then we ask what level of education they are primarily focused on in their current role (Undergraduate, Masters, Doctoral, etc.). If Masters level is selected, the business school track of the academic survey commences.
Top Business Schools
Academics are asked to identify up to 10 business schools (selected via autocomplete), either domestic or international, that they regard as producing the best research in their field(s) of expertise. Their own institution is excluded. The list consists of all business schools (both standalone and child institutions), regardless of the region of knowledge selected in the main track of the survey.
Data cleaning and validity checks
Once the survey has been collated, a variety of checks and balances are performed to ensure the responses are valid, useable and complete. For reasons of data integrity and to prevent attempts to game the process, we do not publish the comprehensive list of our checks and validations.
Step by Step Analysis
Once the responses have all been processed, we apply the following procedures to all of the nominations for each of our five broad faculty areas (in the case of the QS World University Rankings, QS University Rankings by Region or QS Rankings by Faculty) or for each of our individual narrow subject areas (in the case of the QS Rankings by Subject).
- Regional Familiarity and Faculty Knowledge Weights
- Devise and apply weightings based on the regional and faculty familiarity of respondents. This is done to balance the representation of the three regional supersets (see above) in our surveys; respondents may express familiarity with more than one region. The aim is to ensure that over-represented regions and faculty areas do not obscure nominations from less represented ones.
- Country Weights
- Devise weightings based on the location with which respondents consider themselves familiar. Here we look at the number of well-recognized institutions in a location relative to the number of responses originating from it, such that locations with many recognized institutions also tend to attract many responses. We largely expect the volume of responses from a country to correlate with its international recognition. Locations with a low participation rate are exempted from this adjustment to avoid small-number effects.
- International Weighted Count
- Derive a weighted count of international nominations for each institution (excluding self-nominations). Here we use a five-year aggregation of nominations, in which the two earliest years are weighted at 25% (year 5) and 50% (year 4), and the three most recent years at full 100% weight.
- Domestic Weighted Count
- Derive a weighted count of domestic nominations for each institution (excluding self-nominations). This is adjusted against the number of institutions from that country with a certain level of international nominations and the total volume of responses from that country. Institutions in larger countries with more recognized peers naturally face more competition for nominations, and this adjustment is designed to reflect and reward that.
- Normalize both the domestic and international counts to achieve a score out of 100.
- Combine the two scores with the relevant weights (see the table below).
- Apply various transformation techniques to minimize the impact of outliers and scale the numbers, producing a score out of 100 for the given faculty area. An illustrative sketch of these later steps follows this list.
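The Python sketch below illustrates how the later steps above (the five-year weighted counts, the normalization to a 0-100 scale, and the weighted combination of domestic and international scores) could fit together. It is an illustration under stated assumptions, not QS's actual implementation: the nomination data are invented, and the min-max normalization is just one possible choice of scaling.

```python
# Illustrative sketch of the later steps of the reputation analysis.
# All data, function names, and the min-max normalization are assumptions.

def five_year_weighted_count(counts_by_year):
    """counts_by_year: nominations per year, most recent first (length 5).
    The three most recent years count at 100%, year 4 at 50%, year 5 at 25%."""
    year_weights = [1.0, 1.0, 1.0, 0.5, 0.25]
    return sum(w * c for w, c in zip(year_weights, counts_by_year))

def normalize_to_100(values):
    """Simple min-max scaling to a 0-100 range (one possible normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [100.0 for _ in values]
    return [100.0 * (v - lo) / (hi - lo) for v in values]

# Hypothetical weighted nomination counts for three institutions.
international = [five_year_weighted_count(c) for c in ([120, 110, 100, 90, 80],
                                                        [60, 55, 50, 45, 40],
                                                        [20, 18, 15, 12, 10])]
domestic = [five_year_weighted_count(c) for c in ([40, 38, 35, 30, 25],
                                                   [50, 48, 45, 40, 35],
                                                   [10, 9, 8, 7, 6])]

intl_scores = normalize_to_100(international)
dom_scores = normalize_to_100(domestic)

# Combine with the weights from the table below, e.g. 15% domestic and
# 85% international for the QS World University Rankings.
combined = [0.15 * d + 0.85 * i for d, i in zip(dom_scores, intl_scores)]
print(combined)
```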
QS World University Rankings and QS University Rankings by Region
The scores across the five faculty areas are then combined with an equal weighting to produce the final Academic Reputation score for each institution. The assumption here is that, in a typical international comprehensive university, each of these faculty areas represents a roughly equal share of activity. Looking at the distribution of students might suggest a greater emphasis on Arts & Humanities and Social Sciences in many countries, whilst looking at the allocation of research funding would lean towards medicine and the sciences, where research is typically more expensive. Equalizing these faculty areas for Academic Reputation therefore seems a fair and balanced approach. In other words, institutions with a skewed distribution of nominations across faculty areas may perform less well than those with a flatter distribution.
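As a minimal illustration of this equal weighting, the snippet below averages five hypothetical faculty-area scores into a single Academic Reputation score; the scores (and the example as a whole) are assumptions for illustration only.

```python
# Minimal sketch of the equal-weighted combination across the five broad faculty
# areas. The scores below are hypothetical.
faculty_scores = {
    "Arts & Humanities": 72.0,
    "Engineering & Technology": 88.0,
    "Life Sciences & Medicine": 65.0,
    "Natural Sciences": 80.0,
    "Social Sciences & Management": 75.0,
}

# Each faculty area contributes an equal 20% share to the combined score.
academic_reputation = sum(faculty_scores.values()) / len(faculty_scores)
print(academic_reputation)  # 76.0
```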
QS Subject Rankings
In our Subject Rankings, there is a possibility that institutions with well-known strengths in a given discipline may be undervalued with respect to comprehensive institutions with a strong overall reputation and research profile. An example could be a specialized Art & Design institution vs. a large multidisciplinary university. To address this, and better identify institutions with key strengths in a particular area, we apply the following adjustments where relevant.
- We look at the divergence between academic reputation in the specific subject and academic reputation in the corresponding broad subject area (or, for broad subject area rankings, between academic reputation in the broad subject area and overall academic reputation). This means that the academic reputation scores of institutions that fare better in the specific discipline than in the associated broad faculty area are given a proportional boost, while those that fare worse have their shortfalls proportionally amplified (see the illustrative sketch after this list). The result is that the key strengths of institutions shine brighter, and less credit is attributed to overall reputation and strength in adjoining disciplines.
- An extra boost may be applied to institutions identified as Specialists (see QS Institution Classification), if they offer academic programs in the relevant subject area (for QS Rankings by Subject) or faculty area (for QS Rankings by Faculty).
- Responses from academics expressing knowledge of a single specific discipline are given additional weight.
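The exact functional form of the divergence adjustment is not specified here; the sketch below is one purely illustrative way such a proportional boost and amplification could work, scaling the subject-level score by its ratio to the corresponding broad-area score. The `strength` parameter is hypothetical, and the resulting values would subsequently be rescaled as part of the usual score transformations.

```python
# Purely illustrative divergence-based adjustment; not QS's published formula.
# Institutions stronger in the specific discipline than in the broad area are
# boosted proportionally, while weaker ones are penalized proportionally.

def adjust_for_divergence(subject_score, broad_area_score, strength=1.0):
    """Scale a subject score by its ratio to the corresponding broad-area score.
    `strength` (hypothetical) controls how aggressive the adjustment is."""
    if broad_area_score <= 0:
        return subject_score
    ratio = subject_score / broad_area_score
    return subject_score * ratio ** strength

# A specialized institution strong in the subject but weaker overall is boosted:
print(adjust_for_divergence(subject_score=85.0, broad_area_score=60.0))  # ~120.4 before rescaling
# A comprehensive institution relying on broad reputation is dampened:
print(adjust_for_divergence(subject_score=55.0, broad_area_score=80.0))  # ~37.8
```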
Domestic and international nomination weights used in various rankings
Rankings | Domestic nominations | International nominations |
QS World University Rankings and QS University Rankings by Region* | 15% | 85% |
QS Subject Rankings* | 33% | 67% |
QS Global MBA Rankings | 30% | 70% |
QS Business Masters Rankings | 60% | 40% |
QS Executive MBA Rankings | 50% | 50% |
QS Online MBA Rankings | 50% | 50% |
* As a general principle, we expect the volume of responses from a country to correlate with the number of institutions available in our ranking, and particularly with the number of high-performing institutions (impact). If, however, an anomalous number of responses is observed from a country that does not meet this 'volume by impact' expectation, we inspect the nominations more thoroughly. If the highest nominating country is a neighboring country, and the country in question in turn provides more than 10% of all the international nominations received by that neighbor, we adjust these mutual nominations to a weight of 15% in the analysis.
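A schematic Python sketch of this neighbor-nomination check follows. The data structures, country labels and helper function are hypothetical; only the 10% share threshold and the 15% weight are taken from the note above.

```python
# Schematic sketch of the reciprocal neighbor-nomination check described above.

def adjust_neighbor_nominations(nominations_by_source, neighbor_pairs):
    """nominations_by_source: {target_country: {source_country: count}}.
    If the top source of a country's international nominations is a neighbor,
    and that country in turn supplies more than 10% of the neighbor's
    international nominations, the mutual nominations are weighted at 15%."""
    weights = {}  # (source, target) -> weight applied in the analysis
    for target, sources in nominations_by_source.items():
        total = sum(sources.values())
        if not total:
            continue
        top_source = max(sources, key=sources.get)
        if (top_source, target) not in neighbor_pairs and (target, top_source) not in neighbor_pairs:
            continue
        reciprocal = nominations_by_source.get(top_source, {})
        reciprocal_total = sum(reciprocal.values())
        if reciprocal_total and reciprocal.get(target, 0) / reciprocal_total > 0.10:
            weights[(top_source, target)] = 0.15
            weights[(target, top_source)] = 0.15
    return weights

# Hypothetical example: countries A and B are neighbors that nominate each other heavily.
noms = {"A": {"B": 500, "C": 100}, "B": {"A": 300, "C": 900}}
print(adjust_neighbor_nominations(noms, neighbor_pairs={("A", "B")}))
```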
In Business School Rankings, the analysis follows the same step-by-step procedure, with the following caveats:
- We do not break down our analysis by faculty area
- A regional weighting (step 1) is not applied
- Standalone business schools that receive nominations are boosted, to counteract the advantage that affiliate/child schools gain from any halo effect of the parent institution.
- If a business school or its parent institution (if any) was nominated in the main track of the survey in one of the business-related subject areas (see below), then the business school is rewarded additionally, to reflect its broader brand awareness. A schematic sketch of these adjustments follows this list.
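For illustration only, the sketch below shows how the two boosts listed above could be applied to a business school's weighted score. The boost factors are entirely hypothetical and are not taken from this document.

```python
# Highly schematic sketch of the two business-school adjustments listed above.
STANDALONE_BOOST = 1.10  # hypothetical uplift for standalone schools
MAIN_TRACK_BOOST = 1.05  # hypothetical reward for main-track recognition

def adjusted_school_score(base_score, is_standalone, nominated_in_main_track):
    """Apply the standalone-school boost and the main-track recognition reward
    to a business school's weighted nomination score."""
    score = base_score
    if is_standalone:
        score *= STANDALONE_BOOST
    if nominated_in_main_track:
        score *= MAIN_TRACK_BOOST
    return score

print(adjusted_school_score(70.0, is_standalone=True, nominated_in_main_track=True))  # 80.85
```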
Mapping between subject areas available for selection in the QS Academic Survey and QS Global MBA Rankings (MBA); QS Business Masters Rankings: Masters in Management (MIM), Masters in Finance (MIF), Masters in Business Analytics (MSB), Masters in Marketing (MMK), Masters in Supply Chain Management (MSM); QS Online MBA Rankings (OMBA); QS Executive MBA Rankings (EMBA)
Subject area | MBA | MIM | MIF | MSB | MMK | MSM | OMBA | EMBA |
Accounting & finance | V | V | V | V | V | |||
Business & management studies | V | V | V | V | V | V | V | V |
Communication, cultural & Media studies | V | |||||||
Computer science | V | |||||||
Economics & econometrics | V | V | V | V | ||||
Marketing | V | V | V | V | ||||
Mathematics | V | |||||||
Statistics & operational research | V | V | V |