We calculate scores across all analysed institutions, not just the subset that is finally published (ranked). This means the distribution of indicator or overall scores may not span the full 0–100 range in the publicly available results table.
Another consequence is that, in editions with a larger sample of analysed institutions, the score distribution may change (stretch), so the same score can correspond to a higher rank in one edition and a lower rank in another.
To understand how ranking scores are calculated, please see the following process:
- We use the raw ratio or index as the original input;
- We apply normalization to all institutions' ratios/indices to standardize the input, generating a Z-Score for each institution;
- We scale the Z-Scores for all institutions to a 0–100 range.
Ratio/Index (input) → Z-Score → Scaled Score
More technical details are available here: https://support.qs.com/hc/en-gb/articles/4402503754130-Z-Score-Normalization.
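The three steps above can be sketched as follows. This is a minimal illustration that assumes the 0–100 scaling is a simple min–max rescale of the Z-Scores; the linked article describes the exact method used.

```python
import statistics

def scaled_scores(values):
    """Z-score normalize raw ratios/indices, then rescale to 0-100.

    A sketch of the three-step process only: the actual QS scaling
    may differ (e.g. clamping of outliers)."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population standard deviation
    z = [(v - mean) / stdev for v in values]  # step 2: Z-Scores
    lo, hi = min(z), max(z)
    # step 3: min-max rescale of the Z-Scores to the 0-100 range
    return [100 * (s - lo) / (hi - lo) for s in z]

# Hypothetical raw ratios for five institutions (step 1: the input)
ratios = [0.8, 1.2, 2.5, 0.4, 3.1]
print([round(s, 1) for s in scaled_scores(ratios)])
# → [14.8, 29.6, 77.8, 0.0, 100.0]
```

Note that because scores are computed over all analysed institutions, the published (ranked) subset alone need not contain both a 0 and a 100.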
Publication of Scores and Ranks:
Every institution has a published rank, which is either unique, joint, or within a band.
All indicator ranks are published; they are exact up to a certain point and banded thereafter. All indicator ranks are based on the underlying ratios or indices behind the corresponding scores, not on the scores themselves. This does not affect the rankings results, but it gives institutions more opportunity to showcase their performance, and allows rankings users to analyse that performance at a more granular level.
All indicator scores are displayed for all ranked institutions.
Overall scores are displayed to a certain point (e.g. rank 500 in our World University Rankings) and hidden thereafter.
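To illustrate the publication rules above, a hypothetical rank formatter might look like the sketch below. The exact-rank cutoff of 500 and the band width of 50 are illustrative values only, not QS's actual parameters, and `format_rank` is not a QS function.

```python
def format_rank(rank, joint, exact_cutoff=500, band_width=50):
    """Format a published rank: '=' marks a joint rank, ranks are
    exact up to a cutoff and banded thereafter.

    Cutoff and band width are illustrative assumptions."""
    if rank <= exact_cutoff:
        return f"={rank}" if joint else str(rank)
    # band lower bound: e.g. ranks 501-550 fall in the "501-550" band
    lo = ((rank - 1) // band_width) * band_width + 1
    return f"{lo}-{lo + band_width - 1}"

print(format_rank(42, joint=True))    # prints "=42"
print(format_rank(523, joint=False))  # prints "501-550"
```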
Scores and Ranks in Sub-regional Rankings
An overall rank in the overall rankings is converted into an overall rank in the corresponding sub-regional rankings, using the same overall scores as in the overall rankings. Indicator ranks are not available in sub-regional rankings.
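A sketch of how sub-regional ranks could be derived from the same overall scores: filter to the region, then re-rank by score. The field names, and the use of standard competition ranking for joint (equal-score) institutions, are illustrative assumptions.

```python
def subregional_ranks(institutions, region):
    """Re-rank the regional subset on the same overall scores used
    in the overall ranking. Field names are illustrative."""
    subset = sorted(
        (i for i in institutions if i["region"] == region),
        key=lambda i: i["score"],
        reverse=True,
    )
    ranked = []
    for pos, inst in enumerate(subset, start=1):
        # equal scores share a joint rank (standard competition ranking)
        if ranked and inst["score"] == ranked[-1]["score"]:
            rank = ranked[-1]["regional_rank"]
        else:
            rank = pos
        ranked.append({**inst, "regional_rank": rank})
    return ranked
```

The ordering within a region is therefore fully determined by the overall scores; no scores are recomputed for the sub-regional table.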