The calculation of indicator scores
- First, the widely used z-score normalization (or standardization) is applied. A z-value expresses how many standard deviations a given data point lies above the mean (positive z-value), below it (negative z-value) or exactly on it (zero z-value)*.
- Once z-scores are calculated, each one is mapped to its position on the normal curve (the cumulative normal distribution), producing a value between 0 and 1 for each indicator: the probability that a randomly chosen institution from the population has an indicator value less than or equal to the given one. For example, if an institution's Academic Reputation equals 0.9 at this step, it performs in the top 10% of institutions on this indicator.
- Finally, the resulting scores are linearly scaled to a 1 to 100 range for each indicator using min-max normalization (see the sketch after this list).
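
As a rough illustration only, the three steps above can be sketched in Python. The function name, the use of the sample mean and standard deviation, and the example values are assumptions for the sketch, not QS's published code:

```python
import math
from statistics import mean, stdev

def indicator_scores(values, lo=1.0, hi=100.0):
    """Sketch of the three indicator-scaling steps described above."""
    # Step 1: z-score normalization against the sample mean and standard deviation
    mu, sigma = mean(values), stdev(values)
    z = [(v - mu) / sigma for v in values]
    # Step 2: position on the normal curve, i.e. the cumulative probability in [0, 1]
    p = [0.5 * (1.0 + math.erf(zi / math.sqrt(2.0))) for zi in z]
    # Step 3: linear min-max scaling to the target range (1 to 100 by default)
    p_min, p_max = min(p), max(p)
    return [lo + (pi - p_min) * (hi - lo) / (p_max - p_min) for pi in p]

# Example: five institutions' raw values for one (hypothetical) indicator
print(indicator_scores([42.0, 55.0, 61.0, 70.0, 88.0]))
```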
The calculation of overall scores
Once all indicators are on a comparable scale, the data can be combined reliably and weightings applied fairly in calculating the overall score.
The final overall score is then rescaled using min-max normalization, with the maximum overall score mapped to 100 and the minimum to 1. For the MBA and Business Masters Rankings, scores are scaled to a 20 to 100 range. A sketch of both steps follows.
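The sketch below shows one way the weighted combination and final rescaling could be implemented. The data layout (dictionaries of per-institution score lists), the weight values, and the function name are assumptions for illustration; the handling of missing data is not described here:

```python
def overall_scores(indicator_scores, weights, lo=1.0, hi=100.0):
    """Sketch: weighted combination of per-indicator scores, then min-max rescaling."""
    names = list(weights)
    n = len(indicator_scores[names[0]])
    # Weighted sum of the (already scaled) indicator scores for each institution
    raw = [sum(weights[k] * indicator_scores[k][i] for k in names) for i in range(n)]
    # Final min-max rescaling of the overall score to the target range
    r_min, r_max = min(raw), max(raw)
    return [lo + (r - r_min) * (hi - lo) / (r_max - r_min) for r in raw]

# Hypothetical indicators and weights; pass lo=20.0 for the MBA / Business Masters scaling
scores = overall_scores(
    {"academic_reputation": [90.1, 55.3, 12.7], "citations": [70.0, 80.5, 30.2]},
    {"academic_reputation": 0.6, "citations": 0.4},
)
print(scores)
```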
*As the number of institutions in a given ranking grows, this can have an inconsistent effect on how z-score normalization applies to different indicators. Typically, newly added institutions tend to perform more weakly on the reputation and research indicators, but may be stronger on faculty student ratio or the international measures. The effect has been to pull down the means of indicators more closely correlated with overall performance faster than the means of indicators less strongly correlated with it. From 2016, we have locked the mean and standard deviation used for the standardization calculations in the QS World University Rankings to the top X institutions in any given indicator (e.g. X=700 for Citations per Faculty), not including capped values. One impact is that institutions above the mean are spaced out a little more; another is that an institution in the same rank position will typically have a lower score than previously in any given indicator.
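
A minimal sketch of this locked-parameter standardization, assuming the mean and standard deviation are computed from the highest-scoring top_x uncapped values and then applied to every institution (the cap handling and function name are assumptions):

```python
from statistics import mean, stdev

def locked_z_scores(values, top_x=700, cap=None):
    """Sketch: z-scores against a mean/standard deviation locked to the top `top_x`
    uncapped values of the indicator. top_x=700 mirrors the Citations per Faculty
    example in the footnote."""
    # Exclude capped values before selecting the top X (assumed behaviour)
    eligible = [v for v in values if cap is None or v < cap]
    top = sorted(eligible, reverse=True)[:top_x]
    mu, sigma = mean(top), stdev(top)
    # Every institution, including those outside the top X, is scored
    # against the locked parameters
    return [(v - mu) / sigma for v in values]
```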