Ruth’s Rankings 27: Reputation, Rankings and Reality – Times Higher Education rolls out 2017 Reputation Rankings

By Ruth A. Pagell*

(24 June 2017) I usually do not cover reputation rankings, which measure opinion rather than empirical performance. Our last News Flash, on the 2018 QS rankings, alerted readers to the way QS handles its reputation indicators, which comprise 50% of the composite score. When THE announced its 2017 Reputation Rankings, I decided to investigate the following:

  • THE’s stand-alone reputation rankings of the top 100 universities in the world and changes since the first ranking in 2011 (Table 27.1)
  • THE’s reputation rankings compared to its World Rankings which have a reputation component (Figure 27.2 combined with Tables 27.2a and 27.2b)
  • THE’s Reputation results compared to the QS results (Table 27.3)

Times Higher Education bases the rankings on its Academic Reputation Survey, first reported in 2011. The ranking includes two indicators, Research and Teaching; Research is weighted twice as much as Teaching. The methodology explains how the data are collected and adjusted to reflect the geographic and discipline distribution of scholars. See Figure 27.1 for a map showing the geographic and subject distribution of responses.


One hundred universities are ranked, but only the top 50 receive scores. Scoring is straightforward: the top university receives a score of 100, based on the number of times it is mentioned in the surveys, and each subsequent score is a percentage of the top score. For example, Harvard is number one with a score of 100, and 11th-ranked University of Tokyo has a score of 26 because it received 26% as many mentions as Harvard.
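The scoring rule described above is simple enough to sketch in a few lines. The mention counts below are hypothetical, chosen only to reproduce the 100-to-26 relationship in the Harvard/Tokyo example; they are not real survey data.

```python
def reputation_scores(mentions):
    """Score each university as a percentage of the most-mentioned one.

    `mentions` maps university name -> number of survey mentions.
    The leader scores 100; everyone else scores (own mentions / leader's mentions) * 100.
    """
    top = max(mentions.values())
    return {name: round(100 * count / top, 1) for name, count in mentions.items()}

# Hypothetical mention counts, for illustration only.
counts = {"University A": 500, "University B": 130}
scores = reputation_scores(counts)
# University A, the most-mentioned, scores 100;
# University B scores 26.0, i.e. 26% as many mentions as the leader.
```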

There is little change between the 2011 and 2017 rankings. Given the small dataset, there is also little difference between the composite and teaching rankings. Harvard is top on both indicators in 2017 and in 2011. Nine of the top ten in Reputation in the world and eight of the top ten in Asia-Pacific are the same in 2017 as in 2011. See Table 27.1 for the rankings in 2017 and 2011, including a list of all the ranked Asia-Pacific universities.

The reputation data are incorporated into the annual World University Rankings as part of the composite score. The survey data make up 18% of the Research indicator and 15% of Teaching. See Tables 27.2a and 27.2b for a comparison of world and reputation rankings for the world top 10 and the top universities in Asia-Pacific.
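Those two weights account for the figure, cited later in this article, that one third of THE's World Ranking rests on reputation surveys. Assuming both percentages are expressed against the overall composite score (which is how the 33% total works out), the arithmetic is:

```python
# Reputation survey's contribution to THE's composite World University Ranking,
# assuming both percentages are shares of the overall score.
teaching_reputation = 15  # percentage points contributed via the Teaching pillar
research_reputation = 18  # percentage points contributed via the Research pillar

total_reputation_weight = teaching_reputation + research_reputation
print(total_reputation_weight)  # 33 (percent of the composite score)
```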

REALITY CHECK – Teaching Reputation

Just when I thought this article was finished, the Higher Education Funding Council for England (HEFCE) released its new Teaching Excellence Framework (TEF). THE presented the results of the TEF along with its own ranks from the World University Rankings 2016-2017, but not from its Teaching Reputation indicator. See Appendix 27A for comparisons of THE Teaching Reputation and local rankings from the U.K., U.S. and China.


THE's reputation results also differ from those of QS (Table 27.3). The two rankers use different survey methodologies and scoring protocols. Eight of the top ten in the world and in Asia are the same for THE's Research indicator and QS's Academic indicator, which includes both research and teaching. QS ranks and scores 400 of its 959 universities; the top 11 in the academic ranking all have scores of 100, and the lowest-ranked university has a score of 27.3. Marginson (2014), in his evaluation of current ranking systems, criticizes both systems and the use of reputation as a metric.


I started this as a quick look at an indicator generally ignored in the bibliometric literature but covered more fully in the higher education literature (Note 1). I remain skeptical about the relationship between reputation and reality.

Marginson (2014) emphasizes the need for social scientists to become more involved in university rankings research to improve the quality of the overall ranking.

I have two personal recommendations:

1) Users and authors should be more critical of two of the most popular ranking systems, QS and THE, which base 50% and 33% of their rankings, respectively, on reputation surveys.

2) Building on Marginson’s concern, more multi-disciplinary research is needed on rankings in general and on the effect of reputation on the rankings in particular.

I still am not sure what comes first, the reputation or the rankings.


  1. The higher education literature covers the search terms “reputation and universities and rankings,” with little coverage in the information science literature. Only eight articles from Scientometrics combined the three terms (Scopus and Web of Science, 21 June 2017). A search for “citations and universities and rankings” returns only six percent of its results from the higher education literature. Combining “reputation and citations” with “universities and rankings” yielded only 47 articles.


“Once reputational assessments are formed, they are often quite difficult to change without specific evidence to the contrary.”

“Rankings have placed a new premium on status and elite institutions, reinforcing reputation and vice-versa, with a strong bias towards long established and well-endowed institutions.” (pg. 28)

“Rankings have an irreducible reputation making role” … “ground that role [in] performance … rather than use comparisons in which reputation drives reputation in a circular effect” (pg. 56)

“…rankings undermine meritocracy, reinforcing the reputation of old universities and thus rewarding past rather than present achievements.” (pg. 230). Safon suggests a vicious circle between reputation and ranking.

Ruth’s Rankings

A list of Ruth’s Rankings is here.

*Ruth A. Pagell is emeritus faculty librarian at Emory University. After working at Emory, she was the founding librarian of the Li Ka Shing Library at Singapore Management University and then adjunct faculty [teaching] in the Library and Information Science Program at the University of Hawaii. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS –