By Ruth A. Pagell*
(24 June 2017) I usually do not cover reputation rankings, which reflect opinion rather than empirical performance. Our last News Flash, on the 2018 QS rankings, alerted readers to the way QS handles its reputation indicators, which make up 50% of the composite score. When THE announced its 2017 Reputation Rankings, I decided to investigate the following:
- THE’s stand-alone reputation rankings of the top 100 universities in the world and changes since the first ranking in 2011 (Table 27.1)
- THE’s reputation rankings compared to its World Rankings which have a reputation component (Figure 27.2 combined with Tables 27.2a and 27.2b)
- THE’s Reputation results compared to the QS results (Table 27.3)
Times Higher Education bases the rankings on its Academic Reputation Survey, first reported in 2011. The ranking includes two indicators, Research and Teaching, with Research weighted twice as much as Teaching. The methodology explains how the data are collected and adjusted to reflect the geographic and discipline distribution of scholars. See Figure 27.1 for a map showing the geographic and subject distribution of responses.
One hundred universities are ranked, but only the top 50 receive scores. Scoring is straightforward: the top university receives a score of 100 based on the number of times it is mentioned in the surveys, and each subsequent score is a percentage of the top score. For example, Harvard is number one with a score of 100, and 11th-ranked University of Tokyo scores 26 because it received 26% as many mentions as Harvard.
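The scoring rule above can be sketched as a small calculation. This is an illustration only: the function name and the vote counts are invented, not THE's actual survey data.

```python
# Sketch of THE's reputation scoring rule: the most-mentioned university
# scores 100, and every other score is its mention count expressed as a
# percentage of the leader's count. All numbers here are hypothetical.

def reputation_scores(mentions):
    """Map raw survey mention counts to scores relative to the leader."""
    top = max(mentions.values())
    return {name: round(100 * count / top, 1)
            for name, count in mentions.items()}

# Invented mention counts for illustration
votes = {"Harvard": 5000, "University A": 3400, "University B": 1300}
scores = reputation_scores(votes)
# Harvard scores 100.0; University B scores 26.0, i.e. it received
# 26% as many mentions as the leader
```

Note that because every score is relative to the leader, a score says nothing about the absolute number of mentions, only the gap from the top; this is why scores fall off so steeply below the first few universities.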
There is little change between the 2011 and 2017 rankings. Given the small dataset, there is also little difference between the composite and teaching rankings: Harvard tops both indicators in 2017, as it did in 2011. Nine of the top ten in Reputation in the world and eight of the top ten in Asia-Pacific are the same in 2017 as in 2011. See Table 27.1 for the 2017 and 2011 rankings, including a list of all ranked Asia-Pacific universities.
The reputation data are incorporated into the annual World University Rankings as part of the composite score: the survey data make up 18% of the Research indicator and 15% of Teaching. See Table 27.2 for a comparison of world and reputation rankings for the world top ten and the top Asia-Pacific universities.
REALITY CHECK – Teaching Reputation
Just when I thought this article was finished, the Higher Education Funding Council for England (HEFCE) released its new Teaching Excellence Framework (TEF). THE presented the results of TEF along with its own ranks from the World University Ranking 2016-2017, but not from its Teaching Reputation. See Appendix 27A for comparisons of THE Teaching reputation and local rankings from the U.K., U.S. and China.
COMPARING THE and QS
The two rankers use different survey methodologies and scoring protocols. Eight of the top ten in the world, and eight of the top ten in Asia, are the same for THE’s Research indicator and QS’s Academic indicator, which covers both research and teaching. QS ranks and scores 400 of its 959 universities; the top 11 in its academic ranking all score 100, and the lowest-ranked university scores 27.3. Marginson (2014), in his evaluation of current ranking systems, criticizes both systems and the use of reputation as a metric.
I started this as a quick look at an indicator generally ignored in the bibliometric literature but with better coverage in the higher education literature. (Note 1) I am skeptical about the relationship between reputation and reality.
Marginson (2014) emphasizes the need for social scientists to become more involved in university rankings research to improve the quality of the overall ranking.
I have two personal recommendations:
1) Users and authors should be more critical of two of the most popular ranking systems, QS and THE, which base 50% and 33% of their rankings, respectively, on reputation surveys.
2) Building on Marginson’s concern, more multi-disciplinary research is needed on rankings in general and the effect of reputation on the rankings in particular.
I still am not sure what comes first, the reputation or the rankings.
- The higher education literature covers the search “reputation and universities and rankings” far better than the information science literature does: only eight articles from Scientometrics combined the three terms (Scopus and Web of Science, 21 June 2017). A search for “citations and universities and rankings” returns just six percent of its results from the higher education literature. Combining “reputation and citations” with “universities and rankings” yielded only 47 articles.
- Bowman, N.A. and Bastedo, M.N. (2011). Anchoring effects in world university rankings: exploring biases in reputational scores. Higher Education, 61, 431–444, accessed at http://bcct.unam.mx/adriana/bibliografia%20parte%201/BCCT%20915.pdf
“Once reputational assessments are formed, they are often quite difficult to change without specific evidence to the contrary.”
- Hazelkorn, E. (2008). Learning to live with league tables and ranking: the experience of institutional leaders. Higher Education Policy, 21, 193–216. doi:10.21427/D7PP7Z, accessed at http://arrow.dit.ie/cgi/viewcontent.cgi?article=1038&context=cserart
“Rankings have placed a new premium on status and elite institutions, reinforcing reputation and vice-versa, with a strong bias towards long established and well-endowed institutions.” p. 28.
- Marginson, S. (2014). University Rankings and Social Science. European Journal of Education, accessed at http://repositorio.minedu.gob.pe/bitstream/handle/123456789/2905/University%20Rankings%20and%20Social%20Science.pdf?sequence=1
“Rankings have an irreducible reputation making role” … “ground that role performance … rather than use comparisons in which reputation drives reputation in a circular effect” p. 56.
- Safon, V. (2013). What do global university rankings really measure? The search for the X factor and the X entity. Scientometrics, 97(2), 223–244.
“…rankings undermine meritocracy, reinforcing the reputation of old universities and thus rewarding past rather than present achievements.” (p. 230). Safon suggests a vicious circle between reputation and ranking.
- Introduction: Unwinding the Web of International Research Rankings
- A Brief History of Rankings and Higher Education Policy
- Bibliometrics: What We Count and How We Count
- The Big Two: Thomson Reuters and Scopus
- Comparing Times Higher Education (THE) and QS Rankings
- Scholarly Rankings from the Asian Perspective
- Asian Institutions Grow in Nature
- Something for Everyone
- Expanding the Measurement of Science: From Citations to Web Visibility to Tweets
- Do-It-Yourself Rankings with InCites
- U S News & World Report Goes Global
- U-Multirank: Is it for “U”?
- A look back before we move forward
- SciVal – Elsevier’s research intelligence – Mastering your metrics
- Analyzing 2015-2016 Updated Rankings and Introducing New Metrics
- The much maligned Journal Impact Factor
- Wikipedia and Google Scholar as Sources for University Rankings – Influence and popularity and open bibliometrics
- Rankings from Down Under – Australia and New Zealand
- Rankings from Down Under Part 2: Drilling Down to Australian and New Zealand Subject Categories
- World Class Universities and the New Flagship University: Reaching for the Rankings or Remodeling for Relevance
- Flagship Universities in Asia: From Bibliometrics to Econometrics and Social Indicators
- Indian University Rankings – The Good the Bad and the Inconsistent
- Are Global Higher Education Rankings Flawed or Misunderstood? A Personal Critique
- Malaysia Higher Education – “Soaring Upward” or Not?
- THE Young University Rankings 2017 – Generational rankings and tips for success
- March Madness –The rankings of U.S universities and their sports
- Reputation, Rankings and Reality: Times Higher Education rolls out 2017 Reputation Rankings
- Japanese Universities: Is the sun setting on Japanese higher education?
- From Bibliometrics to Geopolitics: An Overview of Global Rankings and the Geopolitics of Higher Education edited by Ellen Hazelkorn
- Hong Kong and Singapore: Is Success Sustainable?
- The Business of Rankings – Show me the money
- Authors: People and processes
- Authors: Part 2 – Who are you?
- Come together: May updates lead to an investigation of Collaboration
- Innovation, Automation, and Technology Part 1: From Scholarly Articles to Patents; Innovation, Automation, and Technology Part 2: Innovative Companies and Countries
- How Important are Journal Quality Metrics in the Era of Predatory Journals? Part 1: Journal Citation Metrics; Part 2: How Important are Journal Quality Metrics in the Era of Potential/ possible/ probable predatory publishers and publications?
- Coming Attractions: The UN Sustainable Development Goals and Times Higher Education Innovation and Impact Rankings Demystified
- Business School Rankings: Monkey Business for an Asia/Pac audience
- Deconstructing QS Subjects and Surveys
- THE’s University Impact Rankings and Sustainable Development Goals: Are these the most impactful universities in the world?
- ASEAN – a special analysis of ASEAN nations
- Predatory practices revisited – misunderstandings and positive actions
- Part 1: What’s the best university for an international student? Metrics from the student’s perspective
*Ruth A. Pagell is emeritus faculty librarian at Emory University. After working at Emory, she was the founding librarian of the Li Ka Shing Library at Singapore Management University and then taught as adjunct faculty in the Library and Information Science Program at the University of Hawaii. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS. https://orcid.org/0000-0003-3238-9674