By Ruth A. Pagell*
(25 January 2017) Webometrics’ Ranking Web of Universities includes over 26,000 institutions, with over 3,200 in the United States alone. How likely is it for new universities, universities in developing countries or your alma mater to reach the top 1,000, let alone the elite top 200? Ruth’s Rankings has focused on describing the strengths and weaknesses of different global ranking systems, and each article ends with a caveat about understanding the metrics and the methodology. This article is a reaction to the recent report from the Higher Education Policy Institute of the UK (HEPI), with a brief look at the difference between local and global rankings.
A December article in University World News (O’Malley, 15 Dec 2016) summarized a report from HEPI, “International university rankings: For good or ill?” Less than a month later in the same publication, Altbach and Hazelkorn (8 January 2017), two researchers who have been tracking higher education and rankings for over a decade, published “Why most universities should quit the rankings game”, which to me is more valuable than the HEPI report.
HEPI’s 32-page document focuses on the flaws in commercial higher education rankings. It presents its findings as if they were something new, and it includes NO bibliography. I would be disappointed if any of this were a surprise to regular readers of scholarly journals such as Scientometrics or of Ruth’s Rankings. Nor would it surprise anyone who uses global economic or demographic data.
The HEPI report identifies problems in several categories, including:
- Impact on decisions
- Data quality and definitions
- Data presentation
Impact on decisions is the responsibility not of the rankings or the rankers but of the people analyzing and using them. The Altbach and Hazelkorn article is an excellent and concise summary of the misuse of rankings by many higher education institutions.
Some of HEPI’s concerns are that the non-bibliometric indicators come either from surveys or from the institutions’ own data, and that the bibliometric data are all about research. It highlights the weight of reputation surveys in rankings such as QS and THE and suggests that they be eliminated. In our most recent article, Ruth’s Rankings 22, we pointed out the effect of reputation surveys and internationalization metrics on the rankings of Indian universities.
The report uses global rankings from QS, THE, ARWU and U-Multirank. U-Multirank is HEPI’s example of what rankings should be, since it includes teaching as well as research, though the report adds a disclaimer that U-Multirank too is flawed. From my perspective, it is the most flawed of them all! For example, its top teaching institutions in the field of chemistry are from Russia and Ukraine, and under business studies only two U.S. universities are listed.
The HEPI report acknowledges consistency in the research data but does not address the bibliometric concerns about size, language and subject. It does not mention Snowball Metrics, a joint effort between Elsevier and universities in the United Kingdom. Snowball Metrics uses many existing definitions from the UK’s Higher Education Statistics Agency (HESA), which also provides standardized subject definitions. HEPI may also be unaware of the international standard data definitions in the OECD glossary. Even with standard definitions, differences among education systems make international comparisons difficult.
The global rankings do emphasize research and reputation. Undergraduate students should be looking at local rankings, which use different metrics from the global rankings. However, these rankings often rely on student surveys and institutionally supplied data, and from my experience working with students, their idea of good teaching has more to do with easy grading and amusing lectures than with content and knowledge.
Since HEPI’s is a UK report, I compared league tables from the Guardian and the Complete University Guide with the UK rankings from QS, THE and ARWU, creating a list of the top ten from each for a total of 23 institutions. The rankings for the top universities were not as disparate as I had expected: eight of the top ten were the same on both local lists, and four were in the top ten of at least one global list. Only two of the top ten from one of the local lists were not in the top 25 of at least one of the global lists. See Table 23.1 for comparisons, indicators and analysis.
I used U.S. News & World Report to compare U.S. national and global rankings. It introduced its U.S. rankings over 30 years ago and entered the global rankings arena in 2015. Table 23.2 compares the U.S. News National University rankings with its global rankings, not only for the top ten schools but also for a middle group of universities. There is consistency among the top schools but less among the middle-ranked schools. Only four middle universities rank higher in the world than in the U.S.; many rank relatively lower within the U.S. under the global rankings than under the national rankings. Most of the data behind U.S. News Best [U.S.] Colleges is available from public sources. The only indicator common to the two sets of rankings is a reputation survey, and the only free data in the U.S. rankings are tuition figures. More information on methodology is included in Table 23.2.
The HEPI report is disappointing. However, it is a good reminder of the issues surrounding the rankings, such as the lack of attention to methodology. Altbach and Hazelkorn advise universities, especially midrange, national, regional and specialist institutions, to quit the rankings game.
Following is my advice to rankings stakeholders.
Advice to rankers:
- Include at least scores and ideally underlying data in addition to the ranked list
- Make the methodology easier to find on the web page
- Enable downloading of data so that decision makers can manipulate it, create their own weightings and make the rankings more relevant to their environment
- Work with organizations such as OECD and Snowball to standardize definitions and provide a glossary for institutions to use when providing data
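The advice about downloadable data can be made concrete with a small sketch. The Python below uses invented universities and indicator scores (all names and numbers are hypothetical, not drawn from any real ranking) to show why access to the underlying scores matters: with raw indicators in hand, different weighting schemes produce different orderings.

```python
# Hypothetical sketch: why downloadable indicator scores matter.
# All university names and scores below are invented for illustration.

indicators = {  # (teaching, research, citations), each on a 0-100 scale
    "Univ A": (80.0, 60.0, 70.0),
    "Univ B": (60.0, 90.0, 65.0),
    "Univ C": (70.0, 70.0, 85.0),
}

def ranking(weights):
    """Order universities by a weighted composite of their indicator scores."""
    composite = {u: sum(w * s for w, s in zip(weights, scores))
                 for u, scores in indicators.items()}
    return sorted(composite, key=composite.get, reverse=True)

# A teaching-heavy weighting and a research-heavy weighting
# rank the same three institutions differently.
print(ranking((0.6, 0.2, 0.2)))  # teaching favored: Univ A comes first
print(ranking((0.2, 0.6, 0.2)))  # research favored: Univ B comes first
```

A decision maker with access to the raw scores could choose weights that reflect local priorities, which is impossible when only the final ordinal list is published.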
Advice to decision makers:
- An expensive solution is to work with Clarivate (WOS, InCites) or Elsevier (Scopus, SciVal) to purchase raw data and create your own rankings
- Look beyond the ordinal ranking. We have mentioned many times that a rank can go down but a score can go up
- Adopt other models, such as the flagship model, to continue striving for quality but not necessarily on the global stage
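The point that a rank can fall while a score rises is simple arithmetic. The sketch below uses invented scores (all figures hypothetical) to show a university whose composite score improves year over year while its rank drops, because its competitors improve faster.

```python
# Hypothetical illustration of "rank down, score up".
# All scores are invented: Univ A's composite score improves
# from 62.0 to 64.0, yet its ordinal rank drops from 1st to 3rd
# because Univ B and Univ C improve faster.

def rank_of(name, scores):
    """1-based ordinal rank of `name` when scores are sorted descending."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(name) + 1

year1 = {"Univ A": 62.0, "Univ B": 60.0, "Univ C": 58.0}
year2 = {"Univ A": 64.0, "Univ B": 67.0, "Univ C": 66.0}

print(rank_of("Univ A", year1))  # 1
print(rank_of("Univ A", year2))  # 3
```

Looking only at the ordinal list, Univ A appears to be declining; looking at the scores, it is improving.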
Advice to students and information professionals serving students:
- Undergraduates should use local or regional rankings where available. THE, for example, publishes a separate U.S. college ranking with different variables, and the rankers apply at least different weights in their regional rankings.
- If users know their field of interest, drill down even further to subject areas
- Check tuition costs, scholarship opportunities and visa requirements, which may be more important than any rankings
Advice to researchers:
Since rankings are here to stay, and since collaboration is an important bibliometric indicator today, more collaboration between researchers in scientometrics and in higher education policy might lead to more relevant metrics for all users.
It is widely acknowledged that the metrics used in rankings are flawed, and the rankers keep tweaking their methodologies. But it is not the fault of the metrics if they are misused or misinterpreted.
Altbach, P. G. and Hazelkorn, E. (8 Jan 2017). Why most universities should quit the rankings game. University World News, (442) accessed 10 Jan 2017 at http://www.universityworldnews.com/article.php?story=20170105122700949
Bekhradnia, B. (December 2016). International university rankings: For good or ill? HEPI Report 89 accessed 10 January 2017 at http://www.hepi.ac.uk/2016/12/15/3734/
Colledge, L. (June 2014). Snowball Metrics Recipe Book, 2nd ed. accessed 30 August 2015 at http://www.snowballmetrics.com/wp-content/uploads/snowball-metrics-recipe-book-upd.pdf
O’Malley, B. (15 Dec 2016). ‘Global university rankings data are flawed’ – HEPI. University World News, (441) accessed 10 January 2017 at http://www.universityworldnews.com/article.php?story=20161215001420225
OECD. Glossary of Statistical Terms accessed 13 January 2017 at http://stats.oecd.org/glossary/
Ruth’s Rankings series:
- Introduction: Unwinding the Web of International Research Rankings
- A Brief History of Rankings and Higher Education Policy
- Bibliometrics: What We Count and How We Count
- The Big Two: Thomson Reuters and Scopus
- Comparing Times Higher Education (THE) and QS Rankings
- Scholarly Rankings from the Asian Perspective
- Asian Institutions Grow in Nature
- Something for Everyone
- Expanding the Measurement of Science: From Citations to Web Visibility to Tweets
- Do-It-Yourself Rankings with InCites
- U.S. News & World Report Goes Global
- U-Multirank: Is it for “U”?
- A look back before we move forward
- SciVal – Elsevier’s research intelligence – Mastering your metrics
- Analyzing 2015-2016 Updated Rankings and Introducing New Metrics
- The much maligned Journal Impact Factor
- Wikipedia and Google Scholar as Sources for University Rankings – Influence and popularity and open bibliometrics
- Rankings from Down Under – Australia and New Zealand
- Rankings from Down Under Part 2: Drilling Down to Australian and New Zealand Subject Categories
- World Class Universities and the New Flagship University: Reaching for the Rankings or Remodeling for Relevance
- Flagship Universities in Asia: From Bibliometrics to Econometrics and Social Indicators
- Indian University Rankings – The Good the Bad and the Inconsistent
- Are Global Higher Education Rankings Flawed or Misunderstood? A Personal Critique
- Malaysia Higher Education – “Soaring Upward” or Not?
- THE Young University Rankings 2017 – Generational rankings and tips for success
- March Madness – The rankings of U.S. universities and their sports
- Reputation, Rankings and Reality: Times Higher Education rolls out 2017 Reputation Rankings
*Ruth A. Pagell is currently adjunct faculty teaching in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS – orcid.org/0000-0003-3238-9674.