Ruth’s Rankings 8: Something for Everyone

By Ruth A. Pagell*

(27 February 2015) Question:  What do the University of Tokyo, Kyoto University of Education, Tsinghua University, Universiti Kebangsaan Malaysia (National University of Malaysia) and Chongqing University of Arts and Sciences all have in common?

Answer:  Using different sources and different metrics, each is number one in higher education, depending on the metric. Find the specific metrics in the article.

The two rankings we feature this month come from the Spanish SCImago Lab and Leiden University’s Center for Science and Technology Studies (CWTS). They manipulate the bibliometric data from Scopus and Web of Science, respectively, to account for issues such as size, field and collaboration. By using Scopus data, the SCImago rankings include many more institutions.

This article should appeal to those interested in the math behind the rankings and to those whose institutions have not yet appeared in any rankings. Since much of the work from both institutions is mathematical, we will illustrate how these different methods affect the rankings and then provide links to more technical information for those who are interested. These rankings receive little attention in the international press. Check out Ruth’s Rankings 3 and 4 for background details (Bibliometrics and Data Sources).

Even before rankers applied bibliometrics to university rankings, articles appeared raising issues about the validity of the journal impact factor, the variations in citedness among disciplines and the impact of language. With the growing interest in bibliometrics and rankings, the number of articles has been increasing along with the creation of new metrics. The journal Scientometrics publishes the most articles. Italy introduced performance-based funding in 2009. Its National Research Council (CNR, Consiglio Nazionale delle Ricerche) is now another major player in research articles on bibliometrics, but it has not yet joined the global marketing game. (See Readings for a list of articles.)


SCImago’s Institutions Rankings (SIR) uses metrics from Elsevier’s Scopus. It ranks over 4,800 organizations, categorized as higher education (universities), government, health (medical), private (companies) and others. Rankings are worldwide, by region and by country. Measures include output, percent of international collaboration, normalized citations and the key ranking, Excellence Rate, which is a measure of the percent of highly cited papers. SCImago states that SIR reports are not league tables and that the goal is to provide policy makers and research managers with a tool to evaluate and improve their research results. It is important to keep this in mind when looking at the rankings.

Prior to 2014, SCImago provided a PDF document compiling a variety of metrics with global, regional and country ranks. There was no way to interact with the results online. The 2014 rankings present every metric separately and also include the rankings from 2009 to date.

SIR Methodology

Some 2,700 of the institutions in SIR are categorized as higher education, more than three times the number of institutions in any of the rankings covered so far, except Nature. This results in many unfamiliar names.

SIR uses nine research metrics. To account for size, most of the data are calculated as percentages, with the highest percent receiving a score of 100. No underlying data are provided online. According to an article by L. Bornmann and F. de Moya-Anegón (2014), Excellence Rate “is the most important indicator for measuring an institution’s impact”. See Table 8.1 SCImago Methodology. To find the scores and ranks for any institution, for the years it has been included, click on the institution name. (See Figure 8.1 SIR Institutional Profile showing rank and score over time.)
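The percentage-to-score scaling described above can be sketched in a few lines of Python. SIR's exact formula is not given in this article, so this is an illustrative assumption about how such scaling typically works, with invented percentages:

```python
# Sketch of the 0-100 scaling described above: the institution with the
# highest percentage on a metric receives a score of 100, and the others
# are scaled proportionally to it. (Illustrative assumption, not SIR's
# published formula; the input percentages are invented.)

def scale_to_100(values):
    """Rescale a list of percentages so the maximum becomes 100."""
    top = max(values)
    return [round(100 * v / top, 1) for v in values]

scores = scale_to_100([48.0, 24.0, 12.0])  # [100.0, 50.0, 25.0]
```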

SCImago Global includes separate rankings on web visibility, which we will examine in a later article, and on innovation. SCImago also publishes the IBER report for Spanish- and Portuguese-language countries in PDF format.

EXAMPLE: Excellence Rate is the percent of an institution’s scientific output in the set of the top 10% most cited papers in their fields. For example, in a given year, 100 articles are affiliated with a little-known institution A and 30 of them, also affiliated with a leading research institution B, are highly cited. Institution A’s ratio is 30%. Institution B produced 5,000 articles, of which 800 were highly cited, giving it a ratio of 16%. Institution A will have a much higher Excellence Rate. Which is the better institution? We will revisit excellence after we present Leiden’s “excellence” metric.
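The arithmetic in the example above can be worked out in a few lines of Python; the institution labels and figures are the hypothetical ones from the example, not real data:

```python
# Worked version of the Excellence Rate example above, using the
# hypothetical figures for institutions A and B.

def excellence_rate(highly_cited, total_output):
    """Percent of an institution's output in the top 10% most cited papers."""
    return 100 * highly_cited / total_output

rate_a = excellence_rate(30, 100)    # small institution A -> 30.0
rate_b = excellence_rate(800, 5000)  # large institution B -> 16.0
```

The small institution comes out well ahead on this ratio even though the large one produced far more highly cited papers in absolute terms, which is exactly the distortion the example illustrates.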

See Tables 8.2a and 8.2b for illustrations of results from the different SIR rankings in higher education. While this is not a critique of any rankings, it is important to observe some of the unexpected results in these tables.


Leiden University’s Center for Science and Technology Studies (CWTS) uses bibliometric indicators from Thomson Reuters to measure the scientific research output of 750 major universities. It bases its rankings on scholarly performance output measures, using no reputational data or data collected from the universities themselves. The researchers modify the existing data to create normalized scores and continue to experiment with new measures. They focus on citation impact and scientific collaboration and correct for differences among scientific fields in impact and collaboration. An April 30, 2014 press release explains this and the caveats in more detail.

Leiden Methodology

There is no overall ranking. Rankings are based on number of publications, citations and publications in the top 10%, which can then be normalized by size and/or field. The default ranking is based on the proportion of top 10% publications normalized by size and field. Users can choose their own rankings online or download the entire dataset. See Table 8.3 Leiden Methodology for definitions and see Readings for more in-depth information. By manipulating metrics and filters, Universiti Kebangsaan Malaysia is number one in Malaysia on the metric MNCS (mean normalized citation score). Tsinghua is number one in the world in number of publications in math, computer science and engineering; all of the top five in that field are Chinese institutions. Leiden also has several collaboration rankings, based on publications co-authored with other institutions and other countries.
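The idea behind field normalization in a metric like MNCS can be sketched roughly as follows. Each paper's citation count is divided by the average citations of papers in the same field and year, and the results are averaged per university, so that 1.0 means world average. The citation counts and field baselines below are invented for illustration; this is not Leiden's actual computation or data:

```python
# Rough sketch of a mean normalized citation score: each paper's citations
# are divided by the expected (field/year average) citations for papers
# like it, then averaged across the university's papers. All numbers
# below are invented for illustration.

def mncs(papers):
    """papers: list of (citations, field_baseline_citations) pairs."""
    return sum(cites / baseline for cites, baseline in papers) / len(papers)

# Three hypothetical papers: actual citations vs. field/year average
papers = [(12, 6.0), (3, 6.0), (10, 5.0)]
score = mncs(papers)  # (2.0 + 0.5 + 2.0) / 3 = 1.5; 1.0 = world average
```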


Total output for higher education institutions

The only directly comparable metric in these two rankings is total output, a size-dependent indicator that ranks on the base number of publications from which the other metrics are calculated.

The U.S. does not dominate the top 10 or 20 higher education total output lists for SIR, although it continues to dominate in the Leiden rankings, as shown in Figure 8.2 and Table 8.4, which compare world output rankings based on 2013 data, the last year for which SIR values are available.

Table 8.5 compares the number of higher education institutions in East and South Asia and the top institution in each country.

Asian Rankings on Excellence

While there is no overall weighted ranking, both SIR and Leiden identify what they consider to be their key excellence indicator, the proportion of a university’s papers in the top 10% of cited papers in a field: Excellence Rate for SIR and PP(top 10%) for Leiden.

Using Leiden’s “excellence” metric, normalized papers in the top 10% (PP(top 10%)), Nanyang Technological University in Singapore ranks highest in Asia, at 98 worldwide, and is also the highest-placed of the top Asian universities in SIR, at 138.

Table 8.6 illustrates the differences in excellence rankings among three methodologies: SIR’s Excellence Rate and Leiden’s top 10% papers, based on total number and with advanced filters for field and fractionalization. I struggled with the differences in the top rankings for SIR, where the methodology allows institutions with less than 1% of the output of universities recognized as leaders in other rankings to rise to the top.


SIR uses Scopus for everything in its rankings, while Leiden uses Web of Science. Ruth’s Rankings 4 discusses the differences between the two data sources. SIR is more inclusive, containing six times more institutions overall. In the excitement of having a benchmarking cohort for your institution, do not forget the importance of understanding and sometimes questioning the metrics, as shown in the SIR excellence results, where Harvard is ranked 156 for Excellence and Kyoto University of Education is ranked number 1. As I discovered in reviewing Thomson Reuters InCites (which I will introduce in a later article), smaller institutions can distort the rankings (InCites’ Benchmarking and Analytics Capabilities (2015), Online Searcher, Vol. 39, No. 1).

Who is number 1?

University of Tokyo – number one for output in Asia (both)

Kyoto University of Education – number one in Asia for “Excellence” (SIR)

Tsinghua University – number one in the world for Math, Computer Science and Engineering specialization (Leiden)

Universiti Kebangsaan Malaysia – number one in Malaysia for Mean Normalized Citation Score (Leiden)

Chongqing University of Arts and Sciences – number one in Asia for Normalized Impact (SIR)

Thanks to Ludo Waltman from CWTS for answering all of my questions and providing background reading. 

Ruth’s Rankings

  1. Introduction: Unwinding the Web of International Research Rankings
  2. A Brief History of Rankings and Higher Education Policy
  3. Bibliometrics: What We Count and How We Count
  4. The Big Two: Thomson Reuters and Scopus
  5. Comparing Times Higher Education (THE) and QS Rankings
  6. Scholarly Rankings from the Asian Perspective 
  7. Asian Institutions Grow in Nature
  8. Something for Everyone
  9. Expanding the Measurement of Science: From Citations to Web Visibility to Tweets
  10. Do-It-Yourself Rankings with InCites 
  11. U.S. News & World Report Goes Global
  12. U-Multirank: Is it for “U”?
  13. A Look Back Before We Move Forward
  14. SciVal – Elsevier’s research intelligence –  Mastering your metrics
  15. Analyzing 2015-2016 Updated Rankings and Introducing New Metrics
  16. The much maligned Journal Impact Factor
  17. Wikipedia and Google Scholar as Sources for University Rankings – Influence and popularity and open bibliometrics
  18. Rankings from Down Under – Australia and New Zealand
  19. Rankings from Down Under Part 2: Drilling Down to Australian and New Zealand Subject Categories
  20. World Class Universities and the New Flagship University: Reaching for the Rankings or Remodeling for Relevance
  21. Flagship Universities in Asia: From Bibliometrics to Econometrics and Social Indicators
  22. Indian University Rankings – The Good the Bad and the Inconsistent
  23. Are Global Higher Education Rankings Flawed or Misunderstood?  A Personal Critique
  24. Malaysia Higher Education – “Soaring Upward” or Not?
  25. THE Young University Rankings 2017 – Generational rankings and tips for success
  26. March Madness – The rankings of U.S. universities and their sports
  27. Reputation, Rankings and Reality: Times Higher Education rolls out 2017 Reputation Rankings

*Ruth A. Pagell is currently an adjunct faculty member teaching in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS.