Ruth’s Rankings 13: A Look Back Before We Move Forward

By Ruth A. Pagell*

BREAKING NEWS: SCImago Institutions Rankings research has been removed from public view, including all back files.

“Ranking positions depend more on methodology than performance.” (Schmoch 2015)

(12 August 2015) As it has been a year since our first column, now is the time to consolidate what we have learned before moving forward with new articles that will introduce new concepts and rankings.

Ruth’s Rankings 1 included nine different rankings. Table 13.1 updates those nine rankings and adds three more, for a total of ten organizations producing twelve different global rankings in which at least a portion of the indicators are derived from the bibliometric base of scholarly publications. Refer to Table 13.1 for some of our examples below (URLs for all rankings are in the Appendix following the article).

During the year, output from Asian countries increased in Web of Science and Scopus, as shown in Table 13.2. The largest increase over the past four years has been from China. Since the initial introduction to bibliometrics in Ruth’s Rankings 3, we have added concepts such as normalized and fractionalized counts and size-dependent and size-independent indicators. We know to look at the distances between scores, not just the ranks, and we know to read the methodology.

Rankings are multi-dimensional. They all depend on the selection process for institutions and publications. The results are the interaction among the institutions that are included, the indicators that are used, the sources of the underlying data, the manipulation of the data by the ranking organization and the web interface used to present the data. This article clarifies and summarizes these multi-dimensional aspects of our rankings. Following is a list of elements to consider when using any ranking.


Each ranking has its own criteria. For example, while Times Higher Education (THE) and U.S. News Global use the same underlying data from Thomson Reuters and use similar metrics, they have different criteria for inclusion and different weightings for their scores. Therefore, their top universities differ, and only seven of the top ten are on both lists.

THE, for its 2014-2015 rankings, excludes universities “if they do not teach undergraduates; if they teach only a single narrow subject; or if their research output amounted to fewer than 1,000 articles between 2008 and 2012 (200 a year).” Other exceptions are made for subject fields. This results in 401 ranked world universities.

U.S. News Global Rankings uses a pool of 750 institutions. An institution has to be among the top 200 universities in Thomson Reuters’ global reputation survey, or among those that have published the most articles during the most recent five years, de-duplicated against the top 200 from the reputation survey. U.S. News also includes graduate-only institutions, which are excluded from THE.


Papers may include peer-reviewed articles only; articles and reviews; proceedings; or all publications.

ARWU and National Taiwan University also use Thomson Reuters publications for their base. ARWU uses “Total number of papers indexed in Science Citation Index-Expanded and Social Science Citation Index in 2012. Only publications of ‘Article’ and ‘Proceedings Paper’ types are considered. When calculating the total number of papers of an institution, a special weight of two was introduced for papers indexed in Social Science Citation Index.”

NTU rankings use “Number of Articles” drawn from Essential Science Indicators, an eleven-year range. Only five of the top ten universities are the same in these two rankings.

Institutions may be represented at a system level (U California System), by individual university (UCLA), or by individual publishing unit (University of Massachusetts Medical School). Rankings may include only universities or all institutions that meet publication criteria. Even what constitutes a “university” is not consistent, since some rankers include the high-producing Chinese Academy of Sciences.

Authors are not ranked. Connecting an article to the correct institution is done at the article level, which is often difficult with common names. Both Thomson Reuters and Scopus have methods to disambiguate authors’ names, but both have their limitations.


The methodologies presented for the different rankings in previous articles introduce over forty different indicators used by ranking systems, some of which are derivatives of others.  For example, for the indicator citations, we have total number of citations, citations per number of authors, citations per number of articles, and number of articles cited.

Size is an important contributor to rankings. Larger institutions generally have more output and therefore potentially more citations. Using a total count of publications or citations is referred to as size dependent; calculating articles per faculty or average citations per document is referred to as size independent.
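The distinction can be sketched in a few lines of Python; the institution names and figures below are invented for illustration only.

```python
# Hypothetical data: (publications, citations) per institution.
institutions = {
    "Large University": (12000, 90000),
    "Small Institute": (1500, 15000),
}

for name, (pubs, cites) in institutions.items():
    size_dependent = cites            # total citations: favors big producers
    size_independent = cites / pubs   # citations per paper: size-neutral
    print(f"{name}: {size_dependent} total citations, "
          f"{size_independent:.1f} citations per paper")
```

The large university wins on the size-dependent measure (90,000 vs. 15,000 total citations), while the small institute wins on the size-independent one (10.0 vs. 7.5 citations per paper): the same data produce two different orderings.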

Article/citation count – In the early days of citation counting, only the first author was counted. Today’s scholarly environment encourages collaboration among authors from different institutions and countries. For each article, either each institution and country counts once (full counting), or the credit is divided among the authors, referred to as fractional counting.
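A minimal sketch of the two counting schemes, assuming a single article with invented author affiliations:

```python
# One article by four authors; each entry is an author's institution (invented).
affiliations = ["NUS", "NUS", "NTU", "Tokyo"]

# Full counting: every distinct institution gets credit of 1 for the article.
full = {inst: 1.0 for inst in set(affiliations)}

# Fractional counting: the single article credit is split among the authors,
# and each author's share accrues to his or her institution.
share = 1.0 / len(affiliations)
fractional = {}
for inst in affiliations:
    fractional[inst] = fractional.get(inst, 0.0) + share

print(full)        # each of NUS, NTU, Tokyo credited 1.0
print(fractional)  # NUS 0.5, NTU 0.25, Tokyo 0.25
```

Under full counting the article adds 1 to three different institutions (a total of 3 article credits for one paper); under fractional counting the credits sum to exactly 1, which is why fractional counting is considered fairer to institutions that collaborate less.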

Metrics have different weightings. In addition to their weightings, they may be adjusted for fields of study (important with citations), adjusted by region or adjusted for size. In Leiden and SIR, for example, there is no weighting and each metric is rated on its own.

Quality – The choices range from peer-reviewed articles to all papers, and from all citations to only those that are highly cited (top 1%, 10% or 50%).

Language affects citations. Research shows that non-English language publications on average receive fewer citations than English language publications (Van Leeuwen et al. 2001; Van Raan et al. 2011).

Years of data vary among the rankings, from the current year (minus one), to five years, to as many as eleven years (from Thomson Reuters Essential Science Indicators). Other factors in analyzing dates are update frequency, usually annual, and date of release. The rankings in Table 13.1 were released between August 2014 and June 2015.


Sources for publications range from the thousands of journals in Thomson Reuters and Scopus to Nature’s own publications for the Nature Asia-Pacific Index. Referring to Table 13.1.B, the Nature Asia index has seven of its top ten in common with the NTU and SIR lists.

Sources for citation counts and methods of counting citations are controversial issues that will be discussed in more depth in a future article. For those who cannot wait, read A review of the literature on citation impact indicators (Waltman 2015).

  • Thomson Reuters Journal Citation Reports (JCR) – subscription required;
  • Free citation counts affiliated with Scopus:
            SCImago Journal and Country Rank (SJR);
            Source Normalized Impact per Paper (SNIP) and Impact per Publication (IPP), from CWTS

Webometrics is currently looking at using Google Scholar as a source for citations.


The different rankings can be categorized in a variety of ways. Table 13.1.A groups rankings that include metrics from a variety of sources, including surveys and institutional-level data, while Table 13.1.B includes those with only scholarly metrics. Rankings can also be categorized by whether they produce a composite score; Leiden and SIR, for example, do not. They can be categorized by the type of ranking body. THE, QS and U.S. News are commercial sources; ARWU began as a university initiative and is now associated with a research institute at Shanghai Jiao Tong University, while NTU took over rankings compiled by the Higher Education Evaluation and Accreditation Council of Taiwan, founded by the Ministry of Education and Taiwanese universities. SCImago is an independent research lab; Nature is a publisher using its own publications.

Rank may be one overall rank; rank per category; rank per region or country or rank per subject.

Scoring may be one overall composite score or scores by individual categories. The ranking methodologies describe how different scores are calculated.

Data that contribute to the rankings are rarely provided. Raw data are available from Leiden for free and from InCites for a cost. Article 14 in this series introduces raw data from Elsevier in its SciVal product.

Weightings of indicators differ in each ranking. They may be percentages based on bibliometrics, weightings per indicator, or weightings per indicator category. The value of having the raw data is that it gives an institution the ability to create its own rankings based on its own weightings.
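As a sketch of what do-it-yourself weighting looks like, the snippet below re-ranks two hypothetical institutions under two weighting schemes; all indicator names, scores and weights are invented for illustration.

```python
# Indicator scores (0-100) for two invented institutions.
scores = {
    "University A": {"teaching": 80, "research": 60, "citations": 90},
    "University B": {"teaching": 70, "research": 85, "citations": 75},
}

def composite(inst_scores, weights):
    """Weighted composite score; the weights should sum to 1."""
    return sum(inst_scores[k] * w for k, w in weights.items())

citation_heavy = {"teaching": 0.2, "research": 0.2, "citations": 0.6}
research_heavy = {"teaching": 0.2, "research": 0.6, "citations": 0.2}

for label, weights in [("citation-heavy", citation_heavy),
                       ("research-heavy", research_heavy)]:
    ranked = sorted(scores, key=lambda i: composite(scores[i], weights),
                    reverse=True)
    print(label, ranked)
```

University A leads under the citation-heavy weighting (82 vs. 76) and University B under the research-heavy one (80 vs. 70): the choice of weights, not the underlying data, decides the rank.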


The websites for the rankings vary in their functionality and in the data that they display.

Filters help institutions benchmark by narrowing the dataset by:

  • Region and country
  • Subject areas; limited or broader categories; based on journal or individual paper (subject will be covered in a future article).  Institutions that do not make the top overall may be ranked under a specific subject.
  • Type of institution – only universities or all institutions in the dataset (SIR)
  • Number of years available
  • Number of publications or size of institution; Leiden lets you set a lower limit, but there is no way to set an upper limit to isolate smaller universities

Tools further enable individual customization:

  • Re-rank by individual indicator; THE allows re-ranking by five metrics; U-Multirank has multiple choices, but with missing data; Thomson Reuters has multiple indicators for multiple categories for all institutions in its dataset – for a price
  • Visualization tools such as maps, pie charts and graphs
  • Downloading capabilities – entire dataset (from Leiden), by category or none

To make interpreting rankings even more interesting, the metrics that are used, their sources and their weightings are not static from year to year. For example, Leiden simplified the metrics presented on its website for “impact” in 2015. For both years, the default is size independent with fractional counting, using the number of papers that are in the top 10% of their fields, which yields somewhat different results, as seen in Table 13.1.B.

Example 13.1 below presents Leiden’s number 1 for 2015 for the world and for Asia (with the world ranking for the top Asian university). It illustrates that with size-independent metrics, relatively smaller and newer institutions such as NTU rank higher than NUS, which gets a higher ranking when metrics are size dependent.


If you have read previous articles, you know that my conclusion is always the same. There is no one “best ranking”. You should not panic if your institution’s or country’s ranking drops. For example, in Table 13.1.B Thailand’s overall world rank dropped, but its output continues to increase. The top universities will continue to be top universities. Users of these rankings should select a group of institutions against which to benchmark. By examining the variety of rankings, you may be able to select individual metrics from across the rankings that are meaningful for you.

Those in the Asian region may be interested in attending COLLNET 2015 in Delhi from November 26-28 where many of the papers are devoted to bibliometrics.


Schmoch, U. (2015). The information value of international university rankings: Some methodological remarks. Chapter 10 in Welpe, I. M., Wollersheim, J., Ringelhan, S., & Osterloh, M. (Eds.), Incentives and Performance: Governance of Research Organizations. Springer.

Van Leeuwen, T. N., Moed, H. F., Tijssen, R. J., Visser, M. S., & Van Raan, A. F. (2001). Language biases in the coverage of the Science Citation Index and its consequences for international comparisons of national research performance. Scientometrics, 51(1), 335-346.

Van Raan, A. F., Van Leeuwen, T. N., & Visser, M. S. (2011). Severe language effect in university rankings: Particularly Germany and France are wronged in citation-based rankings. Scientometrics, 88(2), 495-498.

Waltman, L. (2015). A review of the literature on citation impact indicators. Accessed 31 July 2015.

APPENDIX: The Rankings and their URLs

Ruth’s Rankings

  1. Introduction: Unwinding the Web of International Research Rankings
  2. A Brief History of Rankings and Higher Education Policy
  3. Bibliometrics: What We Count and How We Count
  4. The Big Two: Thomson Reuters and Scopus
  5. Comparing Times Higher Education (THE) and QS Rankings
  6. Scholarly Rankings from the Asian Perspective 
  7. Asian Institutions Grow in Nature
  8. Something for Everyone
  9. Expanding the Measurement of Science: From Citations to Web Visibility to Tweets
  10. Do-It-Yourself Rankings with InCites 
  11. U.S. News & World Report Goes Global
  12. U-Multirank: Is it for “U”?
  13. A look back before we move forward
  14. SciVal – Elsevier’s research intelligence –  Mastering your metrics
  15. Analyzing 2015-2016 Updated Rankings and Introducing New Metrics
  16. The much maligned Journal Impact Factor
  17. Wikipedia and Google Scholar as Sources for University Rankings – Influence and popularity and open bibliometrics
  18. Rankings from Down Under – Australia and New Zealand
  19. Rankings from Down Under Part 2: Drilling Down to Australian and New Zealand Subject Categories
  20. World Class Universities and the New Flagship University: Reaching for the Rankings or Remodeling for Relevance
  21. Flagship Universities in Asia: From Bibliometrics to Econometrics and Social Indicators
  22. Indian University Rankings – The Good the Bad and the Inconsistent
  23. Are Global Higher Education Rankings Flawed or Misunderstood?  A Personal Critique
  24. Malaysia Higher Education – “Soaring Upward” or Not?
  25. THE Young University Rankings 2017 – Generational rankings and tips for success
  26. March Madness – The rankings of U.S. universities and their sports
  27. Reputation, Rankings and Reality: Times Higher Education rolls out 2017 Reputation Rankings
  28. Japanese Universities:  Is the sun setting on Japanese higher education?

*Ruth A. Pagell is currently an adjunct faculty member teaching in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS.