- Over 30% of the top 100 universities in publication output are Asian
- Zhejiang University has the most publications in Asia and third most in the world
- Nanyang Technological University (NTU) has Asia’s highest proportion of publications in the top 10% most cited in their fields
- For the purposes of this summary, Asia excludes the Middle East
(26 May 2017) Ruth’s Rankings 8 introduced the CWTS (Centre for Science and Technology Studies) Leiden ranking. Not as well-known as ARWU, THE or QS, it uses enhanced data from the articles indexed in the Web of Science core collection, rather than calculating scores. The interface displays three indicators (see Figure 1). P is the size-dependent total number of articles, based on fractional counting. Also displayed are two indicators of scientific impact, “P(top 10%) and PP(top 10%), the number [size-dependent] and the proportion [size-independent] of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited.”
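To make the distinction concrete, here is a minimal illustrative sketch in Python of how the three indicators relate. The publication records are invented, and whether a paper falls in the top 10% most cited of its field and year is taken as given rather than computed.

```python
# Illustrative sketch of the three Leiden indicators, using invented records.
# P           = size-dependent total output, with fractional counting
#               (a paper shared by k universities adds 1/k to each one's total).
# P(top 10%)  = size-dependent count of the university's papers that are among
#               the top 10% most cited in their field and year.
# PP(top 10%) = size-independent proportion: P(top 10%) / P.

# Each record: (fraction of the paper credited to this university,
#               whether the paper is in the top 10% most cited of its field/year)
papers = [
    (1.0, True),   # sole-institution paper, highly cited
    (0.5, True),   # paper shared with one other university, highly cited
    (0.5, False),  # shared paper, not highly cited
    (1.0, False),  # sole-institution paper, not highly cited
]

P = sum(frac for frac, _ in papers)                        # 3.0
P_top10 = sum(frac for frac, is_top in papers if is_top)   # 1.5
PP_top10 = P_top10 / P                                     # 0.5

print(f"P = {P}, P(top 10%) = {P_top10}, PP(top 10%) = {PP_top10:.0%}")
```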
Apart from the addition of 60 universities, 40% of them from China, and a base publication dataset of 2012-2015, the 2017 Leiden ranking interface and output look similar to the 2016 update. Nine of the top 10 global and Asian leaders in publication output remain the same as in 2016, with a slight change in the order. Institutions with a minimum of 1,000 articles in the Web of Science core collection, using fractional counting, are included.
Table 1 lists the world’s top 10 in publication output for 2017, with their 2016 and 2011-2012 rankings and their quality rankings for number of top 10% publications (P(top 10%)) and proportion of top 10% publications (PP(top 10%)). Only four universities, Harvard, Johns Hopkins, University of Toronto and University of Tokyo, have been in the top ten for output in all three of the selected years. All the top ten in number and in proportion of top 10% publications are from English-language Western countries, although only four are on both lists.
[pdf-embedder url="https://librarylearningspace.com/wp-content/uploads/2017/05/Table-1-Leiden-World-Output.pdf"]
Table 2 compares the Asian rankings for 2017 to those for 2011-2012. Seven of the top ten are the same. It also drills down into the discrepancies between universities’ publication output and their citation impact as measured by top 10% papers in their fields. Eight of the top ten in publications are also in the top ten for P(top 10%), but only two are in the top ten for PP(top 10%).
Summary of 2017 Asian universities in the top 100 in the world:
- 31 universities with 19 from mainland China, four each from Japan and South Korea, two from Singapore and one from Taiwan
- 17 in the top 100 for number of top 10% publications, P(top 10%), with 12 from mainland China, two each from Singapore and Japan and one from South Korea
- Five in the top 100 for proportion of top 10% publications, PP(top 10%): two each from Singapore and Hong Kong and one from China
Table 2 includes more summary information.
Table 3 tracks changes in country performance, based on the number of universities included in Leiden, relative to the addition of 80% more universities. It is another illustration of Asia’s rise in ranked universities at the expense of North America and Europe.
Leiden also provides rankings by broad subject field. The top 15 universities in output in physical sciences and engineering are Asian, including 12 from China. Only NTU-Sg is in the top 50 for proportion of top 10% publications in that field.
CWTS Leiden continues to make its entire dataset available for download as an Excel spreadsheet: http://www.leidenranking.com/downloads
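For readers who want to explore the download, the sketch below shows one way to load and sort the spreadsheet with pandas. The file name and column names are assumptions for illustration and should be adjusted to match the actual workbook.

```python
import pandas as pd

# Load the locally saved workbook (requires pandas plus an Excel engine
# such as openpyxl). The file name here is a hypothetical placeholder.
df = pd.read_excel("CWTS_Leiden_Ranking_2017.xlsx")

# Rank universities by total output P and show the two impact indicators.
# The column names below are assumptions; check the workbook and adjust.
cols = ["University", "Country", "P", "P_top10", "PP_top10"]
top_output = df.sort_values("P", ascending=False)[cols].head(10)
print(top_output.to_string(index=False))
```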
PRINCIPLES for the responsible use of university rankings
Ruth’s Rankings 25, on young universities, highlighted 14 common mistakes made by young universities, including being “obsessed with the rankings”. Leiden has published a blog post to accompany its new rankings, setting out ten principles for the responsible use of rankings. The principles are categorized under design, interpretation and use. Many have been mentioned in other articles. Below are summaries of the principles.
Design of university rankings
1. A general concept of university performance should not be used. RAP [Ruth Pagell]: This questions the use of composite indicators and of broad-based rankings such as THE or QS.
2. A clear distinction should be made between size-dependent and size-independent indicators of university performance. RAP: Example 1: Look at Zhejiang in Table 2. It is third in the world in total output, 23rd in number of top 10% publications and 383rd in proportion of top 10% publications. Example 2: 90% of ARWU metrics are size-dependent. (A small numeric sketch after this list illustrates the distinction.)
3. Universities should be defined in a consistent way. There are problems from the data sources themselves, for example in how they handle university hospitals. RAP: Elsevier assigns an identifier to Fudan University and separate identifiers to Fudan’s medical facilities; WOS does not.
4. University rankings should be sufficiently transparent. The proprietary nature of the underlying data is one roadblock to transparency, as are the analytics rankers use to calculate scores.
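To see how the size-dependent and size-independent views can diverge (as in the Zhejiang example in principle 2), here is a small sketch with invented numbers:

```python
# Hypothetical figures illustrating why size-dependent and size-independent
# indicators can tell opposite stories (both universities are invented).
universities = {
    "Big University":   {"P": 10_000, "P_top10": 800},   # huge output
    "Small University": {"P": 1_000,  "P_top10": 150},   # modest output
}

for name, u in universities.items():
    pp_top10 = u["P_top10"] / u["P"]                      # size-independent
    print(f"{name}: P={u['P']}, P(top 10%)={u['P_top10']}, "
          f"PP(top 10%)={pp_top10:.0%}")

# Big University wins on both size-dependent counts (P and P(top 10%)),
# but Small University has the higher proportion of top 10% papers (15% vs 8%).
```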
Interpretation of university rankings
5. Comparisons between universities should be made keeping in mind the differences between universities; they have different missions based on country and subject focus.
6. Uncertainty in university rankings should be acknowledged. RAP: One point that I have emphasized is that minor fluctuations should be ignored.
7. An exclusive focus on the ranks of universities in a university ranking should be avoided; the values of the underlying indicators should be taken into account. RAP: This means looking at the scores as well as the ranks. Also, a university’s rankings may drop because of additional universities or sources added to the dataset, even if its performance has remained the same or even improved.
Use of university rankings
8. Dimensions of university performance not covered by university rankings should not be overlooked. RAP: Leiden focuses ONLY on scientific performance through articles and citations from the Web of Science. It does not even cover conference proceedings or books.
9. Performance criteria relevant at the university level should not automatically be assumed to have the same relevance at the department or research group level.
10. University rankings should be handled cautiously but they should not be dismissed as being completely useless.
CONCLUSION:
Despite all the caveats and concerns in the literature, the commercial rankers, especially THE and QS, continue to issue new variations and add new universities to their datasets.
RESOURCES:
Salmi, Jamil (6 April 2017). 14 common errors when you set up to create a world-class university. Global View of Tertiary Education, accessed 13 April 2017 at http://tertiaryeducation.org/.
Waltman, L. et al. (2012). The Leiden Ranking 2011/2012: Data Collection, Indicators, and Interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419-2432. Available at https://arxiv.org/ftp/arxiv/papers/1202/1202.3941.pdf
Waltman, L., Wouters, P. and van Eck, N.J. (17 May 2017). Ten principles for the responsible use of university rankings. CWTS blog, accessed 18 May 2017 at https://www.cwts.nl/blog?article=n-r2q274
A list of Ruth’s Rankings and News Updates is here.
Ruth’s Rankings News Flash! is written by Ruth A. Pagell, currently an adjunct faculty member teaching in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS. ORCID: orcid.org/0000-0003-3238-9674.