By Ruth A. Pagell*
(11 June 2015) The existing rankings that we examined in articles 5 to 11 are not designed for the mass market of undergraduate students. Even THE and QS, which include a broader range of metrics, and Webometrics and Altmetrics, which focus on alternative measures, are still primarily tools for policy makers and researchers at all levels of higher education. (See the list at the end of this article for individual ranking articles).
This article examines U-Multirank in a similar manner to the other rankings, and you can be the judge of whether it is right for you and your users. We will present overviews of all the modules, but for comparisons we will focus on its research components.
BACKGROUND
We introduced U-Multirank in Ruth’s Rankings 2 in our discussion of policy. Concerned with the proliferation of global university research rankings, the OECD commissioned a feasibility study, the Assessment of Higher Education Learning Outcomes (AHELO), in 2008. The European Commission’s Directorate General for Education and Culture followed up with a tender to “look into the feasibility of making a multi-dimensional ranking of universities in Europe, and possibly the rest of the world too” (European Commission, 2008). U-Multirank, which launched in 2014, is the outcome of that tender (van Vught & Ziegele, 2011).
Three organizations are the primary contributors to U-Multirank: CHE, the Centre for Higher Education (Germany), which produces national rankings; CHEPS, the Centre for Higher Education Policy Studies (Netherlands); and CWTS, the Centre for Science and Technology Studies at Leiden University (Netherlands), the provider of the Leiden Rankings. All three work together under the EU-funded Consortium for Higher Education and Research Performance Assessment (CHERPA).
The underlying principles for establishing U-Multirank are:
Multi-dimensional – covers not only research but also teaching & learning, innovation, internationalization and regional outreach.
Independent – separate from public authorities and individual universities.
Transparent – provides users with a clear understanding of all the indicators and allows customization based on needs.
Global – includes universities not only in Europe but worldwide, especially the US, Asia and Australia.
Describing InCites in Ruth’s Rankings 10 was a challenge because of the complexity and interrelationships of the metrics. I assumed that U-Multirank, given its principle of transparency and its general target audience, would provide simplified indicators, scoring and presentations. Reading the definitions of the indicators behind U-Multirank’s research scores, trying to navigate efficiently and figuring out how to decode the scoring and graphics make InCites look easy!
ORGANIZATION AND METHODOLOGY
U-Multirank’s dataset includes over 1,200 institutions from 83 countries. Bibliometric data are provided by CWTS; the rest of the data come from institutional and student surveys. All the universities in the Leiden rankings are part of the dataset. About 600 institutions submitted questionnaires, of which about 200 were from the Leiden group. I emailed U-Multirank to ask how the others were selected and received no answer to that question.
Figure 12.1 shows the distribution among continents for U-Multirank compared to a variety of other rankings. European institutions account for the largest share in all the rankings. U-Multirank is the only ranking where they make up more than half the dataset and the only one where universities from the Americas make up less than 20% of the dataset.
U-Multirank has four different interface options and five different dimensions, each with multiple metrics. The interface options on the home page include:
For Students – what to study;
Compare – comparing like with like or benchmarking a known university;
At a Glance – institutional level metrics for a known university;
Readymade – rankings for teaching & learning, research & research linkages, knowledge transfer, international orientation and regional engagement.
Which indicators you see depends on the interaction of the interface options and the dimensions.
U-Multirank Data
Most of the metrics in U-Multirank are not bibliometrics. Institutions voluntarily supply the data for the other metrics, and many universities have chosen not to participate. There is also a student survey that feeds into the field-based results. Some institutional-level data come from the IAU (International Association of Universities) dataset.
Research metrics are from Web of Science (WOS) and PATSTAT and are provided by CWTS, producer of the Leiden rankings, and the International Centre for Research on Entrepreneurship, Technology & Innovation Management at the Catholic University of Leuven, Belgium. Many of the bibliometrics are size-normalized, which means they are based on percentages or ratios, so smaller universities may rise to the top.
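To make the effect of size normalization concrete, here is a minimal sketch using invented figures (the institutions, staff counts and publication counts below are hypothetical, not U-Multirank data). It contrasts a ranking by absolute publication count with one by publications per academic staff member.

```python
# Hypothetical institutions: (name, publications, academic staff).
# These figures are invented for illustration; they are not U-Multirank data.
institutions = [
    ("Large Research U", 12000, 4000),
    ("Mid-Size U", 3000, 800),
    ("Small Specialist U", 900, 150),
]

# Absolute ranking: biggest producer first.
by_absolute = sorted(institutions, key=lambda row: row[1], reverse=True)

# Size-normalized ranking: publications per academic staff member first.
by_normalized = sorted(institutions, key=lambda row: row[1] / row[2], reverse=True)

print("Absolute:  ", [name for name, _, _ in by_absolute])
print("Normalized:", [(name, round(pubs / staff, 2)) for name, pubs, staff in by_normalized])
# The small specialist university (6.0 publications per staff member) leads the
# normalized list even though it produces the fewest publications in absolute terms.
```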
Table 12.1 U-Multirank Methodology provides an overview of the indicators and options. For an in-depth explanation of each indicator, check the Indicator Book, which is arranged by institutional rankings and field-based rankings. Each Book entry includes the definition, rationale, source, time period and, for calculated indicators, the formula.
U-Multirank uses the same five dimensions at an institutional level and for seven different subjects and eight broad fields of study. The individual indicators that appear in the default tables differ according to choice of option. Initial sorting is done on the first indicator. See Table 12.2 for a list of metrics by dimension for the four options.
U-Multirank Options
The two modules that are of most interest to us are At a Glance and Readymade. The former has a set of institutional-level metrics for each university and shows where the university ranks relative to the dataset. The latter is U-Multirank’s attempt at presenting league tables. However, there are aspects of For Students and Compare that interact with Readymade.
For Students includes two approaches: one is by seven selected subject areas, and the second is Compare, which compares institutions as a whole or on one of eight broad fields of study. Both emphasize metrics in teaching & learning. For more information about For Students, see Example 12.1.
What is important to remember about For Students? The number of universities is limited, and the results are not based on the quality of the programs.
Compare has two approaches – choosing one of the seven selected subjects or selecting a known university. See Example 12.2 for more information about Compare.
What is important to know about Compare? The structure and limitations of this option. If you choose to customize Readymade, it defaults to Compare.
Readymade for Research & Research Linkages compares 1,069 PhD-granting institutions. Although eight research indicators are available in total, Research & Research Linkages uses only four and adds three joint-publication indicators. See Table 12.3 for the Readymade indicators. The initial sort is by citation rate. CWTS, the organization that prepares the data, recommends using Top 10% (email, L. Waltman, 27/5/15).
For each Readymade option, you can change the sort order. You can further personalize your chart by adding or removing indicators, limiting locations and changing institutional demographics (from the COMPARE dimension).
To demonstrate how Readymade works, see Example 12.3, Readymade – Economic Involvement, which uses the Economic Involvement option. Scores are presented graphically and can then be translated into grades.
At a Glance profiles each university included in U-Multirank with an actual score, where data are available. See Example 12.4 for an explanation of what At a Glance presents.
The way to see what is really going on in U-Multirank is first to understand the source and scope of the data, then customize your ranking, and finally go into At a Glance for the scores.
ASIAN RANKINGS
After a lot of playing, I discovered that I could modify the Readymade rankings by limiting to Asia or changing the metrics. Forget the bell-shaped curve. While Asian universities did not fare well on average number of cites per document, just about half received a score of A for publications adjusted for number of students. See Example 12.5, which has a spreadsheet of the top universities in Asia on five bibliometrics and a screen shot of the top ten ranked on joint international publications. Our familiar list of top universities appears only under absolute number of publications. Hong Kong has seven of the top 10 on average cites per publication. Singapore’s NUS appears on four of the five lists and NTU on three of the five.
CONCLUSION
We need to keep in mind that U-Multirank is an EU initiative and, despite the goal of a global tool, the current iteration is still very much tailored to the European market. According to a German university administrator, “There the comparison to universities with similar structures in various dimensions is very helpful.”
Its usefulness for the Asian or Americas markets is limited.
If you are interested in global research rankings, then there are many other sources, including Leiden’s own rankings. If you are looking from a student’s perspective or are interested in the international orientation of a university, you can use THE or QS. Granted, these are all limited in their coverage, but there is a level of quality control.
We are still searching for the best approach. When we use absolute numbers, the large well-known universities rise to the top. If we use size-independent data without imposing minimums, then small outliers can rise to the top. If we use only bibliometrics, then we only get the top research universities. If we use institutional questionnaires and student surveys, then we only get data from those who choose to submit the instruments.
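As a rough illustration of the minimums problem described above, the following sketch (again with invented figures, not U-Multirank data) shows how a minimum-output threshold applied before size normalization removes a tiny outlier that would otherwise top the list.

```python
# Hypothetical (name, publications, academic staff); invented for illustration.
institutions = [
    ("Large Research U", 12000, 4000),   # 3.0 publications per staff member
    ("Mid-Size U", 3000, 800),           # 3.75 per staff member
    ("Tiny Institute", 40, 5),           # 8.0 per staff member, but only 40 papers
]

MIN_PUBLICATIONS = 100  # hypothetical minimum-output threshold

def normalized_ranking(data, minimum=0):
    """Rank by publications per staff member, keeping only rows above the minimum."""
    eligible = [row for row in data if row[1] >= minimum]
    return sorted(eligible, key=lambda row: row[1] / row[2], reverse=True)

print([name for name, _, _ in normalized_ranking(institutions)])
# ['Tiny Institute', 'Mid-Size U', 'Large Research U'] – the outlier tops the list
print([name for name, _, _ in normalized_ranking(institutions, MIN_PUBLICATIONS)])
# ['Mid-Size U', 'Large Research U'] – the minimum removes the tiny outlier
```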
An underlying objective of the U-Multirank project is to move away from the one size fits all rankings that are primarily based on research. Most institutions of higher education are not research universities nor should they be. Identifying these institutions beyond national borders for potential students, faculty and funders is more important than yet another research ranking. Based on U-Multirank in iteration two, we are not even close to achieving this goal.
European Commission (11 December 2008) “Ranking Europe’s universities”, EU/Rapid Press Release, Brussels, http://europa.eu/rapid/press-release_IP-08-1942_en.htm
Boulton (2011). University rankings: diversity, excellence and the European initiative. Procedia Social and Behavioral Sciences, 13, 74-82.
van Vught, F. & Ziegele, F. (Eds.) (2011). U-Multirank: Design and testing the feasibility of a multi-dimensional global university ranking. Published by the CHERPA network, 183 pgs. Retrieved 10 May 2015 from http://ec.europa.eu/education/higher-education/doc/multirank_en.pdf
Ruth’s Rankings
A list of Ruth’s Rankings and News Updates is here.
*Ruth A. Pagell is currently an adjunct faculty member [teaching] in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS. ORCID: https://orcid.org/0000-0003-3238-9674