By Ruth A. Pagell1
(8 August 2014) We ended the first article by presenting a series of different rankings without explaining them. Before we drill down into them, it is important to view the different rankings in the context of the past scholarly rankings environment and the current higher education policy environment. In the 21st century, national education commissions and funding organizations have incorporated rankings into their policies and funding.
Governments and scholars have been publishing research rankings for over 100 years. The U.S. Bureau of Education published annual reports from 1870 to 1890 that classified institutions. In the first half of the 20th century, rankings were small studies in scholarly publications. In 1925, Raymond Hughes published A Study of Graduate Schools of America on behalf of the American Council on Education (ACE).
The Council is still interested in rankings today. Hughes’ study rated graduate departments at 19 U.S. universities, primarily Ivy League private universities and the major mid-western state universities. Sixteen of those 19 universities still appear on some top-30 lists today.
Studies carried out through the mid-20th century used metrics such as publication and page counts in addition to qualitative peer review. In the 1970s, the American Educational Research Association sponsored research rankings of American professional schools based on the judgment of deans in each field. By the 1960s, Eugene Garfield had recognized that his Science Citation Index could be used as an evaluation tool, and in the 1970s articles using citation metrics began to appear, again covering a limited number of universities and disciplines. These studies were by scholars and for scholars. Twenty-first-century global university rankings, from commercial sources, government agencies and research institutes, have in turn led to 21st-century policy initiatives.
POLICIES AND ACCOUNTABILITY
With the globalization of the higher education industry and the impact of the 2008 recession, policy makers and funding agencies are integrating rankings into policy and decision making. Bibliometric measures are an integral part of the new global rankings because of the growing availability of tools to measure scholarly output. Policies are designed to enhance and evaluate the quality of higher education and to distribute R & D expenditures. The earlier rankings from U.S. News and World Report and individual country-level rankings were designed to support student choice. Policy makers use the metrics from the global rankings as a way to improve a nation’s scientific and technical position on the world stage.
At the country level, governments are interested in creating and evaluating world-class universities. The United Kingdom’s Higher Education Funding Council for England (HEFCE) is one of the better-known assessment bodies, as were its Research Assessment Exercises, conducted since the 1990s. These were replaced in 2014 by a new Research Excellence Framework. “HEFCE distributes public money for higher education to universities and colleges in England, and ensures that this money is used to deliver the greatest benefit to students and the wider public.”
At the same time the U.K. was assessing the research of its universities, Japan, China, Taiwan and South Korea began developing excellence programs to enhance research and attract top students. Asian universities are rising as a result of these government policies.
Since the beginning of the 21st century, these countries have all targeted a limited number of universities to receive special funding. For example, in 1998 China introduced Project 985, designed mainly to develop 10 Chinese universities into top global ranking positions in the 21st century. An Australian policy note, Government Research Funding in 2014 in Selected Countries, observed that four-fifths of the world’s research and development (R & D) expenditure is now found in the USA, China, Japan and the European Union. In 2014 the U.S. is spending the most of any country on R & D, followed by China, Japan, Germany, South Korea, France, the U.K., India, Russia and Brazil. China’s R & D spending is 60% of that of the USA, and it is likely that China will pull ahead by 2022.
As Asian universities began focusing on creating world-class universities to join the top rankings and countries started focusing on research accountability, multi-national groups recognized a need to impose standards of accountability on the rankings themselves. A group of experienced rankers and ranking analysts from Europe and the U.S. met in 2002 to create the International Ranking Expert Group (IREG), now called the International Observatory on Rankings and Excellence. In 2006, IREG, the UNESCO-European Centre for Higher Education (UNESCO-CEPES) and the U.S.-based Institute for Higher Education Policy (IHEP) met and developed the Berlin Principles for rankings and league tables, which include:
A) Purposes and Goals of Rankings
B) Designing and Weighting Indicators
C) Collection and Processing of Data
D) Presentation of Ranking Results
The guidelines aim to ensure that “those producing rankings and league tables hold themselves accountable for quality in their own data collection, methodology.”
In 2008, OECD launched the Feasibility Study for the International Assessment of Higher Education Learning Outcomes (AHELO), which was designed to gauge “whether an international assessment of higher education learning outcomes that would allow comparisons among HEIs across countries is scientifically and practically feasible.” The final results are presented in several publications (OECD, 2013); 17 countries, representing five continents, participated in the study.
Incorporating both the Berlin Principles and the AHELO learning outcomes, the European Commission, Directorate General for Education and Culture, funded the development of a new system, U-Multirank, launched in 2014. This controversial new player is neither a ranking nor a measurement of world-class universities. We will examine U-Multirank in a later article.
Evidence-based bibliometrics used in global research rankings appeal to policy makers but are often questioned by academics. Our next step is to learn more about the most commonly used bibliometrics and what to look for when we examine rankers’ methodologies.
For those interested in more information about policy and the history of rankings, check the bibliographies in:
Pagell, R. A. 2009. University Research Rankings: From Page Counting to Academic Accountability. Evaluation in Higher Education, Vol. 3, No. 1, pp. 71-101. http://ink.library.smu.edu.sg/library_research/1 (Retrieved 25 September 2013)
Pagell, R. A. 2014. Bibliometrics and University Research Rankings Demystified for Librarians. In Chen, C. and Larsen, R. (eds.), Library and Information Science: Trends and Research. Springer. (Open Access)
European Journal of Education (March 2014) Special issue: Global university rankings. A critical
- Introduction: Unwinding the Web of International Research Rankings
- A Brief History of Rankings and Higher Education Policy
- Bibliometrics: What We Count and How We Count
- The Big Two: Thomson Reuters and Scopus
- Comparing Times Higher Education (THE) and QS Rankings
- Scholarly Rankings from the Asian Perspective
- Asian Institutions Grow in Nature
- Something for Everyone
- Expanding the Measurement of Science: From Citations to Web Visibility to Tweets
- Do-It-Yourself Rankings with InCites
- U S News & World Report Goes Global
- U-Multirank: Is it for “U”?
- A Look Back Before We Move Forward
- SciVal – Elsevier’s research intelligence – Mastering your metrics
- Analyzing 2015-2016 Updated Rankings and Introducing New Metrics
- The much maligned Journal Impact Factor
- Wikipedia and Google Scholar as Sources for University Rankings – Influence and popularity and open bibliometrics
- Rankings from Down Under – Australia and New Zealand
- Rankings from Down Under Part 2: Drilling Down to Australian and New Zealand Subject Categories
- World Class Universities and the New Flagship University: Reaching for the Rankings or Remodeling for Relevance
- Flagship Universities in Asia: From Bibliometrics to Econometrics and Social Indicators
- Indian University Rankings – The Good the Bad and the Inconsistent
- Are Global Higher Education Rankings Flawed or Misunderstood? A Personal Critique
- Malaysia Higher Education – “Soaring Upward” or Not?
- THE Young University Rankings 2017 – Generational rankings and tips for success
- March Madness – The rankings of U.S. universities and their sports
- Reputation, Rankings and Reality: Times Higher Education rolls out 2017 Reputation Rankings
- Japanese Universities: Is the sun setting on Japanese higher education?
- From Bibliometrics to Geopolitics: An Overview of Global Rankings and the Geopolitics of Higher Education edited by Ellen Hazelkorn
- Hong Kong and Singapore: Is Success Sustainable?
- Road Trip to Hong Kong and Singapore – Opening new routes for collaboration between librarians and their stakeholders
- The Business of Rankings – Show me the money
- Authors: People and processes
- Authors: Part 2 – Who are you?
- Come together: May updates lead to an investigation of Collaboration
- Innovation, Automation, and Technology Part 1: From Scholarly Articles to Patents
1Ruth A. Pagell is currently teaching in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS. https://orcid.org/0000-0003-3238-9674