Ruth’s Rankings 9: Expanding the Measurement of Science: From Citations to Web Visibility to Tweets

By Ruth A. Pagell*

(31 March 2015) “…there is a growing movement within the scientific establishment to better measure and reward all the different ways that people contribute to the messy and complex process of scientific progress.”  Arbesman (2012), Wired.

The rankings we examined in Ruth’s Rankings 5, 6, 7 and 8 are grounded in part or in full on the traditional bibliometrics of output and citations in scholarly publications. Do these metrics fully capture researchers’ output and their institutions’ impact today?

This article presents three sets of metrics: web visibility as part of Scimago Institutions Rankings (SIR), webometrics in the Ranking Web of Universities, and alternative metrics from Altmetric.com. Webometrics and altmetrics provide alternative and complementary measures of scholarly activity, reflecting the 21st century emphasis on social impact and attention. See Article 9 Readings for further details.

The new funding policy of the U.S. National Science Foundation illustrates this change in philosophy. In an article in Nature, Piwowar (2013) notes that NSF changed the word ‘publications’ to ‘research products.’ She says, “This means that, according to the NSF, a scientist’s worth is not dependent solely on publications. Data sets, software and other non-traditional research products will count too.” New tools are measuring the impact of these non-traditional research products and methods.


Scimago Institutions Rankings 

A module of Scimago Institutions Rankings (SIR) that we did not cover in Ruth’s Rankings 8 is “Web Visibility,” which uses the same platform and institutions as SIR. There are two metrics, Website Size and Domain’s Inbound Links; see Table 9.1, SIR Web Visibility Rankings Methodology. Both of these metrics are size dependent: the more web pages an institution has, the higher its ranking on web output. Tables 9.2a and 9.2b compare the two Web Visibility metrics with the SIR metric Publication Output for the world and for Asia (excluding the Middle East). We can see in Table 9.2a that Harvard is the only one of the top ten worldwide universities in publication output to appear in the top ten in website size. In Table 9.2b, Asian universities, Seoul National University appears in the top ten on all three lists, and Shanghai Jiao Tong is top ten in both publication output and website size. Institutions from six different Asian countries appear on the list, including Malaysia and Thailand, while Japan is notably absent from the top ten in both Web Visibility metrics.

Ranking Web of Universities

Ranking Web of Universities (Webometrics), another product from the umbrella group that provides SIR, has been published twice a year since 2004. As of January 2015, it includes over 23,000 institutions of higher education, from top world universities such as Harvard to diploma-granting academies such as the Pacific Flying School of Fiji. See Table 9.3 for the methodology for these rankings. Webometrics has a composite score (WR) based on visibility, or impact, and activity, which includes presence, openness (open access) and excellence. Many institutions do not have a real score for excellence, since excellence data are only available for the 2,700 institutions in SIR, and these institutions are generally lower in impact as well.

There is no way to limit results by type or size of institution, although searching by geographic area is flexible. The rankings do not measure website design, visits or visitors.

Webometrics Ranking Results

I expected to find results that differed from other composite rankings. First, I compared the rankings across the different Webometrics indicators for both the world and for Asia (excluding the Middle East), and then I compared the top institutions in Webometrics with our other research rankings. Table 9.4 compares the Webometrics composite (WR) top ten institutions with their component measures. In Table 9.4a, all of the top ten in the world for Webometrics were from the US. Eight of them appeared on a top graduate school ranking based on peer review ninety years ago! (Magoun, 1966). Nine of the ten were in the top ten for impact, while only one, Harvard, was in the top ten for openness. In fact, only four out of ten were in the top 100 on openness. National Taiwan University was top overall in Asia, and only Xiamen University had not appeared on other lists. Two were in the top ten for openness and four in the top 100. If I were a statistician, I would be interested in the relationship between the WR rank and openness.

Webometrics ranking results are similar to other bibliometric rankings (Aguillo, 2008, 2010). “If the web performance of an institution is below the expected position, according to their academic excellence, university authorities should consider their web policy, promoting substantial increases of the volume and quality of their electronic publications” (Methodology, objectives).

Webometrics also includes separate rankings for repositories, including portals, hospitals and research centers (three Asian repositories are in the top ten). There are over 2,100 repositories, with over 500 from Asia.


Altmetrics

The underlying impact measure, the number of citations to a peer-reviewed article in a peer-reviewed journal, was the 20th century gold standard. In the 21st century, scholars are exploring altmetrics to measure “attention” through social media, including Twitter and Facebook, blogs, news and open repositories. These measures are complementary to standard bibliometrics. There is no world ranking based on altmetrics. What there is, however, is a growing change in thinking about scholarly output, starting with the Altmetrics manifesto (2010).

The provider of the altmetrics used by services such as Scopus is Altmetric.com, which began tracking activity around research outputs (articles, datasets, software, books) in late 2011.

Table 9.5 explains the methodology, including content and weightings, for Altmetric.com. Tracking begins at the scholarly article level and is referred to as “online attention”.

Any individual can download a “Bookmarklet” application with which you can track any article from PubMed, arXiv or pages containing a DOI. Most publishers make their DOIs available on their websites. The bookmarklet only supports publishers who embed Google Scholar-friendly citation metadata on their pages by default (Example 9.1). Note that Altmetric.com covers a limited number of open access scholarly repositories. Missing is ResearchGate, reportedly the largest research portal. For a portal to be included, open access and a willingness to cooperate are required. Altmetric.com does have a trial with ORCID, the online registry of unique researcher identifiers. I entered my ORCID iD and my list of articles appeared, but most are too old or too new to have viable scores (Example 9.2). The advantage of using ORCID is that it is an official registry of author names, as supplied by the author.
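To see what “Google Scholar-friendly citation metadata” looks like in practice, here is a minimal sketch. The `citation_*` meta-tag names follow Google Scholar’s inclusion guidelines for webmasters; the parsing code and the sample page are my own illustration of how a tracking tool could pick up a DOI, not Altmetric’s actual implementation:

```python
from html.parser import HTMLParser

class CitationMetaParser(HTMLParser):
    """Collects Google Scholar-style <meta name="citation_*"> tags from a page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name", "")
        if name.startswith("citation_"):
            self.meta[name] = attrs.get("content", "")

# An invented publisher page header with embedded citation metadata.
page = """
<html><head>
<meta name="citation_title" content="An Example Article">
<meta name="citation_doi" content="10.1000/example.doi">
<meta name="citation_publication_date" content="2015/03/31">
</head><body>...</body></html>
"""

parser = CitationMetaParser()
parser.feed(page)
print(parser.meta["citation_doi"])  # the DOI a tracking tool could extract
```

Pages that lack these tags give a bookmarklet nothing machine-readable to latch onto, which is why publisher support matters.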

Altmetric Explorer is a fee-based service for institutions that allows an institution to see its profile and compare it with other subscribers. A sample login for an imaginary university is available on request. Example 9.3 displays the raw data for the different measures. There is also a series of filters, including a list of journals and publishers covered by Altmetric.com.

There is an API available for publishers to integrate the metrics into their articles.
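For a sense of how such an integration works: Altmetric’s public API exposes a DOI lookup endpoint of the form `api.altmetric.com/v1/doi/<doi>`. The sketch below builds such a request URL and summarizes a trimmed, invented response; the field names are my assumption about the response shape and should be checked against Altmetric’s API documentation (no network call is made here):

```python
import json
from urllib.parse import quote

API_BASE = "https://api.altmetric.com/v1"  # Altmetric public API, version 1

def doi_url(doi: str) -> str:
    """Build the DOI lookup URL (the request itself is left to the caller)."""
    return f"{API_BASE}/doi/{quote(doi)}"

# A trimmed, invented response; real responses carry many more fields.
sample = json.loads("""
{"doi": "10.1000/example.doi",
 "score": 12.5,
 "cited_by_tweeters_count": 30,
 "cited_by_posts_count": 35}
""")

def summarize(record: dict) -> str:
    """Render the attention data a publisher might show beside an article."""
    return (f"{record['doi']}: attention score {record['score']}, "
            f"{record['cited_by_tweeters_count']} tweeters")

print(doi_url("10.1000/example.doi"))
print(summarize(sample))
```

A publisher page would fetch this JSON for each article’s DOI and render the counts alongside traditional citation metrics.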

I want to thank Euan Adie, founder of Altmetric.com, for the help he provided in navigating and interpreting the service.


My role in this column is to present you with alternative methods to evaluate and rank scholarship. Which, if any, of these metrics you choose to use depends on your individual institution and its values. Traditional measurements of scholarship, based on journal publications and citations from sources such as Thomson Reuters and Scopus, are not capturing the new dissemination methods for scholarly research on the web and in social media. The sampling of Readings shows that scholars and funders are looking at alternative measurements. Even the premier scholarly journal Nature has been discussing the topic over the past two years.

We are experiencing a sea change in scholarly communication. With the change in the ways and speed of dissemination of scholarly materials, there is a growing recognition among scholars and funders that current measures do not fully capture the impact of an individual author, article or institution. The measures presented here are complementary to existing methodologies. They have obvious disadvantages, based on our scholarly criteria, which emphasize not only size but also quality. The reality is that we need new measurements. It may not be tweets or inbound links or even existing journal impact factors. However, our global rankings five years from now will be different. Tell me what you think by email.

Next month I will write about the other end of the spectrum, InCites, a proprietary product from Thomson Reuters that allows institutions to do their own benchmarking; and we will go back over the previous rankings to see where underlying data are available.

Ruth’s Rankings

  1. Introduction: Unwinding the Web of International Research Rankings
  2. A Brief History of Rankings and Higher Education Policy
  3. Bibliometrics: What We Count and How We Count
  4. The Big Two: Thomson Reuters and Scopus
  5. Comparing Times Higher Education (THE) and QS Rankings
  6. Scholarly Rankings from the Asian Perspective 
  7. Asian Institutions Grow in Nature
  8. Something for Everyone
  9. Expanding the Measurement of Science: From Citations to Web Visibility to Tweets
  10. Do-It-Yourself Rankings with InCites 
  11. U S News & World Report Goes Global
  12. U-Multirank: Is it for “U”?
  13. A look back before we move forward
  14. SciVal – Elsevier’s research intelligence –  Mastering your metrics
  15. Analyzing 2015-2016 Updated Rankings and Introducing New Metrics
  16. The much maligned Journal Impact Factor
  17. Wikipedia and Google Scholar as Sources for University Rankings – Influence and popularity and open bibliometrics
  18. Rankings from Down Under – Australia and New Zealand
  19. Rankings from Down Under Part 2: Drilling Down to Australian and New Zealand Subject Categories
  20. World Class Universities and the New Flagship University: Reaching for the Rankings or Remodeling for Relevance
  21. Flagship Universities in Asia: From Bibliometrics to Econometrics and Social Indicators
  22. Indian University Rankings – The Good the Bad and the Inconsistent
  23. Are Global Higher Education Rankings Flawed or Misunderstood?  A Personal Critique
  24. Malaysia Higher Education – “Soaring Upward” or Not?
  25. THE Young University Rankings 2017 – Generational rankings and tips for success
  26. March Madness – The rankings of U.S. universities and their sports
  27. Reputation, Rankings and Reality: Times Higher Education rolls out 2017 Reputation Rankings

*Ruth A. Pagell is currently an adjunct faculty member teaching in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS.