By Ruth A. Pagell*
(12 September 2015) MasterChef is coming to Asian TV in September 2015, but if cooking is not your passion, open up the guidebooks provided by Elsevier and learn how to cook up your own world-standard metrics.
Articles in e-Access and Online Searcher examined Thomson Reuters’ subscription analytical platform InCites in depth, both as a tool for institutional-level performance measurement (Pagell, Online Searcher 2014, 2015) and as a tool for creating your own rankings (Ruth’s Rankings 10).
Ruth’s Rankings 4 introduced Elsevier’s Scopus. Elsevier also has a suite of tools based on Scopus called Research Intelligence Solutions. The solutions include subscription products such as SciVal, Elsevier’s equivalent of InCites; PURE, an institution-level research information tool; and customized analysis options.
This article focuses on the purpose, process and performance measures in SciVal and introduces Snowball Metrics and their “recipes”. Two-thirds of SciVal’s use comes from universities’ Vice Presidents of Research or Offices of Research Development, with about one-third from libraries. Prema Arasu, CEO and Vice Provost at Kansas State University’s new Olathe campus, recommends that libraries become more involved in institutional metrics research (Arasu, 2014).
SciVal is designed to:
- Evaluate performance by a national body or institution
- Demonstrate excellence – choose a metric to showcase
- Model scenarios
I started by examining SciVal for its relevance to research rankings and discovered that its most important contribution for most of you is not the data but the open-access guidebooks: the SciVal Metrics Guidebook (Colledge, February 2014), the SciVal User Guide (2015) and the Snowball Metrics Recipe Book (Colledge, June 2014). These are useful to everyone interested in using bibliometrics for evaluation and analysis, no matter what data or ranking sources they use. Therefore, I focus on the Metrics Guidebook and its companion Recipe Book before looking at the platform.
SCIVAL METRICS
SciVal Metrics are defined and explained in detail in the SciVal Metrics Guidebook. Some are highlighted as “Snowball Metrics”, explained below. For each metric, the Guidebook provides the following sections:
- Definition of the metric;
- “This metric is useful for”;
- “This metric should be used with care when”;
- “Useful partner metrics” (SciVal recommends using more than one metric; see Table 4, p. 36, Metrics Guidebook); and
- Examples
Previous articles discussed factors besides research performance that affect metrics. The Metrics Guidebook identifies these and explains how partner metrics are designed to offset some of them. The factors include:
- Size – size-dependent vs size-independent metrics, e.g. total publications vs publications per author
- Discipline – neuroscience vs the arts and humanities; publication frequency, reference-list length and number of co-authors all contribute to the differences
- Publication type – review articles tend to be cited most frequently (see Fig. 2, p. 31, Metrics Guidebook)
- Database coverage – geographical: the US and UK publish most of the scholarly journals in Scopus and Web of Science; international coverage is weakest in history, literature and culture, where publication is often local. Fewer than 30% of scholarly publications in the arts and humanities and the social sciences are included in Scopus
- Manipulation – ways of increasing counts, such as including or excluding self-citations, or combining research units to enhance output
- Time – citation impact and the h-index grow with time; Thomson Reuters’ Journal Citation Reports illustrates this in journal metrics such as cited half-life. If articles in a humanities journal take an average of seven years to be cited, but a ranking covers only the most recent five years, its scores will of course be lower than those of a discipline with a half-life of five years or less (see the sketch after this list)
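To make the time factor concrete, here is a minimal sketch of a simplified cited half-life calculation. It assumes citations to a journal are binned by article age; the numbers are invented, and the integer-year shortcut is mine (JCR interpolates fractional years).

```python
# Simplified cited half-life: the article age at which half of all
# citations counted in the census year have accumulated.
def cited_half_life(citations_by_age: list[int]) -> int:
    total = sum(citations_by_age)
    running = 0
    for age, count in enumerate(citations_by_age, start=1):
        running += count
        if running >= total / 2:
            return age
    return len(citations_by_age)

# A slow-citing humanities-style journal: citations keep accumulating
# for years, pushing the half-life out to seven years.
print(cited_half_life([3, 4, 5, 6, 7, 8, 9, 10, 11, 12]))  # 7
```

A five-year ranking window would capture only a third of this journal’s citations, which is exactly the disadvantage described above.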
Incorporated in SciVal’s metrics are Snowball Metrics: “A Snowball Metric is one which has been defined and agreed by research focused universities as being useful in supporting strategic planning by enabling benchmarking between institutions” (Colledge, June 2014, p. 32). Snowball Metrics are data-source and system agnostic, meaning that they are not tied to any particular data or tools. Working groups from the UK, US and Australia/New Zealand have been involved in creating the metrics; for a list of institutions, see the Recipe Book, p. 112.
Snowball Metrics draw not only on Scopus output but also on Web of Science and the Book Citation Index, Google Scholar, WorldCat.org, and data from a CRIS (current research information system) or institutional output repository.
The Snowball Metrics Recipe Book (Colledge, June 2014) describes additional Snowball Metrics. Table 14.1 presents SciVal metrics with notes on their applications; Snowball Metrics are in bold.
SCIVAL PLATFORM
The SciVal subscription platform includes four analytical modules: Overview, Benchmarking, Collaboration and Trends plus MySciVal, which is an index to the content in the other modules. There are pre-made templates for each module. Users must select at least one entity from categories labeled Institutions and Groups, Researchers and Groups, Publication Sets, Countries and Groups or Research Areas to get started. The modules have different entity choices for selection and display. See Table 14.2 for the structure of the SciVal modules. Figure 14.1 illustrates the navigation of the platform using the Overview module.
Overview is the basic module, presenting metrics for scholarly research for each category of entities. Table 14.3 identifies the metrics in the Overview module for each entity choice.
The majority of institutions are universities; the following organization types are included:
Academic: university, college, medical school, research institute; Corporate: corporation or law firm; Medical: hospital; Government: government & military organization; Other: non-governmental organization.
SciVal uses algorithms to disambiguate author and institutional affiliation names; Researcher profiles add manual refinement to the algorithm. Problems with common names, especially when affiliation is not clear, are similar to those in InCites (Pagell, 2015).
SciVal uses whole rather than fractional counting for articles with multiple authors. When counting co-authored publications for institutions, researchers, publication sets and countries, a publication is counted once, in full, for each entity. A publication belongs to an institution or country based on the authors’ affiliations when the paper was written; in Researchers and Publication Sets, publications belong to the researchers regardless of their past or current affiliations.
Example: An article has four authors – two internal, one from another institution in the same country and one from a different country. The entire article is counted once as international, twice for the two countries, three times for the three different institutions and four times, once for each author (see the sketch below). Journal articles are assigned a discipline based on the entire journal. For Trends, it is assumed that not all articles in a journal are in the same discipline, so a research area is defined at the article level.
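Here is a minimal sketch of that whole-counting rule; the record structure and field names are my own illustration, not SciVal’s internal representation.

```python
# Whole counting: a publication counts once, in full, for every
# distinct entity it belongs to -- no fractional credit is shared.
from dataclasses import dataclass

@dataclass(frozen=True)
class Author:
    name: str
    institution: str
    country: str

# The example from the text: two internal authors, one from another
# institution in the same country, one from a different country.
authors = [
    Author("A", "Home University", "MY"),
    Author("B", "Home University", "MY"),
    Author("C", "Other University", "MY"),
    Author("D", "Foreign University", "US"),
]

institutions = {a.institution for a in authors}
countries = {a.country for a in authors}

print(len(authors))        # 4 -- one full count per author
print(len(institutions))   # 3 -- one full count per institution
print(len(countries))      # 2 -- one full count per country
print(len(countries) > 1)  # True -- counted once as international
```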
Overview is the only module providing a list of countries and institutions within pre-formatted groups, such as Asia-Pacific, South America and the Middle East, or less familiar groupings such as CIVETS (Colombia, Indonesia, Vietnam, Egypt, Turkey and South Africa). Overview answers questions such as: In the pre-set Asia-Pacific region, which countries have the highest output? (China and India.) Which institutions lead in a research area such as Computer Science? (Tsinghua and the Chinese Academy of Sciences.) Who are the top authors in Malaysia in Agricultural and Biological Sciences?
These results can be downloaded in a spreadsheet.
Unique to SciVal in Overview is “Competencies”, based on keywords in publications. It is designed to help an institution or country identify its research strengths (see SciVal User Guide, Section 3.5, p. 21). Generate a table of competencies, export the table and display the competencies in a ring or a matrix; you need the table to interpret the visuals. The matrix is an adaptation of the Boston Consulting Group (BCG) matrix, with Competency Growth on the Y axis and Relative Publication Share on the X axis, as sketched below. See Figure 14.2 for an example of a competency ring.
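As a rough illustration of how a BCG-style matrix sorts competencies, here is a sketch; the quadrant labels and threshold values are my own assumptions, since SciVal does not publish its cut-offs.

```python
# BCG-style classification on the two axes the matrix uses:
# Competency Growth (Y) and Relative Publication Share (X).
# Thresholds and quadrant names below are illustrative only.
def classify(growth: float, share: float,
             growth_cut: float = 0.0, share_cut: float = 1.0) -> str:
    if growth >= growth_cut:
        return "growing strength" if share >= share_cut else "emerging niche"
    return "established strength" if share >= share_cut else "declining area"

print(classify(growth=0.15, share=1.8))   # growing strength
print(classify(growth=-0.05, share=0.4))  # declining area
```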
The Benchmarking module includes fifteen metrics; however, only three can be displayed or downloaded at one time. Selecting Asia Pacific in Benchmarking displays the total metrics for the region; to see individual institutions, I had to input each one separately. Table 14.4 is the download from a Benchmarking comparison of institutions in a research area. Figure 14.3 displays a comparison of an institution, a researcher and a research area in one search.
Collaboration is determined at the article level, and collaboration types are labelled individual, institutional, national and international. Only institutions and countries can be analyzed in the Collaboration module, although the other modules also provide data on collaboration.
Trends, launched in February 2015, tracks research areas, both SciVal’s pre-set areas and any topic you search. It introduces a whole new layer of metrics, usage data derived from Scopus and ScienceDirect, and is accompanied by its own Usage Guidebook (2015). See Figure 14.4 for an example of using Trends.
RANKINGS
No Ruth’s Rankings article is complete without some rankings. See Table 14.5 for comparative rankings using SciVal’s Field-Weighted Citation Impact (FWCI) and impact rankings from THE Asia and the SCImago Institutions Rankings (with special permission from SCImago).
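For readers new to the metric, FWCI normalizes citations against the world average for comparable publications (same field, publication type and publication year), so 1.00 means exactly average. A minimal sketch, with invented numbers:

```python
# Field-Weighted Citation Impact: actual citations divided by the
# expected (world-average) citations for publications of the same
# field, type and year. The expected values here are invented.
def fwci(actual_citations: int, expected_citations: float) -> float:
    return actual_citations / expected_citations

print(fwci(12, 6.0))  # 2.0 -- cited at twice the world average
print(fwci(3, 6.0))   # 0.5 -- cited at half the world average
```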
STRENGTHS AND LIMITATIONS OF SCIVAL AS A RANKINGS TOOL
SciVal is more a tool for explaining an institution’s rankings than a tool for producing rankings. Its strengths lie in its unique metrics for evaluating individual institutions or countries, its Collaboration details and the capabilities of the new Trends module for research areas. The detailed documentation provided in the guidebooks is a must for anyone interested in bibliometrics.
SciVal provides the raw data but does not score or rank; its application to research rankings is therefore up to individual users. Choosing a pre-selected country group in Overview generates a list of individual countries or institutions. However, the comparison covers only five metrics, of which three can be downloaded at a time: total publications, total authors and one of three citation options (total citations, citations per publication or Field-Weighted Citation Impact). Except for output and total citations, these do not map well onto our current rankings sources. Benchmarking requires manually entering your entities, as the country groups are not disaggregated; fifteen metrics are available, but only three can be displayed at one time.
OTHER PIECES
PURE is for individual institutional profiles. It aggregates an organization’s research information from numerous internal and external sources, starting with Scopus and interfacing with Web of Science, repositories and file attachments. An offshoot of PURE is Experts, with profiles from 160 institutions, some of which are available on the web: https://www.elsevier.com/solutions/pure/who-uses-pure/clients.
Since most rankings use some citation metrics, the next Ruth’s Rankings will explore citations and citation counting.
Mahalo to Shereen Hanafi, Head of Marketing, Research Management, at Elsevier for her support, and to SCImago for giving me research access to SIR.
REFERENCES
Arasu, P. (December 2014). A role for the library in awakening to the power and potential of institutional metrics for research. Accessed September 2, 2015 at http://www.snowballmetrics.com/wp-content/uploads/LCN_Arasu_Dec-8-2014.pdf
Colledge, L. (June 2014). Snowball Metrics Recipe Book, 2nd ed. Accessed August 30, 2015 at http://www.snowballmetrics.com/wp-content/uploads/snowball-metrics-recipe-book-upd.pdf
Colledge, L. (February 2014). SciVal Metrics Guidebook, version 1.01. Accessed August 30, 2015 at http://www.elsevier.com/__data/assets/pdf_file/0020/53327/scival-metrics-guidebook-v1_01-february2014.pdf
Pagell, R. A. (2015). InCites’ Benchmarking and Analytics. Online Searcher, Vol. 39, No. 1, pp. 16-21. https://www.researchgate.net/publication/281350558_InCites’_Benchmarking_and_Analytics
Pagell, R. A. (2014). Insights into InCites. Online Searcher, Vol. 38, No. 6, p. 16. https://www.researchgate.net/publication/281350489_Insights_into_InCites_Journal_Citation_Reports_and_Essential_Science_Indicators
Research information management: Developing tools to inform the management of research and translating existing good practice (2010). Conducted by Imperial College London and Elsevier, funded by JISC (Joint Information Systems Committee). Accessed August 30, 2015 at http://www.snowballmetrics.com/wp-content/uploads/research-information-management1.pdf
SciVal User Guide (June 30, 2015). Accessed at http://www.op.mahidol.ac.th/orra/SciVal/SciVal_USER_GUIDE.pdf
Usage Guidebook, v. 1.01 (March 2015). Accessed September 4, 2015 at http://www.elsevier.com/__data/assets/pdf_file/0007/53494/ERI-Usage-Guidebook-1.01-March-2015.pdf
Ruth’s Rankings
A list of Ruth’s Rankings and News Updates is here.
*Ruth A. Pagell is an adjunct faculty member in the Library and Information Science Program at the University of Hawaii. Before joining UH, she was the founding librarian of the Li Ka Shing Library at Singapore Management University. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS. https://orcid.org/0000-0003-3238-9674