“Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.” (Nature, 2019)
By Ruth A. Pagell*
(22 Dec 2019) This article looks at two concerns about questionable¹ publications raised in recently published articles:
- The appearance of questionable publications in recognized clean lists
- The proliferation of “alternative” journal indexes and journal quality metrics
When I began researching bibliometrics and university research rankings, I had a narrow view of the relevant subtopics. Publications and citations for the world’s top universities were the primary themes. As rankings have become more pervasive (or invasive), there has been a realization that the metrics used in the standard rankings do not adequately reflect the role of universities. Ruth’s Rankings’ scope has expanded to include the external environment impacting higher education, including economic, social and political indicators. It also encompasses the growing field of “scholarly communications” within the publishing, research, library and information communities, which includes open access publications and questionable publishers and their impact not only on the quality of output but also on bibliometrics.
Questionable journal articles in respected journal lists
A recent article in the Scholarly Kitchen aroused interest in this topic (Anderson, 2019). Predatory practices are covered in the scholarly literature but not so much in popular scholarly communication. Anderson examined seven journals, each of which had published one of four “sting” articles, and checked their citations in “legitimate” publications. The list of journals was supposed to be secret but was easily revealed at the end of the article.
I re-ran the data using the curated lists from Web of Science and Scopus. None of the journals was indexed in WOS; one was indexed in Scopus and was discontinued when it changed publishers. It was cited in both sources. Based on Scopus data, fewer than 40% of the cited references were in open access journals. DOAJ is the curated list that includes only open access publications. DOAJ eliminated many of the journals that first appeared in its listing, and the same journal indexed in Scopus was indexed, dropped and re-indexed in DOAJ. See Appendix A Table 1 (in pdf) for more information.
This raises three issues for me.
1 – Some readers confuse citations to questionable journals appearing in WOS or Scopus records with questionable journals actually being indexed in WOS or Scopus. Journals are reviewed and removed if they fail to meet the vendors’ standards. When Clarivate Analytics added Publons to its WOS products, it added a list that includes both vetted and unvetted journal titles. Publons’ list falls into the questionable category. VERY careful reading of the introduction indicates which journals are vetted.
Given the size of the Web of Science and Scopus datasets, each with over 40 million records covering the past 20 years, it is unreasonable to expect no questionable journals in their datasets, and it is the responsibility of legitimate journal editors to check citation sources.
2 – There is a misunderstanding about the purpose of many standard library and web-based lists:
- Clarivate Analytics² and Scopus³ provide free access to their journal lists.
- WorldCat is a mega-library catalog including any item cataloged by a member library. Every one of Anderson’s journals is in WorldCat.
- Ulrich’s Global Serials Directory was long the librarians’ bible for serials holdings, but it has not kept up with the times. Inclusion in Ulrich’s is not a quality indicator but a record of whether a publication is alive or dead.
- The holdings in ORCID, ResearchGate, or other author-generated datasets are what the researchers have chosen to post. Many of the records in ResearchGate end up in Google Scholar, which harvests the web.
- Subscription services EBSCO and ProQuest provide free title lists for their individual databases. These large database providers include many publications that are NOT scholarly publications. They also host third-party databases and are not responsible for those publication lists. See Appendix A Table 2 (in pdf) for the prevalence of questionable journals in some of these sources.
3 – The third issue involves determining whether there are authors, author groups, journal publishers or institutions that regularly cite questionable journals. I have not seen an article addressing this issue.
Measures of Journal Quality
I have been covering journal quality metrics and issues since 2014, with an early Ruth’s Rankings (RR 4) and an article in Online Searcher (Pagell, 2014), followed by Ruth’s Rankings 37 (Parts 1 and 2). These articles do not cover imposter impact factors, which are often associated with questionable journal publishers, directories or web aggregators.
Librarians may know the differences between the Journal Impact Factor™ (JIF), based on Clarivate Analytics’ WOS data, and the journal rankings based on Elsevier’s Scopus data (Elsevier’s CiteScore™, Scimago’s SJR and CWTS’ SNIP), and any other sources posing as “impact factors”. Researchers are aware of the concept of “impact factor”. In a survey of researchers by Taylor and Francis (2019), 76% said that high impact is very or somewhat important in selecting a journal. They may not realize that the Journal Impact Factor™ is owned and trademarked by Clarivate Analytics and that any other publisher’s “Impact Factor” is not just an alternative but an illegal use of the phrase.
According to a spokesperson at Clarivate Analytics, “We have trademark protection for both the Impact Factor® and Journal Impact Factor® in our key jurisdictions, are constantly reviewing our portfolio to ensure adequate protection of our brands on a global basis.” Despite this care in monitoring usage of the term, many open access journal indexes and journal publisher sites include “impact factors”.
The legitimate JIF is a quantitative calculation based on citations from a fixed set of journals in a set time period to articles published during an earlier set time period. For example, the calculation could use citations in 2018 to items published in 2016 and 2017.
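As a rough sketch, assuming the commonly described two-year window and a denominator of citable items (my wording, not Clarivate’s official definition), the example above works out as:

$$
\mathrm{JIF}_{2018} \;=\; \frac{\text{citations in 2018 to items the journal published in 2016 and 2017}}{\text{citable items the journal published in 2016 and 2017}}
$$

For illustration only, a journal whose 2016 and 2017 articles drew 500 citations in 2018 across 200 citable items would have a 2018 JIF of 2.5 (hypothetical numbers).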
I intended to write a follow-up article on the proliferation of these “questionable” impact measures and discovered that research had already been done on the topic (Xia 2019, Xia & Smith 2018). Reading the articles raises another area of misunderstanding for a naïve researcher. If “Journal Impact Factor” is a trademark, and ISI is the Institute for Scientific Information, how do you write about other impact factors and another company that has taken the ISI abbreviation?
The first article, by Xia and Smith, provides a long list of journal quality metrics that do not come from our familiar library providers. Many are traceable to an updated Beall’s list. The second article, by Xia, examines some alternative journal lists. I think that some researchers turn to these indexes because of a misunderstanding about the availability of the free lists from the subscription services highlighted above. The Xia articles are not open access. See Table 3 (in pdf) for a list of questionable journal quality metrics.
CONCLUSION
At the end of Ruth’s Rankings 37 Part 2 I recommended the website Think, Check, Submit. I still subscribe to their recommendations. However, by working my way through the limited number of journals on Anderson’s list and the auxiliary questionable resources highlighted by Xia and Smith, I realized just how difficult it is to determine if a journal, publisher or metric is legitimate.
Xia and Smith concluded that results were hard to replicate and criteria hard to find; I wasted time trying to see what I could find and came to the same conclusion. It is also difficult even to determine whether the websites for these indexes and “impact factors” are still live.
WOS (without Publons), Scopus and DOAJ should not be our concern. It is the proliferation of seemingly legitimate lists populated by researcher entries, or of lists that harvest open research datasets, that should concern us.
Positive Actions: Instead of mining for questionable sites and studying blacklists, we should
1 – Lead researchers, administrators and librarians to legitimate sites. The only services dedicated to keeping their sources clean are Web of Science, Elsevier, Scimago and CWTS, DOAJ.org and Cabells. Cabells has black and white lists, neither of which provides free information.
2 – Emphasize the different applications of the different types of lists.
3 – Understand that not every citation of a questionable article in a legitimate article is positive; citing an article is not necessarily an endorsement (Kisely 2019).
4 – Be careful about labelling a journal “predatory” without thorough research. Two journals may have the same name, a journal may have been legitimate before being sold to a questionable publisher, or legitimate articles may end up in questionable publications.
5 – Create a list of positive practices.
NOTES
¹ Authors hesitate to use “predatory” since it implies intent; some journals are just bad science. I use the term “questionable” to include any journals, publishers, journal indexes or quality metrics whose practices are possibly or probably predatory. The term “alternative” implies that these are sources researchers could use; some may be, but most are not.
² The new Clarivate Master Journal List is difficult to use. Register to download the entire master list: https://mjl.clarivate.com/login;createAccount=false;referrer=%2Fcollection-list-downloads
³ To find the Scopus list scroll down the page to journal content. https://www.elsevier.com/solutions/scopus/how-scopus-works/content
REFERENCES
Anderson, R. (Oct 2019). Citation contamination: References to predatory journals in the legitimate scientific literature. The Scholarly Kitchen.
Comment. (11 Dec 2019). Predatory journals: No definition, no defence. Nature. Retrieved from https://www.nature.com/articles/d41586-019-03759-y#ref-CR4
Kisely, S. (2019). Predatory journals and dubious publishers: How to avoid being their prey. BJPsych Advances, 25(2), pp. 113-119. Cites two of Anderson’s journals to illustrate how both published the same scam articles.
Pagell, R.A. (2014). Insights into InCites: Journal citation reports and essential science indicators. Online Searcher, 38(6), pp. 16-19.
Taylor & Francis Researcher Survey (Oct 2019). Retrieve the entire survey at https://authorservices.taylorandfrancis.com/researcher-survey-2019/
Xia, J.F. (2019). A preliminary study of alternative open access journal indexes. Publishing Research Quarterly, 35, pp. 274-284. Request full text from the author through ResearchGate: https://www.researchgate.net/publication/330868404_A_Preliminary_Study_of_Alternative_Open_Access_Journal_Indexes
Xia, JF & Smith MP (30 Aug 2018). Alternative journal impact factors in open access publishing. Learned Publishing 31(4) pg. 403-411. Request full text from authors through ResearchGate: https://www.researchgate.net/publication/327322862_Alternative_journal_impact_factors_in_open_access_publishing
Ruth’s Rankings
A list of Ruth’s Rankings and News Updates is here.
*Ruth A. Pagell is emeritus faculty librarian at Emory University. After working at Emory, she was the founding librarian of the Li Ka Shing Library at Singapore Management University and then adjunct faculty [teaching] in the Library and Information Science Program at the University of Hawaii. She has written and spoken extensively on various aspects of librarianship, including contributing articles to ACCESS – https://orcid.org/0000-0003-3238-9674