Most researchers would receive more recognition if their work were evaluated independently of the journal in which it is published
A team from the United States has used data from health research to analyze the extent to which prestigious journals capture, or overlook, the science considered most influential. Their findings indicate that most of the most cited articles, and thus those considered most influential, are published in journals that are not ranked among the most prestigious. According to the study, approximately half of all researchers never publish in a journal with an impact factor above 15, which, under certain evaluation systems, could exclude them from opportunities. Overall, however, traditional journal-based measures may recognize only between 10% and 20% of influential work. The results are published in PLOS Biology.
José Luis Ortega
Senior Scientist at the Institute of Advanced Social Studies (IESA-CSIC)
The article is highly relevant because it empirically demonstrates something already known in theory: the impact of a journal cannot be directly attributed to the impact of the articles it publishes. The reason is that journal indicators such as the Journal Impact Factor are based on the average citation count of the articles published within a specific time window. The citation distribution is highly skewed, following a power law, which means that the mean is driven by the most extreme values. Thus, only a small fraction of the most cited articles (10-20%) sustain the journal's impact factor, while the rest contribute little or even pull it down. The article accordingly shows that some articles have a field-normalized citation impact (RCR, Relative Citation Ratio) higher than that of the journal in which they appear.
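To make the arithmetic concrete, the following is a minimal, purely illustrative sketch (not taken from the study): it draws citation counts for a hypothetical journal from an assumed lognormal distribution and shows that the journal's mean citation rate, the rough analogue of the impact factor, is reached by only a minority of its own articles.

```python
# Illustrative sketch only: a toy journal whose article citations follow a
# heavy-tailed (lognormal) distribution, chosen here as an assumption to
# mimic the skew described above.
import random
import statistics

random.seed(42)

# Assumed parameters: median of about exp(1.5) ~ 4.5 citations, long right tail.
citations = [random.lognormvariate(1.5, 1.2) for _ in range(2000)]

mean_citations = statistics.mean(citations)      # rough impact-factor analogue
median_citations = statistics.median(citations)
share_at_or_above_mean = sum(c >= mean_citations for c in citations) / len(citations)

print(f"mean (impact-factor analogue): {mean_citations:.1f}")
print(f"median:                        {median_citations:.1f}")
print(f"share of articles at or above the mean: {share_at_or_above_mean:.0%}")
# With these assumed parameters, only around a quarter of the articles reach
# the journal's own average: the mean sits well above what most individual
# articles achieve.
```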
However, the study is limited to the field of biomedicine (it uses PubMed as its data source). Although this phenomenon is expected in other disciplines as well, we do not know whether it is more or less pronounced in certain areas. Another limitation, which could explain why the study finds more articles above than below the journal's impact factor, is that it excludes authors who have not received NIH (National Institutes of Health) funding. This could bias the sample toward established US authors, who are more likely to produce higher-quality, above-average results.
The issue is not that the results discredit the journals themselves, but rather the practice of evaluating articles through them. Journal impact factors serve their purpose of providing a parameter with which to assess journals: a higher impact factor indicates that a journal tends to publish higher-quality results. The factor also has a gravitational effect (caused precisely by the misguided evaluation of articles on the basis of the journal): journals with higher impact factors tend to attract more submissions and can therefore select better papers, which further reinforces their impact. There is a pull effect.
Ismael Ràfols
Researcher on the evaluation of science at Leiden University (Netherlands) and at the Ingenio Institute (CSIC-UPV)
I don't think the article represents anything particularly new. Bibliometrics experts have considered for many years (at least since the 1990s) that the number of citations an article receives is a better indicator of scientific visibility than the impact factor or the journal's prestige.
Furthermore, there has always been a consensus that bibliometric indicators should not be used to evaluate individual researchers because, from a statistical point of view, there is little signal relative to the background noise and a high probability of bias. Principle number 7 of the Leiden Manifesto states that "the individual evaluation of researchers should be based on the qualitative assessment of their research portfolio."
Along these lines, the current consensus on evaluation, as expressed by the Coalition for Advancing Research Assessment (CoARA), whose agreement has been signed by the main European and Spanish organizations, is that a diversity of contributions must be evaluated, most of which cannot be captured by bibliometric indicators. CoARA states that indicators can be helpful, but that evaluation must be based on peer review.
The article also offers no surprises regarding geographical, gender, social-group, and linguistic biases. There are other, more interesting articles on the subject, for example one on language and another that covers language, economics, and gender together.
Enrique Orduña Malea
Full professor at the Polytechnic University of Valencia and member of the Evaluation and Monitoring Committee of the ANECA state accreditation system
The study raises an interesting issue, namely verifying the existence of highly cited publications in journals with a low impact factor. This raises relevant questions in the design of policies for evaluating scientific activity, with clear effects on recruitment and promotion processes in research centres.
However, this is an issue that has already been extensively addressed in the scientific literature. The potential of the work lies in the extensive dataset of publications in biomedicine and the wide variety of variables analysed.
On the downside, I believe that its structure is not the most appropriate for a social science paper. Although it is published in PLoS Biology and analyses publications in biomedicine, it is a social science article. In addition, it presents a lot of data and graphs but with little depth and discussion from a bibliometric perspective.
The article does not include the existing discussion in the literature on the use of the impact factor in evaluation processes, which is extensive, diverse and involves different schools of thought. A more in-depth review of the literature is needed to contextualise the results shown in a specific field of knowledge.
Furthermore, the article does not take into account the reforms of research evaluation carried out in many countries in recent years. In Spain, for example, the university sector no longer evaluates on the basis of the impact factor of journals: both the accreditation process and the research assessment periods ('sexenios') were reformed by ANECA and now consider a wide range of indicators and signs of quality beyond the standing of the journal. This change has been officially in force in Spain for two years.
It is true that the impact factor still has more power in competitive processes in some fields, but if it is used as a determining factor, this would be contrary to the principles of COARA, to which many universities and centres have adhered.
On the other hand, given that journals with a high impact factor are elitist journals, with high APCs [article processing charges] and focused on very specific topics and fields, the publications in these journals show biases (by age, gender, race, etc.). If the set of journals considered is broadened, these biases are attenuated, although not eliminated. This result is obvious, but it is important that the authors highlight it and provide figures to support it.
The main limitation of the study, in my opinion, is the procedure for determining whether a work is highly cited. The authors use the median RCR [Relative Citation Ratio] of publications in journals with an impact factor greater than 15 as the threshold. I believe this approach is a mistake.
It is well known that the distribution of citations among articles in a journal is highly skewed. In other words, a few papers receive many citations, while the majority receive few or none. This means that the impact factor of journals is constructed from the impact of a few papers. Although the authors use a weighted indicator (RCR), this does not avoid this problem. I do not see how the median RCR of publications in these journals can be a standard of “citability”.
There are other comparison methods that I believe would have worked better. All in all, the result is obvious: we already know that there are highly cited articles in journals with low impact factors and uncited articles in journals with high impact factors.
On the other hand, the underlying problem is that the authors continue to rely on citations as a fundamental element in the evaluation. Publication-level metrics go beyond citations, which are important, but they are not the holy grail. Citations can occur for a multitude of reasons, so the mere accumulation of citations does not necessarily constitute “impact” or “influence”. Evaluation processes require the use of a wide variety of indicators, as well as expert judgement, in order to be carried out rigorously. Furthermore, they will depend on the objectives of the evaluation process, which can vary greatly.
Today, mere co-authorship (in any journal, whether high impact or not) or citation is not synonymous with quality, impact or reputation.
A high impact factor is indicative of how difficult it is to publish in a journal. It signals the journals that are in greatest demand from the community as outlets for their work: journals with a powerful brand image, accumulated prestige, and a great capacity for dissemination and for attracting attention.
This undoubtedly influences people's behaviour, as they prefer to cite works published in high-impact journals because this can help convince those evaluating the work or even attract the attention of those cited, and it also helps them to show a level of “prestige” in evaluation processes. However, this mark of quality and reputation is a “social construct”.
High-impact journals are attractive destinations for good work; it is also assumed that peer review there will be more rigorous than in other journals, since they receive many submissions and therefore have high rejection rates. But that is the theory: it guarantees neither that the review will be of high quality (this depends on many variables) nor that an article published in one of these journals will be relevant or cited.
It should be understood that a journal is a conglomeration of good, average and poor articles. This does not mean that journal metrics are useless. They can be informative about editorial quality or even the ability to publish relevant work within a discipline. The ability to publish in high-impact journals is only one small aspect of the factors that should be considered in the evaluation process of an individual. The article does not question the value of high-impact journals, but rather the evaluation processes that focus solely on publication in certain journals.
Isidro F. Aguillo Caño
Head of the Cybermetrics Laboratory and Deputy Technical Director of the Institute of Public Goods and Policies (IPP-CSIC)
This work does not contribute anything that was not already known. The distribution of citations in articles follows a power law that is poorly described by a mean value such as the impact factor.
To clarify: only 20% of papers will receive more actual citations than the expected value according to the journal's impact factor, while the remaining 80% will receive far fewer or even no citations.
Ninety-five per cent of journals have impact factors below 10. There will be journals classified as Q1 [first quartile] in which most articles receive ten, five, or even fewer citations.
The situation is even worse now, because Q1 journals already publish 60-70% of indexed papers, so we can hardly consider this group synonymous with excellence.
Conflicts of interest: ‘I frequently work with the Scimago group, which has developed the SJR indicator, a competitor to the impact factor.’
Arabi et al.
- Research article
- Peer reviewed