José Luis Ortega
Senior Scientist at the Institute of Advanced Social Studies (IESA-CSIC)
The article is highly relevant because it demonstrates empirically something that was already known in theory: the impact of a journal cannot be directly attributed to the impact of the articles it publishes. The reason is that journal indicators such as the Journal Impact Factor (JIF) are based on the average number of citations received by the articles published within a specific window (for the JIF, the two preceding years). The citation distribution of those articles is highly skewed, following a power law, so the mean is pulled upward by the most extreme values. As a result, only a small fraction of the most cited articles (roughly 10-20%) accounts for most of the journal's impact factor, while the rest contribute little or even drag it down. The article therefore shows that some articles have a normalized citation impact (Relative Citation Ratio, RCR) higher than the impact of the journal that published them.
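To see why a mean-based indicator like the JIF says little about individual articles, a small simulation helps: draw citation counts from a heavy-tailed distribution and compare each article with the journal-level mean. The sketch below is not taken from the study; the lognormal distribution and its parameters are arbitrary assumptions chosen only to mimic a skewed citation distribution.

```python
# Minimal sketch (assumed lognormal citation counts, not data from the study):
# with a heavy-tailed distribution, most articles fall below the journal-level
# mean, and a small top fraction contributes most of the citations.
import numpy as np

rng = np.random.default_rng(42)

# Simulated 2-year citation counts for 1,000 articles in a hypothetical journal
citations = rng.lognormal(mean=1.0, sigma=1.2, size=1000).round()

jif_like = citations.mean()                 # the JIF is essentially this mean
below_mean = (citations < jif_like).mean()  # share of articles under the "JIF"

# Share of all citations contributed by the top 20% most-cited articles
top20 = np.sort(citations)[::-1][: len(citations) // 5]
top20_share = top20.sum() / citations.sum()

print(f"Journal-level mean (JIF-like): {jif_like:.2f}")
print(f"Articles cited below that mean: {below_mean:.0%}")
print(f"Citations contributed by the top 20% of articles: {top20_share:.0%}")
```

With these assumed parameters, around two-thirds of the simulated articles sit below the journal-level mean, which is the statistical point the article makes with real citation data.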
However, the study is limited to the field of biomedicine (it uses PubMed as its data source). Although the same phenomenon is to be expected in other disciplines, we do not know whether it is more or less pronounced in particular areas. Another limitation, which could explain why the study finds more articles above the journal's impact factor than below it, is that it excludes authors who have received NIH (National Institutes of Health) funding. This could bias the sample against established US authors, who are more likely to produce high-quality, above-average results.
The issue is not that these results discredit journals themselves, but rather the practice of evaluating articles through them. Journal impact factors serve their purpose of providing a parameter with which to assess journals: a higher impact factor indicates that a journal publishes higher-quality results. The factor also has a gravitational effect (a consequence of the mistaken evaluation of articles on the basis of their journal): journals with higher impact factors tend to attract more submissions and can therefore select better papers, which further reinforces their impact. There is a pull effect.