Expert reactions

Enrique Orduña Malea

Full professor at the Polytechnic University of Valencia and member of the Evaluation and Monitoring Committee of the ANECA state accreditation system

The study addresses an interesting issue: verifying the existence of highly cited publications in journals with a low impact factor. This raises relevant questions for the design of policies for evaluating scientific activity, with clear effects on recruitment and promotion processes in research centres.

However, this issue has already been addressed at length in the scientific literature. The value of the work lies in its large dataset of biomedical publications and the wide variety of variables analysed.

On the downside, I believe its structure is not the most appropriate for a social science paper: although it is published in PLoS Biology and analyses biomedical publications, it is a social science article. In addition, it presents a great deal of data and many graphs, but with little depth or discussion from a bibliometric perspective.

The article does not engage with the existing discussion in the literature on the use of the impact factor in evaluation processes, which is extensive, diverse and involves different schools of thought. A more in-depth literature review is needed to contextualise results that concern a specific field of knowledge.

Furthermore, the article does not take into account the reforms of research evaluation carried out in many countries in recent years. In Spain, for example, the university sector no longer evaluates on the basis of journal impact factors: both the accreditation process and the six-year research assessment periods (sexenios) were reformed by ANECA and now consider a wide range of indicators and signs of quality beyond the standing of the journal. This change has been officially in force in Spain for two years.

It is true that the impact factor still carries more weight in competitive processes in some fields, but using it as a determining factor would run counter to the principles of COARA [the Coalition for Advancing Research Assessment], to which many universities and centres have adhered.

On the other hand, given that journals with a high impact factor are elitist journals, with high APCs [article processing charges] and very specific topics and fields, the publications in these journals present biases (by age, gender, race, etc.). If we broaden the set of journals considered, these biases are softened, although not eliminated. This result is obvious, but it is important that the authors highlight it and provide figures to support it.

The main limitation of the study, in my opinion, is the procedure for determining which works count as highly cited. The authors use the median RCR [relative citation ratio] of publications in journals with an impact factor greater than 15 as the benchmark. I believe this approach is a mistake.
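As I read the description, the procedure amounts to the following: take the papers published in journals with an impact factor above 15, compute the median of their RCRs, and use that value as the bar for "highly cited". A minimal sketch of that step, using invented (journal IF, RCR) pairs rather than the study's data:

```python
from statistics import median

# Hypothetical (journal_impact_factor, paper_rcr) pairs; illustrative only.
papers = [(2.1, 0.4), (31.0, 2.8), (18.5, 1.9), (4.0, 3.5), (22.3, 0.7), (1.3, 2.2)]

# Benchmark: median RCR among papers published in journals with IF > 15.
threshold = median(rcr for jif, rcr in papers if jif > 15)  # median of [2.8, 1.9, 0.7] -> 1.9

# Any paper at or above the benchmark counts as "highly cited",
# regardless of the impact factor of its own journal.
highly_cited = [(jif, rcr) for jif, rcr in papers if rcr >= threshold]
print(highly_cited)  # [(31.0, 2.8), (18.5, 1.9), (4.0, 3.5), (1.3, 2.2)]
```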

It is well known that the distribution of citations among the articles in a journal is highly skewed: a few papers receive many citations, while the majority receive few or none. This means that a journal's impact factor is built on the impact of a handful of papers. Although the authors use a normalised indicator (RCR), this does not avoid the problem. I do not see how the median RCR of publications in these journals can serve as a standard of “citability”.
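To make the skewness point concrete, here is a minimal sketch with invented citation counts (not data from the study), showing how a couple of highly cited papers can pull a journal's mean citation rate, which is what the impact factor reflects, far above what a typical paper in that journal receives:

```python
from statistics import mean, median

# Hypothetical citation counts for one journal's articles; the heavy
# right tail (two "hit" papers) mimics real citation distributions.
citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 120, 310]

print(f"mean (what an IF-style average reflects): {mean(citations):.1f}")  # 33.2
print(f"median (what a typical paper receives):   {median(citations):.0f}")  # 3
```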

There are other methods of comparison that I believe would have worked better. In any case, the result is obvious: we already know that there are highly cited articles in journals with low impact factors, and uncited articles in journals with high impact factors.

On the other hand, the underlying problem is that the authors continue to rely on citations as the fundamental element of evaluation. Publication-level metrics go beyond citations, which are important but not the holy grail. Citations can occur for many reasons, so the mere accumulation of citations does not necessarily constitute “impact” or “influence”. To be carried out rigorously, evaluation processes require a wide variety of indicators as well as expert judgement; they will also depend on the objectives of the evaluation, which can vary greatly.

Today, mere co-authorship (in any journal, whether high impact or not) or citation is not synonymous with quality, impact or reputation.

A high impact factor is indicative of how difficult it is to publish in a journal: it signals the journals in which the community is most eager to publish its work. These are journals with a powerful brand image, accumulated prestige, and a great capacity for dissemination and attracting attention.

This undoubtedly influences people's behaviour: researchers prefer to cite works published in high-impact journals because it can help convince those evaluating their work, or even attract the attention of the authors they cite, and it also allows them to display a degree of “prestige” in evaluation processes. However, this mark of quality and reputation is a social construct.

High-impact journals are attractive venues for good work, and it is assumed that peer review there will be more rigorous than elsewhere, since these journals receive many submissions and therefore have high rejection rates. That, however, is the theory: it guarantees neither that the review will be of high quality (which depends on many variables) nor that a paper published in one of these journals will be relevant or cited.

It should be understood that a journal is a conglomeration of good, average and poor articles. This does not mean that journal metrics are useless: they can be informative about editorial quality, or even about the capacity to publish relevant work within a discipline. But the ability to publish in high-impact journals is only one small part of what should be considered when evaluating an individual. The article does not question the value of high-impact journals, but rather evaluation processes that focus solely on publication in certain journals.
