Ismael Ràfols
Researcher on the evaluation of science at Leiden University (Netherlands) and at the Ingenio Institute (CSIC-UPV)
I don't think the article presents anything particularly new. Bibliometrics experts have held for many years (at least since the 1990s) that the number of citations an article receives is a better indicator of scientific visibility than the impact factor or prestige of the journal it appears in.
Furthermore, there has long been a consensus that bibliometrics should not be used to evaluate individual researchers because, from a statistical point of view, there is little signal relative to the background noise and a high probability of bias. Principle 7 of the Leiden Manifesto states that "the individual evaluation of researchers should be based on the qualitative assessment of their research portfolio."
Along these lines, the current consensus on evaluation, according to the Coalition for Advancing Research Assessment (CoARA), whose agreement has been signed by the main European and Spanish organizations, is that a diversity of contributions must be evaluated, most of which cannot be captured by bibliometric indicators. CoARA explains that indicators can be helpful, but the evaluation must be based on peer review.
The article also offers no surprises regarding geographical, gender, social-group, and linguistic biases. There are other, more interesting articles on the subject: for example, one on language, and another that covers language, economics, and gender together.