1. Not making the subject of the study clear from the start
Studies are made up of experiments whose design always has strengths and weaknesses. One of the first questions to consider is what the subject of the study is. Is it an animal study? A survey of university students? A mathematical model? An in vitro experiment with cells? Is it a clinical trial? If so, what stage is it at?
For example, a study might find that orange juice eliminates SARS-CoV-2, but if it has been done with a cell culture in laboratory conditions, it is important to make it clear that it is not a therapy with clinical application at this stage.
2. Confusing correlation with causation
The conclusions of a study often show a correlation between variables, but this association does not imply that one causes the other. Although papers often make this clear, the distinction is not always well communicated by the authors themselves and is not always reflected in news articles.
For example, a study might conclude that voters of a certain party are more likely to be overweight. This does not mean that gaining weight will make us vote for that party, nor does it mean that voting for that party will make us fat.
3. Making impossible extrapolations
Every scientific study attempts to answer one or more specific questions, but this does not mean that it can answer others that go beyond the scope of the research.
For example, the fact that a new drug works in mice does not imply that it will work in humans. The fact that a survey yields certain results among students at a university in the United States does not mean that the results would be identical if it were carried out on the Spanish population.
4. Ignoring the limitations of the study
All studies have limitations and most point out some of them in the text itself. Sometimes these limitations may invalidate the conclusions of the study or require greater caution when communicating the results. It is important to look for them in the paper and to talk to independent researchers who can highlight any limitations or problems with the work.
5. Reproducing press releases without a critical eye
One mission of the communication offices of universities, research centres and companies is to inform society about the work of their research staff through media coverage. To this end, they send out press releases summarising the articles and making it easier for journalists to take them into account as material for their reports. Sometimes the headlines of press releases are as attractive as those of any media outlet; the problem is that, like the media, they can also resort to clickbait, stripping the information of context or omitting the limitations of the work. Science journalists should read press releases critically, too.
6. Justifying any claim with the phrase “according to a study”
A science newsroom receives hundreds of press releases a day about research results. Part of the job of science journalists is to act as ‘selectors’ of the science information that reaches the general public. And be careful: a single study is not the truth, nor does it need to be. Peer review is not a certificate of quality or irrefutability. It is an indicator that the journal editors and reviewers consider the results worthy of being published in a particular journal and discussed by the academic community. No more, no less.
7. Being too quick to believe that the study is “revolutionary”
Consensus in science is a complicated thing that is slowly forged over years and decades of research. It is rare for a single paper to overturn decades of evidence, so when a ground-breaking discovery is announced, it is good practice to check whether it really is ground-breaking. We should also be cautious about claiming that something has been achieved “for the first time”. This may well be the case, but it is worth consulting the previous scientific literature to make sure of this.
8. Not being careful with preprints and conference presentations
Both are scientific results that have not yet undergone peer review, i.e. their conclusions have not yet been validated by the rest of the scientific community. Consequently, when reporting on them, there is all the more reason for the journalist to check them against independent sources and to make it clear that they have not been peer-reviewed.
9. Ignoring conflicts of interest
We can all have conflicts of interest related to our work, and scientists are no exception, which is why in science conflicts of interest must be declared. A conflict of interest does not mean that a source is unreliable, but it does need to be taken into account when weighing their opinions on the subject.
10. Writing for your sources and not for your audience
This is a classic mistake, especially when starting out in science journalism. Sometimes, when reporting on very complex topics, we run the risk of being so close to our scientific sources that we forget who we are writing for: the audience, who are neither obliged to be interested in science nor studying for an exam.
When writing complex information for a non-expert audience, it is important to keep this rule in mind: the more comprehensive, the less clear. If we write a news story with all the details of the research, we will certainly please the scientist who has helped us with the article, but our audience will flee elsewhere. And while readers have the right to leave the site if they are not interested in our information, a journalist does not have the right to chase readers away.
So, it is worth remembering Tim Radford's ‘commandment’ from “A manifesto for the simple scribe – my 25 commandments for journalists”, published in The Guardian: “You are not writing to impress the scientist you have just interviewed, nor the professor who got you through your degree, nor the editor who foolishly turned you down, or the rather dishy person you just met at a party and told you were a writer. Or even your mother. You are writing to impress someone hanging from a strap in the tube between Parson's Green and Putney, who will stop reading in a fifth of a second, given a chance.”