Misinformation

A review identifies what most influences polarization on health issues

A study by the University of Cádiz has identified six factors that drive polarization around health-related issues, particularly during crises such as the COVID-19 pandemic: political ideology, misinformation, social media dynamics, trust in institutions and professionals, risk perception, and socioeconomic factors. This review, published in Science Advances, brings together the conclusions of 90 previous studies and analyzes how these determinants exacerbate health inequalities and influence compliance with public health measures.


The language models used by tools such as ChatGPT fail to identify users' erroneous beliefs

Large language models (LLMs) do not reliably identify people's false beliefs, according to research published in Nature Machine Intelligence. The study posed 13,000 questions about facts and personal beliefs to 24 such models, including DeepSeek and GPT-4o, the model used by ChatGPT. The most recent LLMs were more than 90% accurate at judging whether factual statements were true or false, but they struggled to distinguish true from false beliefs when responding to sentences beginning with ‘I believe that’.


Controversy over the Trump administration's proposal of leucovorin as a treatment for autism

At a press conference at the White House on Monday, Donald Trump and health authorities linked the use of paracetamol during pregnancy to cases of autism. They also recommended leucovorin as a treatment for autism. Immediately afterwards, the US Food and Drug Administration (FDA) announced in a press release that it had begun the approval process for leucovorin calcium tablets for patients with cerebral folate deficiency. “People with cerebral folate deficiency have been observed to have developmental delays with autistic characteristics, seizures, and movement and coordination problems,” the agency said. The updated indication for the drug, which is debated by the scientific community, would authorize its use to treat children with autism spectrum disorder.

The Spanish population trusts science, but demands more communication and citizen engagement, according to FECYT's survey on social perception

Spanish citizens trust science and researchers, and want them to be more involved in the issues that affect people's lives. Television and social media are the channels they use most to obtain information on these topics. A total of 81.4% recognise that climate change is a serious problem, and, although more than 80% use AI, there is concern about its risks and governance. These figures come from the 2024 edition of FECYT's biennial Social Perception of Science and Technology Survey (EPSCT).

A study analyses how competition between media outlets can lead them to spread misinformation

Competition to attract audiences encourages media outlets to spread misinformation, according to a study published in Science Advances. The research applies a mathematical framework, a zero-sum game, to analyse the trade-off between the immediate benefits to outlets and the long-term damage. The model showed how an ‘arms race’ can emerge between news sources: when one player resorts to misinformation, the other has to do the same in order to compete.
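As a rough illustration of that arms-race logic (not the actual model from the Science Advances paper), the sketch below sets up a hypothetical two-outlet, zero-sum game over audience share, with invented payoff numbers in which misinformation attracts extra audience in the short term. Under those assumptions, ‘misinform’ is each outlet's best response to whatever the competitor does, so both end up misinforming.

```python
# Illustrative sketch only: a hypothetical 2x2 audience-share game between two
# competing outlets, NOT the model from the study. Payoff numbers are invented.
# The game is zero-sum: the column outlet's share is 1 minus the row outlet's.

# Row outlet's audience share for (row strategy, column strategy).
payoff_row = {
    ("accurate",  "accurate"):  0.50,
    ("accurate",  "misinform"): 0.35,  # assumed: misinformation draws audience away
    ("misinform", "accurate"):  0.65,
    ("misinform", "misinform"): 0.50,
}

strategies = ["accurate", "misinform"]

def best_response(opponent_strategy):
    """Row outlet's best reply to a fixed strategy of the competing outlet."""
    return max(strategies, key=lambda s: payoff_row[(s, opponent_strategy)])

for opp in strategies:
    print(f"If the competitor plays {opp!r}, best response: {best_response(opp)!r}")

# With these assumed numbers, 'misinform' is the best response to either strategy,
# so both outlets end up misinforming: the 'arms race' described in the study.
```

In this invented example, mutual misinformation is the stable outcome even though neither outlet ends up with a larger audience than under mutual accuracy; the study's actual model is richer, weighing those immediate competitive gains against long-term damage.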


Are extreme weather events the only threat from climate change?

Despite the overwhelming evidence, climate change denial messages have found a megaphone on certain social networks. A key element of this disinformation strategy is the attempt to discredit the scientific community in general, and climate researchers and weather forecasters in particular. However, the study of the climate and the prediction and monitoring of adverse weather phenomena are in the interest of society as a whole.


Hate speech increased by 50% on the social network X after its purchase by Elon Musk

A team of researchers from the University of California (USA) analyzed the presence of hate speech on the social network X (formerly Twitter) from its purchase by Elon Musk in October 2022 until June 2023. They found that racist, homophobic and transphobic speech increased by approximately 50% over this period. In addition, the presence of bots and fake accounts did not decrease, contrary to Musk's own promises. The results are published in the journal PLOS ONE.


Outrage facilitates the spread of misinformation on social networks

According to a study published in Science, social media content containing misinformation provokes more moral outrage than content containing accurate information, and this outrage facilitates the spread of misinformation. The results also showed that people are more likely to share this outrage-provoking misinformation without reading it first.


Conversing with a chatbot helps to reduce beliefs in conspiracy theories

People who believe in conspiracy theories can revise their opinions after conversing with a chatbot that presents them with "sufficiently compelling evidence", according to a study of 2,190 people published in Science. This challenges other hypotheses, which propose that believing in conspiracies satisfies important psychological needs and that offering information is not enough to change these beliefs.


Reaction: Study warns of lack of literacy about deepfakes during wartime

A research project has analysed the Twitter discourse related to deepfakes in the context of the Russia-Ukraine war in 2022, studying almost 5,000 tweets about these videos. Deepfakes are synthetic media that mix an original video with content generated by artificial intelligence, often with the aim of mimicking a person. The research, published in PLOS ONE, looks at the lack of literacy about deepfakes and the scepticism and misinformation that can arise when real media is mistakenly identified as fake. The authors warn that efforts to raise public awareness of this phenomenon can undermine trust in legitimate media, which may also come to be seen as suspect.
