A research project has analysed Twitter discourse about deepfakes in the context of the 2022 Russia-Ukraine war, studying almost 5,000 tweets about these videos. Deepfakes are synthetic media that combine an original video with content generated by artificial intelligence, often with the aim of mimicking a person. The research, published in PLoS ONE, examines the lack of literacy about deepfakes and the scepticism and misinformation that can arise when real media is mistakenly identified as fake. The authors warn that efforts to raise public awareness of the phenomenon can undermine trust in legitimate media, which may in turn come to be viewed as suspect.
In a study published in the journal Nature, two researchers from New York University (USA) and Pompeu Fabra University in Barcelona claim to have demonstrated that an artificial neural network can achieve systematic generalization similar to that of humans: the ability to learn new concepts and combine them with existing ones. This finding challenges the 35-year-old claim that neural networks are not viable models of the human mind.
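The kind of systematic generalization at issue can be illustrated with a toy instruction-following task. This sketch is purely illustrative: the pseudo-words ("dax", "wif"), the output symbols, and the interpreter below are invented for this example and are not the paper's benchmark or model. The idea is that once a learner knows what the primitives mean and what a function word like "twice" does, it should interpret novel combinations of them correctly.

```python
# Toy compositional-instruction task (illustrative only; invented
# pseudo-words, not the study's actual materials or model).
primitives = {"dax": "RED", "wif": "BLUE"}  # assumed primitive meanings

def interpret(phrase: str) -> list[str]:
    """Map an instruction like 'dax twice wif' to a symbol sequence,
    applying the function word 'twice' compositionally."""
    words = phrase.split()
    out, i = [], 0
    while i < len(words):
        sym = primitives[words[i]]
        if i + 1 < len(words) and words[i + 1] == "twice":
            out += [sym, sym]   # 'twice' repeats the preceding primitive
            i += 2
        else:
            out.append(sym)
            i += 1
    return out

# A combination never seen as a whole is still interpretable
# from its parts:
print(interpret("dax twice wif"))  # → ['RED', 'RED', 'BLUE']
```

Systematic generalization means handling such novel combinations correctly from knowledge of the parts, which is what the study reports the network learning to do.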
Based on a defense protein of the strawberry anemone, researchers from the Barcelona Supercomputing Center, CSIC and the Complutense University of Madrid have designed, through artificial intelligence and the use of supercomputers, an artificial protein capable of degrading PET micro and nanoplastics, such as those used in bottles. According to the authors, its efficiency is between 5 and 10 times higher than that of the proteins currently used and it works at room temperature. The results are published in the journal Nature Catalysis.
There are no criminal offences that punish synthetic pornography, and we lack both sufficient means to carry out forensic examinations of victims' and perpetrators' phones and the staff to process these cases quickly. The law could restrict these AI tools to professional and legitimate environments: offered only by known developers, and for products whose purposes do not violate public order or privacy and are not criminal. Such measures would be more than enough.
Speech deepfakes are synthetic voices produced by machine learning models that can resemble real human voices. Research published in PLoS ONE involving around 500 participants shows that listeners correctly identified the voices as not real only 73% of the time. The results of the study—conducted in English and Mandarin—showed only a slight improvement among those who were specifically trained to spot these deepfakes.
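To give a sense of the precision behind a headline figure like 73%, a back-of-the-envelope confidence interval can be computed with the normal approximation to the binomial. This is a sketch only, not the paper's analysis: it assumes, purely for illustration, that n = 500 independent judgements underlie the proportion.

```python
import math

# Back-of-the-envelope 95% confidence interval for a reported accuracy,
# using the normal approximation to the binomial distribution.
# n = 500 is an assumption for illustration, not the study's design.
p, n = 0.73, 500
se = math.sqrt(p * (1 - p) / n)          # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se    # 95% interval
print(f"{lo:.3f} - {hi:.3f}")            # → 0.691 - 0.769
```

Even under this rough assumption, the interval sits well above the 50% expected from guessing, which is why the result is read as reliable but imperfect detection.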
A randomized trial of more than 80,000 Swedish women has shown that artificial intelligence is as good as two specialized radiologists working together at detecting breast cancer, without increasing false positives and while reducing the radiologists' workload by almost half.
Two studies published in the journal Nature use artificial intelligence (AI) to try to predict the weather. One system, trained on nearly 40 years of global weather data, is capable of predicting global weather patterns up to a week in advance. The second, called NowcastNet, combines physics rules and deep learning for immediate prediction of precipitation, including extreme precipitation.
A US team has developed a non-invasive language decoder: a brain-computer interface that aims to reconstruct whole sentences from functional magnetic resonance imaging (fMRI). This is not the first attempt at such a decoder: some existing ones are invasive, requiring neurosurgery; others are non-invasive but identify only words or short phrases. In this case, as reported in the journal Nature Neuroscience, the team recorded the brain responses—captured with fMRI—of three participants as they listened to 16 hours of stories. The authors used these data to train the model, which was then able to decode other fMRI data from the same person listening to new stories. The team notes that a model trained on one person's data does not decode another person's data well, suggesting that the subject's cooperation is required for the model to work properly.
A study says that ChatGPT makes contradictory moral judgements, and that users are influenced by them. Researchers asked questions such as: "Would it be right to sacrifice one person to save five others?" Depending on the phrasing of the question, ChatGPT sometimes answered in favour of the sacrifice and sometimes against it. Participants were swayed by ChatGPT's statements and underestimated the chatbot's influence on their own judgement. The authors argue that chatbots should be designed to decline giving moral advice, and stress the importance of improving users' digital literacy.
The functioning of the heart can be assessed from the percentage of blood it pumps with each beat, which is deduced from imaging techniques. Using cardiologists' analyses as the reference standard, a US clinical trial concludes that an artificial intelligence model is more accurate than the initial assessments performed by imaging technicians. According to the authors, the tool could "save physicians time and minimise the most tedious parts of the process".
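The percentage described is the left ventricular ejection fraction, the standard quantity derived from such cardiac imaging. Its definition is simple arithmetic on two measured volumes; the example values below are illustrative and not taken from the trial.

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from the end-diastolic
    volume (heart full) and end-systolic volume (heart contracted),
    both measured by imaging."""
    return (edv_ml - esv_ml) / edv_ml * 100

# Illustrative volumes: 120 ml at end-diastole, 50 ml at end-systole.
print(round(ejection_fraction(120, 50), 1))  # → 58.3
```

Estimating these volumes from images is the step where technicians, cardiologists, and the AI model can disagree, which is what the trial compared.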