artificial intelligence

Reaction: Study warns of lack of literacy about deepfakes during wartime

A research project has analysed the Twitter discourse about deepfakes in the context of the Russia-Ukraine war in 2022, studying almost 5,000 tweets related to these videos. Deepfakes are synthetic media that combine an original video with content generated by artificial intelligence, often with the aim of impersonating a person. The research, published in PLoS ONE, examines the lack of literacy about deepfakes and the scepticism and misinformation that can arise when real media is mistakenly identified as fake. The authors warn that efforts to raise public awareness of the phenomenon can undermine trust in legitimate media, which may in turn come to be seen as suspect.

Reaction: An artificial intelligence method shows human-like generalization ability

In a paper published in the journal Nature, two researchers from New York University (USA) and Pompeu Fabra University in Barcelona claim to have demonstrated that an artificial neural network can achieve human-like systematic generalization, that is, the ability to learn new concepts and combine them with existing ones. The finding challenges the 35-year-old claim that neural networks are not viable models of the human mind.
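
As a rough illustration of the capability being tested, the toy Python sketch below learns a new primitive and immediately composes it with an already-known modifier. The pseudowords and rules are hypothetical, no neural network is involved, and this is not the paper's actual benchmark.

    # Toy illustration of systematic generalization: composing a newly
    # learned primitive with a known modifier. Pseudowords and rules
    # are hypothetical, not taken from the study.
    PRIMITIVES = {"dax": "RED", "wif": "GREEN"}   # known word -> symbol
    MODIFIERS = {"twice": 2, "thrice": 3}         # known repetition rules

    def interpret(phrase: str) -> list[str]:
        """Interpret 'PRIMITIVE [MODIFIER]' compositionally."""
        words = phrase.split()
        symbols = [PRIMITIVES[words[0]]]
        if len(words) > 1:
            symbols *= MODIFIERS[words[1]]
        return symbols

    # Learn a brand-new primitive...
    PRIMITIVES["blicket"] = "BLUE"
    # ...and interpret a combination never seen before.
    print(interpret("blicket thrice"))  # ['BLUE', 'BLUE', 'BLUE']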

Reaction: Artificial protein designed to degrade microplastics

Starting from a defence protein of the strawberry anemone, researchers from the Barcelona Supercomputing Center, CSIC and the Complutense University of Madrid have used artificial intelligence and supercomputers to design an artificial protein capable of degrading PET micro- and nanoplastics, such as those shed by plastic bottles. According to the authors, it is 5 to 10 times more efficient than the proteins currently used and works at room temperature. The results are published in the journal Nature Catalysis.

Victims of artificial intelligence

There are no criminal offences with which to punish synthetic pornography, we lack sufficient means to carry out forensic examinations of victims' and perpetrators' phones, and we lack the staff to process these cases quickly. The law could restrict these AI tools to professional and legitimate environments, allow only known developers, and permit only products whose purposes are not criminal and do not violate public order or privacy; such measures would be more than enough.

Reaction: Speech deepfakes fool humans even if they are trained to detect them

Speech deepfakes are synthetic voices produced by machine learning models that can resemble real human voices. Research published in PLoS ONE involving around 500 participants shows that they correctly identified the voices as not real 73% of the time. The results of the study, conducted in English and Mandarin, showed only a slight improvement in those who were specifically trained to spot these deepfakes.

Reaction to two methods using artificial intelligence techniques to forecast the weather

Two studies published in the journal Nature use artificial intelligence (AI) to forecast the weather. One system, trained on nearly 40 years of global weather data, can predict global weather patterns up to a week in advance. The second, called NowcastNet, combines physical rules with deep learning for nowcasting, the immediate prediction of precipitation, including extreme precipitation.

Reaction to an interface capable of reconstructing long sentences from brain images

A US team has developed a non-invasive language decoder: a brain-computer interface that aims to reconstruct whole sentences from functional magnetic resonance imaging (fMRI). This is not the first attempt to create such a decoder: some existing ones are invasive, requiring neurosurgery; others are non-invasive but can only identify words or short phrases. In this case, as reported in the journal Nature Neuroscience, the team recorded the brain responses, captured with fMRI, of three participants as they listened to 16 hours of stories. The authors used these data to train the model, which was then able to decode other fMRI data from the same person listening to new stories. The team reports that a model trained on one person's data does not decode another person's data well, suggesting that the subject's cooperation is required for the model to work properly.
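
To see why subject-specific training matters, here is a minimal sketch, using synthetic data and ridge regression rather than the authors' pipeline, in which a decoder fitted to one simulated subject transfers poorly to another whose brain encodes the same stimuli differently:

    # Minimal sketch, not the study's method: a linear decoder fitted to
    # one simulated subject's responses fails on a second subject whose
    # encoding of the same stimuli differs. All data are synthetic.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_test, n_voxels, n_feats = 400, 100, 50, 10

    def responses(stim, mapping):
        """Simulated fMRI responses: linear encoding plus noise."""
        return stim @ mapping + 0.1 * rng.standard_normal((len(stim), n_voxels))

    stim_train = rng.standard_normal((n_train, n_feats))  # stimulus features
    stim_test = rng.standard_normal((n_test, n_feats))
    map_a = rng.standard_normal((n_feats, n_voxels))      # subject A's encoding
    map_b = rng.standard_normal((n_feats, n_voxels))      # subject B's differs

    decoder = Ridge(alpha=1.0)
    decoder.fit(responses(stim_train, map_a), stim_train)  # train on subject A

    print("same subject R^2:", decoder.score(responses(stim_test, map_a), stim_test))
    print("new subject  R^2:", decoder.score(responses(stim_test, map_b), stim_test))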

Reaction: ChatGPT influences users with inconsistent moral judgements

A study finds that ChatGPT makes contradictory moral judgements and that users are influenced by them. Researchers asked questions such as: "Would it be right to sacrifice one person to save five others?" Depending on the phrasing of the question, ChatGPT sometimes answered in favour of the sacrifice and sometimes against it. Participants were swayed by ChatGPT's statements and underestimated the chatbot's influence on their own judgement. The authors argue that chatbots should be designed to decline to give moral advice, and stress the importance of improving users' digital literacy.
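
The study's core consistency check can be sketched in a few lines of Python: pose paraphrases of the same dilemma and test whether the answers agree. The ask_model function below is a hypothetical stub standing in for a real chatbot call, and the paraphrases and canned answers are illustrative only.

    # Sketch of the consistency check; ask_model is a hypothetical stub,
    # not a real API, and its answers are illustrative only.
    PARAPHRASES = [
        "Would it be right to sacrifice one person to save five others?",
        "Should one person be sacrificed if that saves five lives?",
    ]

    def ask_model(question: str) -> str:
        """Stand-in for a chatbot call; replace with a real API request."""
        # Stubbed so the answer depends on phrasing, as the study observed.
        return "in favour" if question.startswith("Should") else "against"

    answers = {q: ask_model(q) for q in PARAPHRASES}
    for question, answer in answers.items():
        print(f"{answer:10} <- {question}")
    print("consistent:", len(set(answers.values())) == 1)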

Reaction: Artificial intelligence outperforms imaging technicians in assessing cardiac function

Cardiac function can be assessed from the percentage of blood the heart pumps out with each beat, a figure deduced from imaging techniques. In a US clinical trial in which cardiologists judged the results, an artificial intelligence model proved more accurate than the initial examinations performed by imaging technicians. According to the authors, the tool could "save physicians time and minimise the most tedious parts of the process".
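
The percentage described is known as the ejection fraction, computed from the ventricle's volume before and after contraction. A minimal Python example with illustrative volumes (not figures from the trial):

    # Ejection fraction: share of the ventricle's blood expelled per beat.
    # The volumes below are illustrative, not data from the trial.
    end_diastolic_ml = 120.0  # ventricle volume when full
    end_systolic_ml = 50.0    # ventricle volume after contraction

    ef = 100 * (end_diastolic_ml - end_systolic_ml) / end_diastolic_ml
    print(f"ejection fraction: {ef:.0f}%")  # ~58%, within the typical range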
