Artificial intelligence

AI models trained on AI-generated data can collapse

Using artificial intelligence (AI)-generated datasets to train future generations of machine learning models can degrade their results, a phenomenon known as ‘model collapse’, according to a paper published in Nature. The research shows that, within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models.
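
The mechanism can be illustrated with a toy simulation, which is an illustrative assumption and not the paper's actual experiments: if each generation of a model is fitted only to data sampled from the previous generation, estimation error accumulates and the learned distribution drifts and loses diversity. A minimal Python sketch, with a simple Gaussian standing in for the "model":

```python
# Toy illustration of 'model collapse' (hypothetical sketch, not the study's setup):
# each generation fits a Gaussian to a finite sample drawn from the previous
# generation's model instead of real data, so estimation error accumulates.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data distribution.
mu, sigma = 0.0, 1.0
n_samples = 100  # small samples make the degradation visible quickly

for generation in range(1, 11):
    # Train on data produced by the previous generation's model...
    synthetic = rng.normal(mu, sigma, n_samples)
    # ...by re-estimating the model's parameters from that synthetic data.
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Over a handful of generations the estimated standard deviation typically shrinks and the mean wanders away from the original value, a small-scale analogue of the degradation the study describes at the scale of language models.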


Study claims current AI systems are already capable of tricking and manipulating humans

A review article published in Patterns claims that many artificial intelligence systems have already learned to deceive humans, even those trained to be honest. The authors cite as an example Meta's CICERO model, which wins at the strategy game Diplomacy by playing dirty. The researchers describe potential risks related to security, fraud and election rigging, and call on governments to urgently develop strict regulations.


Reactions: the Prime Minister announces a foundational artificial intelligence language model trained in Spanish

The Spanish Prime Minister, Pedro Sánchez, announced last night, at the welcome dinner of the GSMA Mobile World Congress (MWC) Barcelona 2024, the construction of a foundational artificial intelligence language model, trained in Spanish and the co-official languages, built on open and transparent code, and with the intention of incorporating Latin American countries. For its development, the Government will work with the Barcelona Supercomputing Center and the Spanish Supercomputing Network, together with the Spanish Academy of Language and the Association of Spanish Language Academies.


Reactions: EU institutions agree on artificial intelligence law

After lengthy negotiations, the European Commission, the European Parliament and the Council of the EU - which represents the member states - reached a provisional agreement last night on the content of the 'AI Act', the future law that will regulate the development of artificial intelligence in Europe and the first of its kind in the world. The agreement limits the use of biometric identification systems by security forces, includes rules for generative AI models such as ChatGPT and provides for fines of up to 35 million euros for those who violate the rules, among other measures. The text must now be formally adopted by the Parliament and the Council before it becomes EU law.


Reaction: Study warns of lack of literacy about deepfakes during wartime

A research project has analysed the Twitter discourse related to deepfakes in the context of the Russia-Ukraine war in 2022, studying almost 5,000 tweets related to these videos. Deepfakes are synthetic media that combine an original video with content generated by artificial intelligence, often with the aim of impersonating a person. The research, published in PLoS ONE, examines the lack of literacy about deepfakes and the scepticism and misinformation that can arise when real media are mistakenly identified as fake. The authors warn that efforts to raise public awareness of this phenomenon can undermine trust in legitimate media, which may also come to be seen as suspect.


Reaction: An artificial intelligence method shows human-like generalization ability

In a work published in the journal Nature, two researchers from New York University (USA) and Pompeu Fabra University in Barcelona claim to have demonstrated that an artificial neural network can achieve systematic generalization capabilities similar to those of humans, that is, the ability to learn new concepts and combine them with existing ones. This finding calls into question the 35-year-old idea that neural networks are not viable models of the human mind.


Reaction: Artificial protein designed to degrade microplastics

Based on a defense protein of the strawberry anemone, researchers from the Barcelona Supercomputing Center, the CSIC and the Complutense University of Madrid have used artificial intelligence and supercomputers to design an artificial protein capable of degrading PET micro- and nanoplastics, the plastic used in bottles. According to the authors, its efficiency is between 5 and 10 times higher than that of the proteins currently in use, and it works at room temperature. The results are published in the journal Nature Catalysis.


Victims of artificial intelligence

There are no criminal offences to punish synthetic pornography, and we lack both sufficient means to carry out forensic examinations of victims' and perpetrators' phones and the staff to process these cases quickly. The law could limit these AI tools to professional and legitimate environments, to known developers only, and to products whose purposes do not violate public order or privacy and are not criminal; such measures would be more than enough.


Reaction: Speech deepfakes fool humans even when people are trained to detect them

Speech deepfakes are synthetic voices produced by machine learning models that can resemble real human voices. Research published in PLoS ONE, involving around 500 participants, shows that participants correctly identified the synthetic voices 73% of the time. The results of the study, conducted in English and Mandarin, showed only a slight improvement among people who had been specifically trained to spot these deepfakes.
