artificial intelligence

Generative AI expansion could create up to five million tonnes of e-waste

The expansion of generative artificial intelligence, and in particular of large language models, could generate between 1.2 and 5 million tonnes of accumulated electronic waste between 2020 and 2030, according to a study published in Nature Computational Science. The study estimates the mass of waste from hardware components such as processing units, storage units and power supply systems.

A tool capable of adding a watermark to AI-generated text to detect it has been developed

A study published in the journal Nature describes a tool capable of inserting watermarks into text generated by large language models, the artificial intelligence (AI) systems behind chatbots, making it easier to identify and track artificially created content. The tool uses a sampling algorithm to subtly bias the model's choice of words, embedding a signature that detection software can recognise.
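The general idea of a sampling-based watermark can be illustrated with a toy sketch. Everything here is hypothetical for illustration: the four-candidate "model", the scoring function and all names are assumptions, not the tool described in the study, which operates on a real model's token probabilities. A keyed hash assigns each candidate word a pseudo-random score, generation is biased toward high-scoring words, and detection checks whether a text's average score is suspiciously high.

```python
import hashlib
import random


def g_score(key: str, context: str, token: str) -> float:
    """Pseudo-random score in [0, 1) derived from a secret key,
    the preceding context and the candidate token."""
    h = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64


def generate(key, vocab, length, rng, watermark=True):
    """Toy 'language model': at each step it proposes four equally
    likely candidate words; the watermark biases which one is kept."""
    text = []
    for _ in range(length):
        candidates = rng.sample(vocab, 4)
        context = " ".join(text[-3:])
        if watermark:
            # bias the choice toward the highest-scoring candidate
            token = max(candidates, key=lambda t: g_score(key, context, t))
        else:
            token = rng.choice(candidates)
        text.append(token)
    return text


def detect(key, text):
    """Mean score over the text: about 0.5 for ordinary text,
    noticeably higher when the text was watermarked with this key."""
    scores = [g_score(key, " ".join(text[max(0, i - 3):i]), tok)
              for i, tok in enumerate(text)]
    return sum(scores) / len(scores)


vocab = [f"w{i}" for i in range(1000)]
rng = random.Random(42)
marked = generate("secret", vocab, 200, rng, watermark=True)
plain = generate("secret", vocab, 200, rng, watermark=False)
print("watermarked:", round(detect("secret", marked), 2),
      "plain:", round(detect("secret", plain), 2))
```

Note that without the secret key the scores look uniform, so only the key holder can run the detector; real schemes must additionally preserve text quality, which this sketch ignores.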

Open or Closed Artificial Intelligence: How Science Suffers When Technology is in the Hands of Big Companies

Two of the 2024 Nobel Prize winners in Chemistry are employees of Google DeepMind, and in May they caused significant unrest among their colleagues. Hassabis and Jumper announced in *Nature* the results of their AlphaFold 3 model, which has applications in drug design; however, they published it in closed form, with reviewers not even having access to the system, contradicting basic principles of scientific publication. We risk leaving the transformative potential of AI under the control of big technology companies.

Nobel Prize in Chemistry for Baker, Hassabis and Jumper for computational protein design and structure prediction

The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Chemistry to David Baker for computational protein design, which makes it possible to construct proteins with functions not found in nature, and jointly to Demis Hassabis and John M. Jumper of Google DeepMind for the development of AlphaFold2, which can predict the structure of the 200 million known proteins at high speed.

Nobel Prize in Physics for Hinton and Hopfield for discovering the basis of machine learning with artificial neural networks

The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Physics to researchers John J. Hopfield and Geoffrey E. Hinton for discovering the foundations that enable machine learning with artificial neural networks. This technology, inspired by the structure of the brain, is behind what we now call ‘artificial intelligence’.

The reliability of large language models, such as generative AI, is getting worse

Large language models, artificial intelligence (AI) systems based on deep learning such as the generative AI behind ChatGPT, are not as reliable as users expect. This is one of the conclusions of an international study published in Nature involving researchers from the Polytechnic University of Valencia. According to the authors, compared with the first models and in certain respects, reliability has worsened in the most recent ones, such as GPT-4 with respect to GPT-3.

AstraZeneca's new AI tool could predict more than a thousand diseases before diagnosis

A study published today in Nature Genetics examines AstraZeneca's new tool, MILTON, which uses artificial intelligence to detect biomarkers and predict diseases before they are diagnosed. According to this analysis, the tool could potentially predict over a thousand diseases and may even be more effective than the currently available polygenic risk scores.

AI models trained on AI-generated data can collapse

Using artificial intelligence (AI)-generated datasets to train future generations of machine learning models can degrade their output, a phenomenon known as ‘model collapse’, according to a paper published in Nature. The research shows that, within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models.
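The dynamic can be illustrated with a toy simulation (purely an illustrative assumption, not the paper's experiment, which used real language models). Here the "model" is just a normal distribution fitted to its training data, and, like a sampler that avoids low-probability outputs, each generation never emits values far out in the tails. Trained repeatedly on its own output, the fitted distribution loses its tails and its spread shrinks generation after generation:

```python
import random
import statistics


def sample_truncated(rng, mu, sigma, n, cut=2.0):
    """Sample from the fitted 'model', but never emit low-probability
    outputs (values beyond `cut` standard deviations from the mean)."""
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if abs(x - mu) <= cut * sigma:
            out.append(x)
    return out


rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(200)]  # original "human" data
stds = []
for generation in range(20):
    mu = statistics.fmean(data)      # "train" a model on the current data
    sigma = statistics.pstdev(data)
    stds.append(sigma)
    # the next generation is trained only on this model's own output
    data = sample_truncated(rng, mu, sigma, 200)

print(f"std dev: generation 0 = {stds[0]:.2f}, generation 19 = {stds[-1]:.2f}")
```

The printed spread falls sharply across generations: rare-but-real content disappears first, and the distribution drifts toward a narrow caricature of the original data, which is the intuition behind model collapse.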

Study claims current AI systems are already capable of tricking and manipulating humans

A review article published in Patterns claims that many AI systems have already learned to deceive humans, even those trained to be honest. The authors cite as an example Meta's CICERO model, which wins at the game Diplomacy by playing dirty. The researchers describe potential risks in areas related to security, fraud and election rigging, and call on governments to urgently develop strict regulations.

Reactions: the Prime Minister announces the design of a foundational artificial intelligence language model trained in Spanish

The President of the Government, Pedro Sánchez, announced last night, at the welcome dinner of the GSMA Mobile World Congress (MWC) Barcelona 2024, the construction of a foundational artificial intelligence language model, trained in Spanish and the co-official languages, with open and transparent code, and with the intention of incorporating Latin American countries. For its development, the Government will work with the Barcelona Supercomputing Center and the Spanish Supercomputing Network, together with the Royal Spanish Academy and the Association of Spanish Language Academies.
