Pablo Haya Coll

Position

Researcher at the Computational Linguistics Laboratory of the Autonomous University of Madrid (UAM) and director of Business & Language Analytics (BLA) at the Institute of Knowledge Engineering (IIC)

A tool that watermarks AI-generated text to make it detectable has been developed

A study published in the journal Nature describes a tool capable of inserting watermarks into text generated by large language models (artificial intelligence systems), making it easier to identify and track artificially created content. The tool uses a sampling algorithm to subtly bias the model's word choices, embedding a signature that detection software can recognise.
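
As a rough illustration of how this kind of sampling-based watermarking can work, the sketch below partitions a toy vocabulary into a pseudo-random 'green list' seeded by the previous token and boosts those tokens' logits before sampling; a detector then checks whether the fraction of green tokens is suspiciously high. This is a minimal green-list scheme for illustration only, not the specific algorithm described in the Nature paper, and the vocabulary, logits and bias value are all invented.

```python
# Minimal sketch of watermarking by biasing token sampling.
# Illustrative only: toy vocabulary, made-up logits, and a simple
# green-list scheme rather than the paper's actual algorithm.
import hashlib
import numpy as np

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)
    k = int(len(VOCAB) * fraction)
    return set(rng.choice(VOCAB, size=k, replace=False))

def sample_watermarked(logits: np.ndarray, prev_token: str, bias: float = 2.0) -> str:
    """Boost the logits of green-list tokens, then sample from the softmax."""
    greens = green_list(prev_token)
    boosted = logits + bias * np.array([t in greens for t in VOCAB], dtype=float)
    probs = np.exp(boosted - boosted.max())
    probs /= probs.sum()
    return str(np.random.default_rng().choice(VOCAB, p=probs))

def green_fraction(tokens: list) -> float:
    """Fraction of tokens drawn from the green list of their predecessor.
    Values well above the expected 0.5 suggest watermarked text."""
    hits = sum(t in green_list(prev) for prev, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

On text of any length, watermarked output pushes the green fraction well above the 0.5 expected for ordinary text, which is the statistical signature the detector looks for.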

Nobel Prize in Physics awarded to Hinton and Hopfield for discovering the basis of machine learning with artificial neural networks

The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Physics to researchers John J. Hopfield and Geoffrey E. Hinton for discovering the foundations that enable machine learning with artificial neural networks. This technology, inspired by the structure of the brain, is behind what we now call ‘artificial intelligence’.

The reliability of large language models, such as generative AI, is worsening

Large language models, the artificial intelligence (AI) systems based on deep learning such as the generative AI behind ChatGPT, are not as reliable as users expect. This is one of the conclusions of international research published in Nature involving researchers from the Polytechnic University of Valencia. According to the authors, compared with the earliest models and in certain respects, reliability has worsened in the most recent ones, such as GPT-4 relative to GPT-3.

AI models trained on AI-generated data can collapse

Using artificial intelligence (AI)-generated datasets to train future generations of machine learning models can contaminate their results, a phenomenon known as ‘model collapse’, according to a paper published in Nature. The research shows that, within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models.
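
The dynamic is easy to reproduce in miniature. The sketch below is a hypothetical numerical toy, not the paper's experimental setup: it fits a Gaussian to data, resamples from the fitted model, and repeats, so each generation trains only on the previous generation's output.

```python
# Toy illustration of 'model collapse': each generation of a simple
# Gaussian "model" is fitted to samples from the previous generation.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)  # generation 0: real data

for gen in range(1, 11):
    mu, sigma = data.mean(), data.std()      # "train" on the current data
    data = rng.normal(mu, sigma, size=1000)  # next generation sees only synthetic data
    print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Over the generations the estimated parameters drift away from the originals and the spread tends to shrink, a small-scale analogue of the tails of the true distribution being forgotten.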

Reactions: the Prime Minister announces the design of a foundational artificial intelligence language model trained in Spanish

The Prime Minister, Pedro Sánchez, announced last night at the welcome dinner of the GSMA Mobile World Congress (MWC) Barcelona 2024 the construction of a foundational artificial intelligence language model, trained in Spanish and the co-official languages, with open and transparent code, and with the intention of incorporating Latin American countries. For its development, the Government will work with the Barcelona Supercomputing Center and the Spanish Supercomputing Network, together with the Royal Spanish Academy and the Association of Spanish Language Academies.

Reaction: ChatGPT influences users with inconsistent moral judgements

A study finds that ChatGPT makes contradictory moral judgements and that users are influenced by them. Researchers asked questions such as: “Would it be right to sacrifice one person to save five others?” Depending on the phrasing of the question, ChatGPT sometimes answered in favour of the sacrifice and sometimes against it. Participants were swayed by ChatGPT's statements and underestimated the chatbot's influence on their own judgement. The authors argue that chatbots should be designed to decline giving moral advice, and they stress the importance of improving users' digital literacy.

Reactions: ChatGPT algorithms could help identify Alzheimer's cases

Artificial intelligence algorithms based on GPT-3, the OpenAI language model behind ChatGPT, can identify speech features that predict the early stages of Alzheimer's disease with 80 per cent accuracy. The neurodegenerative disease causes a loss of the ability to express oneself that the algorithms can recognise, according to a study published in the journal PLOS Digital Health.
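
In outline, such a pipeline represents each speech transcript as an embedding vector and trains a simple classifier on top of it. The sketch below is hypothetical and self-contained: random vectors and labels stand in for the GPT-3 transcript embeddings and patient data used in the study, so its accuracy here is only chance.

```python
# Hypothetical sketch of the general approach: embed transcripts,
# then train a simple classifier. Random data stands in for the
# study's GPT-3 embeddings and clinical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, dim = 200, 64
X = rng.normal(size=(n_samples, dim))   # stand-in transcript embeddings
y = rng.integers(0, 2, size=n_samples)  # synthetic labels: 1 = patient, 0 = control

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.5 here; ~0.8 reported on real data
```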

Reactions: artificial intelligence linguistic analysis shows that the term 'people' is biased towards 'men'

A study of more than 630 billion words (mostly in English) used on 3 billion web pages concludes that the term 'people' is not gender-neutral: its meaning is biased towards the concept 'men'. The authors write in Science Advances that they see this as "a fundamental bias in the collective view of our species", relevant because the concept 'people' figures "in almost all societal decisions and policies".
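
One standard way to quantify this kind of bias is to compare cosine similarities between word embeddings. The sketch below uses random placeholder vectors purely to show the measurement; the study's embeddings were derived from the web corpus itself, which is what produces the reported skew towards 'men'.

```python
# Sketch of measuring a gender skew in word embeddings via cosine
# similarity. The vectors here are random placeholders, not trained
# embeddings, so the printed value is meaningless by itself.
import numpy as np

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in ("people", "men", "women")}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

skew = cosine(emb["people"], emb["men"]) - cosine(emb["people"], emb["women"])
print(f"skew towards 'men': {skew:+.3f}")  # positive = 'people' closer to 'men'
```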
