Reaction to an interface capable of reconstructing long sentences from brain images

A US team has developed a non-invasive language decoder: a brain-computer interface that aims to reconstruct whole sentences from functional magnetic resonance imaging (fMRI). This is not the first attempt to create such a decoder; some existing ones are invasive, requiring neurosurgery, while others are non-invasive but can only identify words or short phrases. In this case, as reported in the journal Nature Neuroscience, the team recorded the brain responses, captured with fMRI, of three participants as they listened to 16 hours of stories. The authors used these data to train the model, which was then able to decode other fMRI data from the same person listening to new stories. The team notes that a model trained on one person's data does not decode another person's data well, suggesting that the subject's cooperation is required for the model to work properly.

01/05/2023 - 17:00 CEST
 
Expert reactions


David Rodríguez-Arias Vailhen

Vice-director of FiloLab and professor of Bioethics at the University of Granada

Science Media Centre Spain

We have all deplored, at some point, not having been able to record or write down a thought: a brilliant idea or beautiful image we had and could not remember, because we did not have a pen at hand, or because when we transcribed it, it lost the vividness and precision with which we imagined it. Some musicians would kill to be able to transcribe the notes of the melodies they imagine while dreaming, but which escape them irreversibly as soon as they wake up. Wouldn't it be fabulous if there were machines able to read minds and transcribe thought?  

The study 'Semantic reconstruction of continuous language from non-invasive brain recordings' brings us closer to that scenario. This research demonstrates the ability to 'decode' the minds of people who can communicate without articulating words, to the point where it is possible to determine whether they are telling the story of Little Red Riding Hood or the Three Little Pigs. To get there, a combination of technologies (magnetic resonance imaging and a receiver of neural signals sent to a brain-computer interface, or BCI) must be applied to them simultaneously, and they must undergo a training process that takes hours. But as advances go, it is not a bad one. The findings go beyond what BCIs have achieved so far, which were already capable of much more rudimentary translations of thought. Philosophically speaking, the findings offer a possible way to overcome the perplexity created by the transformation of brain anatomy and physiology into symbols and thoughts: to understand the emergence of the mind (which is not just language).

As is often the case with any technological advance, this one also raises a warning of responsibility. If a machine can end up reading your mind once trained, it could become possible, involuntarily and without your consent (for example, while you sleep), for it to translate snippets of your thoughts. Our mind has so far been the guardian of our privacy. We can jealously keep certain thoughts to ourselves, the most unspeakable ones, if we want to. This discovery could be a first step towards compromising that freedom in the future. It would then be better for us to only dream up beautiful melodies.

The author has declared they have no conflicts of interest
Publications
Semantic reconstruction of continuous language from non-invasive brain recordings
  • Research article
  • Peer reviewed
  • People
Journal
Nature Neuroscience
Publication date
Authors

Jerry Tang et al.
