Reaction to an interface capable of reconstructing long sentences from brain images
A US team has developed a non-invasive language decoder: a brain-computer interface that aims to reconstruct whole sentences from functional magnetic resonance imaging (fMRI). This is not the first attempt to create such a decoder, but existing ones are either invasive, requiring neurosurgery, or non-invasive but limited to identifying single words or short phrases.

As reported in the journal Nature Neuroscience, the team recorded the brain responses, captured with fMRI, of three participants as they listened to 16 hours of stories. The authors used these data to train the model, which was then able to decode further fMRI data recorded while the same participant listened to new stories. The team notes that a model trained on one person's data decodes another person's data poorly, suggesting that the subject's cooperation is required for the decoder to work properly.