What's on your mind? Neuroscientists may one day find out

10 Mar, 2008

Venturing into the preserve of science fiction and stage magicians, scientists in the United States said on March 5 that they had made extraordinary progress towards reading the brain.
The researchers said they had been able to decode signals in a key part of the brain to identify images seen by a volunteer, according to their study, published by the British journal Nature.
The tool used by the University of California at Berkeley neuroscientists is functional magnetic resonance imaging (fMRI), a non-invasive scanner that detects minute flows of blood within the brain, thus highlighting which cerebral areas are triggered by light, sound and touch. Their zone of interest was the visual cortex - a region at the back of the brain that reconstitutes images sent by the retina.
Using two of their number as volunteers, the team built a computational model based on telltale blood-flow patterns in three key areas of the visual cortex. The signatures were derived from 1,750 images of objects, such as horses, trees, buildings and flowers, that were flashed up in front of the subjects.
Using this model, the programme then took a set of 120 brand-new pictures and predicted what kind of fMRI patterns each would make in the visual cortex. After that, the volunteers themselves looked at the 120 new pictures while being scanned. The computer then matched the measured brain activity against the predicted brain activity, and picked the image that it believed was the closest match.
The decoder notched up a 92-percent success rate with one volunteer, and accuracy was 72 percent with the other. The probability of this happening on the basis of chance - ie the computer picking the right image out of the 120 - is only 0.8 percent.
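The matching step can be sketched in a few lines. The toy code below is only an illustration, not the Berkeley team's actual algorithm: it invents random "predicted" voxel patterns for 120 candidate images, simulates a noisy measured response to one of them, and identifies the image whose prediction correlates best with the measurement. It also computes the 1-in-120 chance baseline quoted above.

```python
import math
import random

random.seed(0)

N_IMAGES, N_VOXELS = 120, 50  # hypothetical sizes for this sketch

# Predicted fMRI pattern for each candidate image (in the real study
# these come from the computational model fitted to the visual cortex).
predicted = [[random.gauss(0, 1) for _ in range(N_VOXELS)]
             for _ in range(N_IMAGES)]

# Simulate the measured activity: the true image's pattern plus noise.
true_index = 42
measured = [v + random.gauss(0, 0.5) for v in predicted[true_index]]

def correlation(a, b):
    """Pearson correlation between two voxel patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Identification: pick the candidate whose predicted pattern best
# matches the measured brain activity.
best = max(range(N_IMAGES), key=lambda i: correlation(measured, predicted[i]))
print(best == true_index)  # almost surely True at this noise level

# Chance baseline: guessing at random among the 120 images.
print(round(100 / N_IMAGES, 1), "percent")
```

With moderate noise the true image's pattern dominates the correlations, which is why the decoder's 92-percent hit rate is so far above the 0.8-percent chance level.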
In an email to AFP, lead author Jack Gallant likened the task to that of a magician who asks a member of the audience to pick a card from a pack, and then figures out which one it was. "Imagine that we begin with a large set of photographs chosen at random," Gallant said.
"You secretly select just one of these and look at it while we measure your brain activity. Given the set of possible photographs and the measurements of your brain activity, the decoder attempts to identify which specific photograph you saw."
The ambitious experiment was taken a stage further, expanding the set of novel images from 120 to up to 1,000. The first volunteer took this test, and accuracy declined, but only slightly, from 92 percent to 82 percent. "Our estimates suggest that even with a set of one billion images - roughly the number of images indexed by Google on the Internet - the decoder would correctly identify the image about 20 percent of the time," said Gallant.
The researchers say the device cannot "read minds," the common term for unscrambling thoughts. It cannot even reconstruct an image, only identify an image that was taken from a known set, they point out. All the same, the potential is enormous, they believe.
Doctors could use the technique to diagnose brain areas damaged by a stroke or dementia, determine the outcome of drug treatment or stem-cell therapy and fling open a door into the strange world of dreams.
And, according to one futuristic scenario, paraplegic patients, by thinking of a series of images whose fMRI patterns are recognised by computer, may one day be able to operate machines by remote control. Even so, brain-reading is hedged with potential controversy.
Within 30 to 50 years, advances could raise fears about breaches of privacy and authoritarian abuse of the kind that dog biotechnology today, the authors say. "No-one should be subjected to any form of brain-reading process involuntarily, covertly, or without complete informed consent," they say.
Although the two subjects were also investigators, there was no risk that the outcome of the test was skewed by suggestion or subliminal cues, co-researcher Kendrick Kay told AFP.
"Decoding performance was evaluated on a dataset that is completely independent of the one used to estimate the computational model," said Kay. "There is no plausible way that a subject could somehow make the evaluation dataset easier to decode by our computational algorithms."