PARIS: Scientists said Monday they have found a way to use brain scans and artificial intelligence modelling to transcribe “the gist” of what people are thinking, in what was described as a step towards mind reading.

While the main goal of the language decoder is to help people who have lost the ability to communicate, the US scientists acknowledged that the technology raised questions about “mental privacy”.

Aiming to assuage such fears, they ran tests showing that their decoder could not be used on anyone who had not allowed it to be trained on their brain activity over long hours inside a functional magnetic resonance imaging (fMRI) scanner.

Previous research has shown that a brain implant can enable people who can no longer speak or type to spell out words or even sentences.

These “brain-computer interfaces” focus on the part of the brain that controls the mouth when it tries to form words.

Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of a new study, said that his team’s language decoder “works at a very different level”.

“Our system really works at the level of ideas, of semantics, of meaning,” Huth told an online press conference.

It is the first system to be able to reconstruct continuous language without an invasive brain implant, according to the study in the journal Nature Neuroscience.

For the study, three people each spent 16 hours inside an fMRI machine listening to spoken narrative stories, mostly podcasts such as the New York Times’ Modern Love.

This allowed the researchers to map out how words, phrases and meanings prompted responses in the regions of the brain known to process language.

They fed this data into a neural network language model that uses GPT-1, the predecessor of the AI technology later deployed in the hugely popular ChatGPT.

The model was trained to predict how each person’s brain would respond to perceived speech; the decoder then generated candidate word sequences and kept those whose predicted brain responses most closely matched the recorded activity.
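In other words, a language model proposes candidate phrases while an encoding model predicts the fMRI response each candidate should evoke, and the candidate that best matches the recorded activity wins. The following Python sketch illustrates only that scoring step; the embed function, the linear weights W, the voxel and feature counts, and the simulated recording are all hypothetical stand-ins, not the authors’ code:

    import zlib
    import numpy as np

    N_VOXELS = 500   # hypothetical voxel count for the language regions
    N_FEATURES = 64  # hypothetical size of the text-feature vector

    def embed(text):
        # Stand-in for language-model features of the candidate text; the
        # real system derives these from GPT-1, but any deterministic
        # mapping serves for this illustration.
        seed = zlib.crc32(text.encode("utf-8"))
        return np.random.default_rng(seed).normal(size=N_FEATURES)

    # Hypothetical linear encoding model (features -> predicted voxel
    # responses), standing in for the per-participant model fit on the
    # training scans.
    W = np.random.default_rng(0).normal(size=(N_VOXELS, N_FEATURES))

    def predicted_response(text):
        return W @ embed(text)

    def score(candidate, recorded):
        # Correlation between predicted and recorded activity; higher
        # means a closer match.
        return np.corrcoef(predicted_response(candidate), recorded)[0, 1]

    candidates = [
        "she has not even started to learn to drive yet",
        "he bought a new car last week",
        "the weather was cold that morning",
    ]

    # Simulate a noisy recording evoked by the first candidate's meaning.
    recorded = predicted_response(candidates[0]) \
        + np.random.default_rng(1).normal(size=N_VOXELS)

    best = max(candidates, key=lambda c: score(c, recorded))
    print(best)  # prints the first candidate: its prediction matches best

The real decoder searches over candidates continuously as the story unfolds, which is why its output preserves the gist rather than the exact words.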

To test the model’s accuracy, each participant then listened to a new story in the fMRI machine.

The study’s first author Jerry Tang said the decoder could “recover the gist of what the user was hearing”.

For example, when the participant heard the phrase “I don’t have my driver’s license yet”, the model came back with “she has not even started to learn to drive yet”.

The decoder struggled with personal pronouns such as “I” or “she,” the researchers admitted.

But even when the participants thought up their own stories — or viewed silent movies — the decoder was still able to grasp the “gist,” they said.

This showed that “we are decoding something that is deeper than language, then converting it into language,” Huth said.

Because fMRI scanning is too slow to capture individual words, it collects a “mishmash, an agglomeration of information over a few seconds,” Huth said.
