New ChatGPT, Bard-like AI tool to turn people's thoughts into text

US scientists have developed a new artificial intelligence (AI) system that can translate a person's brain activity -- while listening to a story or silently imagining telling a story -- into a continuous stream of text.

The system, developed by a team at the University of Texas at Austin, relies in part on a transformer model, similar to the ones that power OpenAI's ChatGPT and Google's Bard.

It might help people who are mentally conscious yet physically unable to speak, such as those debilitated by strokes, to communicate intelligibly again, according to the team, which published the study in the journal Nature Neuroscience.

Unlike other language-decoding systems in development, this system, called a semantic decoder, does not require subjects to have surgical implants, making the process noninvasive. Participants also are not restricted to words from a prescribed list.

Brain activity is measured using a functional MRI (fMRI) scanner after extensive training of the decoder, during which the individual listens to hours of podcasts in the scanner.
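
The study itself does not include code, but the broad idea of this training phase, relating semantic features of the heard text to the recorded brain responses, can be sketched roughly as follows. Every name, array shape, and modelling choice below is an illustrative assumption, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sizes only: one row per fMRI volume recorded while the
# participant listened to podcasts in the scanner.
n_volumes, n_voxels, n_features = 2000, 500, 768

# X: semantic features of the podcast text heard at each time point,
# e.g. embeddings from a transformer language model (assumed precomputed).
X = np.random.randn(n_volumes, n_features)

# Y: the measured BOLD response across cortical voxels at the same times.
Y = np.random.randn(n_volumes, n_voxels)

# An "encoding model" maps text features to predicted brain activity;
# ridge regression is a common choice for voxelwise models of this kind.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(X, Y)

# Once trained, a candidate sentence can be scored by how closely its
# predicted brain response matches what was actually recorded.
def score_candidate(candidate_features, recorded_activity):
    predicted = encoding_model.predict(candidate_features[None, :])
    return -float(np.linalg.norm(predicted - recorded_activity))
```

In this framing, decoding later becomes a search for the text whose predicted response best explains the measured activity, rather than a direct readout of words.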

Later, provided that the participant is open to having their thoughts decoded, the machine can generate corresponding text from brain activity alone while they listen to a new story or imagine telling one.
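
Conceptually, the decoding step runs in the opposite direction: a language model proposes possible continuations of the text, and each candidate is kept or discarded depending on how well its predicted brain response matches the recording. A toy sketch of that search, with both callbacks assumed rather than taken from the paper:

```python
def decode_story(recorded_activity, propose_next_words, match_score,
                 beam_width=5, max_words=50):
    """Toy beam search: grow candidate transcripts word by word, keeping
    those whose predicted brain response best matches the recording.

    propose_next_words(text, k) and match_score(text, activity) are
    assumed interfaces standing in for a language model and an encoding
    model, respectively.
    """
    beam = [""]  # start from an empty transcript
    for _ in range(max_words):
        candidates = []
        for text in beam:
            for word in propose_next_words(text, k=5):
                extended = (text + " " + word).strip()
                candidates.append(
                    (match_score(extended, recorded_activity), extended))
        # Keep only the best-scoring candidates for the next step.
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = [text for _, text in candidates[:beam_width]]
    return beam[0]

# Dummy stand-ins just to show the call shape; they are not meaningful.
example = decode_story(
    recorded_activity=None,
    propose_next_words=lambda text, k: ["the", "she", "drive"][:k],
    match_score=lambda text, activity: -len(text),
)
```

In practice the proposal function would be a transformer language model like the ones mentioned above, and the scoring function an encoding model fit to the participant's own fMRI data.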

"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," said Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.

"We're getting the model to decode continuous language for extended periods of time with complicated ideas," he added.

The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant's brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.
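
Because the output captures meaning rather than exact wording, comparisons between the original and decoded text have to be made at the level of semantic similarity rather than word overlap. A minimal sketch of that idea using off-the-shelf sentence embeddings (the model name is only an example, not the metric used in the study):

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence-embedding model works for the illustration; this one is a
# common lightweight choice and not necessarily what the study used.
model = SentenceTransformer("all-MiniLM-L6-v2")

original = "I don't have my driver's licence yet"
decoded = "She has not even started to learn to drive yet"

# Cosine similarity between sentence embeddings: values near 1 mean the
# decoded text captures the gist of the original despite different words.
emb = model.encode([original, decoded], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()
print(f"semantic similarity: {similarity:.2f}")
```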

For example, in experiments, a participant listening to a speaker say, "I don't have my driver's licence yet," had their thoughts translated as, "She has not even started to learn to drive yet."

The team also addressed questions about potential misuse of the technology in the study. The paper describes how decoding worked only with cooperative participants who had willingly taken part in training the decoder.

Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance -- for example, by thinking other thoughts -- results were similarly unusable.

"We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," said Jerry Tang, a doctoral student in computer science. "We want to make sure people only use these types of technologies when they want to and that it helps them."

In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.

The system is currently not practical for use outside the laboratory because it requires time on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
