Scientists have developed a GPT model that reads human thoughts

Cal Jeffrey

In context: Generative pre-trained transformers (GPT) like those behind OpenAI's ChatGPT chatbot and DALL-E image generator are the current trend in AI research. Everybody wants to apply GPT models to just about everything, and the trend has raised considerable controversy for various reasons.

Scientific American notes that a group of researchers has developed a GPT model that can read a human's mind. The program is not dissimilar to ChatGPT in that it can generate coherent, continuous language from a prompt. The main difference is that the prompt is human brain activity.

The team from the University of Texas at Austin published its study in Nature Neuroscience on Monday. The method uses functional magnetic resonance imaging (fMRI) to interpret what the subject is "hearing, saying, or imagining." The scientists call the technique "non-invasive," which is ironic since reading someone's thoughts is about as invasive as you can get.

However, the team means that its method is not medically invasive. This is not the first time scientists have developed technology that can read thoughts, but it is the only successful method that does not require electrodes connected to the subject's brain.

The model, unimaginatively dubbed GPT-1, is the only method that interprets brain activity in a continuous language format. Other techniques can spit out a word or short phrase, but GPT-1 can form complex descriptions that explain the gist of what the subject is thinking.

For example, one participant listened to a recording of someone stating, "I don't have my driver's license yet." The language model interpreted the fMRI imaging as meaning, "She has not even started to learn to drive yet." So while it does not read the person's thoughts verbatim, it can get a general idea and summarize it.

Invasive methods can interpret exact words because they are trained to recognize specific physical motor functions in the brain, such as the lips moving to form a word. The GPT-1 model determines its output based on blood flow in the brain. It can't precisely repeat thoughts because it works on a higher level of neurological functioning.

"Our system works at a very different level," said Assistant Professor Alexander Huth from UT Austin's Neuroscience and Computer Science Center at a press briefing last Thursday. "Instead of looking at this low-level motor thing, our system really works at the level of ideas, of semantics, and of meaning. That's what it's getting at."

Also read: Leading tech minds sign open letter asking for a six-month pause on advanced AI development

The breakthrough came after the researchers fed GPT-1 Reddit comments and "autobiographical" accounts. They then trained it on scans from three volunteers who each spent 16 hours listening to recorded stories in the fMRI machine. This allowed GPT-1 to link neural activity to the words and ideas in the recordings.
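
For readers curious what "linking neural activity to words and ideas" looks like computationally, a common approach to this kind of problem is a per-subject regression from text features to voxel responses. Here is a minimal sketch using random stand-in data and scikit-learn's Ridge; the study's actual features, preprocessing, and model are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sizes only -- the real study recorded roughly 16 hours
# of listening per subject across many thousands of fMRI volumes.
n_volumes, n_features, n_voxels = 2000, 768, 5000

# Assumed stand-in inputs (random here; real data in the study):
#   X: language-model features of the story words heard at each volume
#   Y: the measured BOLD (blood flow) response across cortical voxels
X = np.random.randn(n_volumes, n_features)
Y = np.random.randn(n_volumes, n_voxels)

# One regularized linear map per person, tying words and ideas to that
# person's neural activity. This per-subject fit is why the decoder
# needs hours of cooperative training data from each volunteer.
encoding_model = Ridge(alpha=10.0)
encoding_model.fit(X, Y)

# During decoding, predict the scan a candidate sentence would evoke
# and compare it with the scan actually measured.
predicted_bold = encoding_model.predict(X[:1])
```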

Once the model was trained, the volunteers listened to new stories while being scanned, and GPT-1 accurately determined the general idea of what they were hearing. The study also tested the technology with silent movies and with the volunteers' imaginations, with similar results.

Interestingly, GPT-1 was more accurate when interpreting the audio-recording sessions than the participants' made-up stories. One could chalk it up to the abstract nature of imagined thoughts versus the more concrete ideas formed from listening to something. That said, GPT-1 was still pretty close when reading unspoken thoughts.

In one example, the subject imagined, "[I] went on a dirt road through a field of wheat and over a stream and by some log buildings." The model interpreted this as "He had to walk across a bridge to the other side and a very large building in the distance." So it missed some arguably essential details and vital context but still grasped elements of the person's thinking.
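
How close is "close"? One might be tempted to score the decoder with simple word overlap, but as both examples show, the output is a paraphrase, so surface overlap badly understates the match. A quick, purely illustrative standard-library check makes the point; the study itself relied on semantic similarity measures that credit paraphrases.

```python
from difflib import SequenceMatcher

def gist_overlap(reference: str, decoded: str) -> float:
    """Crude word-overlap score between what the subject heard or
    imagined and what the decoder produced (0 = none, 1 = identical)."""
    return SequenceMatcher(None,
                           reference.lower().split(),
                           decoded.lower().split()).ratio()

# Examples taken from the article itself.
heard = "I don't have my driver's license yet"
decoded_heard = "She has not even started to learn to drive yet"
imagined = ("went on a dirt road through a field of wheat and over "
            "a stream and by some log buildings")
decoded_imagined = ("He had to walk across a bridge to the other side "
                    "and a very large building in the distance")

# Both scores come out low because the decoder paraphrases; only a
# semantic metric reveals that the gist was captured.
print(gist_overlap(heard, decoded_heard))
print(gist_overlap(imagined, decoded_imagined))
```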

Machines that can read thoughts might be the most controversial form of GPT tech yet. While the team envisions the technology helping ALS or aphasia patients speak, it acknowledges its potential for misuse. It requires the subject's consent to operate in its current form, but the study admits that bad actors could create a version that overrides that check.

"Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder," it reads. "However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes. For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person's mental privacy."

Of course, this scenario assumes that fMRI tech can be miniaturized enough to be practical outside of a clinical setting. Any applications other than research are still a long way off.


 
If suddenly all human thought was hearable to everyone at all times, humanity would not survive.
 
If suddenly all human thought was hearable to everyone at all times, humanity would not survive.
According to the article, people can only read others' minds after first training a model on their brains with an fMRI machine, so we are safe for now.

However, it doesn't stop people from doing it on a small scale. I wonder if it could be used to steal information... tie a person down, put him in an fMRI machine, and force him to listen to key words in order to train a model specific to him. Then keep asking questions to activate his memories and get information out of him.
 
This is nothing. The best GPT model will be the one that replaces all the greedy CEOs and executives, because they are the most expensive and the least productive.
 
If suddenly all human thought was hearable to everyone at all times, humanity would not survive.
Sounds like the beginning of the plot of The Invention of Lying. Idk, total honesty could solve a lot of issues. It would be hard for corrupt government workers to continue down their current paths. The fact that others could hear all our thoughts might even cause people to avoid making bad decisions (they may only make those decisions if they can hide them).
 
Another WEF great reset "prediction" (plan): there will be no privacy in 2030. People, it's time to wake up if you don't want to live in the perfect dictatorship.
 
And this is why everyone needs to buy the Meta Quest 5 when it's available. It will enable AI to constantly read our minds, so we won't need those clumsy controllers to play games. We'll play them with our thoughts.

As a side effect, our other thoughts will also be sent to the server for analysis, where another AI will detect the nasty ones, such as: "I don't want to take the 7th shot against whichever pandemic disease is popular now." Such unwanted thoughts will result in fully automatic deduction of a penalty fee from your bank account. No human action needed.
 
What a crock. People will believe anything. Now when someone says that's a microaggression, they will say they have proof. Modern-day snake oil.
 
Everything shown in the movies so far seems to be coming true. It's only a matter of time.

Your freedom and rights have a price, as you don't get something for nothing, at least for the foreseeable future.
 
Imagine if they put AI in our heads, so that the AI controls what we can and can't think. Hard to imagine.
 
Make no mistake. If any of these horrible ideas ever become possible, someone will try to use them. Humans long ago became the product, rather than the consumer.
 