Machine learning allows researchers to decode visual cortex signals into video frames

Cal Jeffrey

Through the looking glass: If this week's science news is anything to go by, it won't be long before Big Brother is peering inside our heads. Coming on the heels of US scientists revealing a GPT model that decodes human thoughts into words, Swiss researchers have demonstrated a machine-learning model that translates mouse neural activity into video.

Researchers at École Polytechnique Fédérale de Lausanne (EPFL) developed a machine-learning algorithm called CEBRA – pronounced "zebra" and short for Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables. In layman's terms, CEBRA is a model that can decode the images a mouse is seeing from its brain activity.
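For readers curious what this looks like in practice, here is a minimal sketch using the team's open-source CEBRA Python package (pip install cebra), assuming its scikit-learn-style fit/transform interface. The array shapes, parameter values, and random placeholder data are purely illustrative, not the settings or recordings from the study.

```python
# Minimal sketch of training a CEBRA embedding with the open-source package.
# All values below are illustrative placeholders, not the study's settings.
import numpy as np
from cebra import CEBRA

# Hypothetical data: activity from 120 neurons over 10,000 timesteps, plus a
# label for each timestep (e.g., which movie frame was on screen).
neural_data = np.random.randn(10000, 120).astype(np.float32)  # (T, N)
frame_labels = np.repeat(np.arange(100), 100)                 # (T,) frame IDs

model = CEBRA(
    model_architecture="offset10-model",
    batch_size=512,
    learning_rate=3e-4,
    output_dimension=8,        # dimensionality of the learned embedding
    max_iterations=5000,
    time_offsets=10,
    device="cuda_if_available",
    verbose=True,
)

# Contrastive training: timesteps with similar labels are pulled together in
# the embedding space, dissimilar ones pushed apart.
model.fit(neural_data, frame_labels)
embedding = model.transform(neural_data)  # (T, output_dimension)
```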

The team has worked on the project for over ten years, with early breakthroughs in decoding rudimentary shapes from brainwave activity in humans and animals. Now, with the help of advanced machine learning, EPFL scientists have reconstructed entire movie clips from the thought patterns of mice.

In its experiments, the team used measurements from two types of mice: those with electrodes inserted into their visual cortex, and a set of genetically modified mice whose neurons glow green when active. Some mice were shown a black-and-white clip of a man running to a car and retrieving something from the trunk. The data from this subset was used to train CEBRA to correlate brain activity with each video frame.

A second group of mice was shown the same movie while CEBRA processed their brain activity. Aside from some stuttering, likely caused by the mice moving around and not paying full attention, the decoded video matched the actual clip.
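Decoding then becomes a lookup problem: each timestep from the second group is matched to the nearest training timesteps in embedding space, and the corresponding frames are reassembled into a video. The sketch below, continuing the one above, uses a k-nearest-neighbors classifier as one plausible implementation of that step; the variable names are placeholders, not the study's code.

```python
# Sketch of the frame-decoding step, continuing the CEBRA sketch above.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder recordings from the held-out group of mice (same clip).
test_neural_data = np.random.randn(10000, 120).astype(np.float32)

# Embed both sessions with the trained CEBRA model.
train_embedding = model.transform(neural_data)
test_embedding = model.transform(test_neural_data)

# Nearest-neighbor lookup: each test timestep is assigned the frame label of
# its closest training timesteps in the embedding space.
knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
knn.fit(train_embedding, frame_labels)

predicted_frames = knn.predict(test_embedding)  # one frame index per timestep
```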

Does this mean we're getting close to tech that can project a person's memories or dreams onto a movie screen or computer monitor? Not exactly. Remember that CEBRA could only do this because it was trained on the video clip in advance. It is easy to imagine the model advancing enough to recognize images without this specific pretraining, but that capability is far out of reach for now.

The scientists see their breakthrough more as a new research tool. They say CEBRA will provide insights into neurological functions and how the brain interprets stimuli. They believe it will prove useful in helping diagnose and treat brain disorders like Alzheimer's or Tourette syndrome.

That said, other research published this week shows that we are closer to reading thoughts with machinery than ever before. Scientists at the University of Texas at Austin developed a GPT-based model that analyzes brain activity in fMRI scans and decodes it into words with striking accuracy. The tech is far from perfect, but it is the first time a machine has decoded thoughts into complex verbal descriptions rather than single words or very short phrases.

You can check out a preprint of the CEBRA study at Cornell University's arXiv. The team has also posted the code to GitHub for other neuroscientists to use.

It's guessing which frame is being watched, not decoding what the mouse actually sees. The stutter shows the inaccuracy of the process. We are FAR from reading people's thoughts.