Researchers bypass optic nerve, deliver images directly to blind woman's brain

Jimmy2x

Through the looking glass: A Spanish woman lost her vision due to a rapidly progressing condition affecting her optic nerves. With the help of researchers and a small electrode implanted in her visual cortex, she has taken the first steps toward restoring her vision... without using her eyes.

Berna Gomez's world was turned upside down when she was diagnosed with toxic optic neuropathy at age 42. The rapidly progressing disease deteriorated the Spanish science teacher's optic nerves and rendered her blind in a matter of days. Thanks to researchers from the University of Utah and Miguel Hernandez University in Spain, Gomez may now have a chance at restoring her functional vision.

The breakthrough was achieved using an implant known as the Moran|Cortivis Prosthesis. The device, which consists of 96 individual electrodes, is implanted directly in the patient's visual cortex. Once in place, the implant's electrodes can be stimulated in specific combinations to deliver "images" directly to the patient's mind.
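The paper doesn't describe the encoding pipeline in this article, but the basic idea of turning an image into an on/off pattern across a small electrode grid can be sketched roughly. The grid size, threshold, and function name below are illustrative assumptions, not the actual Cortivis software:

```python
# Illustrative sketch only (NOT the actual Cortivis pipeline): reduce a
# grayscale image to an on/off stimulation pattern for a small electrode
# grid. Grid size and brightness threshold are assumptions for the example.

def image_to_stimulation(pixels, grid=(10, 10), threshold=128):
    """Downsample a grayscale image (list of rows, values 0-255) to a
    grid of booleans: True = stimulate that electrode."""
    rows, cols = len(pixels), len(pixels[0])
    gr, gc = grid
    pattern = []
    for i in range(gr):
        out_row = []
        for j in range(gc):
            # Average the block of pixels that maps onto this electrode.
            r0, r1 = i * rows // gr, (i + 1) * rows // gr
            c0, c1 = j * cols // gc, (j + 1) * cols // gc
            block = [pixels[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            out_row.append(sum(block) / len(block) >= threshold)
        pattern.append(out_row)
    return pattern

# A bright vertical bar in a 20x20 dark image lights only the middle columns
# of the electrode grid.
img = [[255 if 8 <= c < 12 else 0 for c in range(20)] for r in range(20)]
pattern = image_to_stimulation(img)
```

In a real system each "on" electrode would receive a calibrated current pulse rather than a boolean, and the mapping from electrodes to perceived phosphenes has to be measured per patient, but the downsample-and-threshold idea captures why only simple shapes (spots, lines, letters) are feasible with 96 electrodes.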

According to the Journal of Clinical Investigation, the implant has successfully presented images ranging from spots of light and horizontal lines to some uppercase and lowercase letters.

The achievement is a huge step forward in the quest to restore vision. Unlike retinal implants, this specific advancement completely bypasses the recipient's optic nerve and delivers information directly to the brain's vision center. This direct stimulation provides the potential to deliver images to patients despite any conditions preventing their optic nerve from communicating with their brain.

The silicon-based microelectrode, known as the Utah Electrode Array, is not new technology. The roughly 4mm device's history stretches as far back as 2006, when it was the subject of a Defense Advanced Research Projects Agency (DARPA) collaboration with University of Utah researchers.

The study focused on developing and evaluating a peripheral nerve interface that would allow artificial limbs to move using only thought. In 2019 the University's biomedical engineering team successfully used the array in conjunction with a prosthetic arm to provide a patient with "feeling" via an artificial limb.

Image credit: Human brain from Robina Weermeijer, Berna Gomez from Moran Eye Center


 
The interesting part will be looking at how she processes the signal and then comparing it to how a person born blind would: they'd have no concept or reference for what an image even is, so how would their brain handle the information? Would it interpret this as something completely and utterly different, meaning we just "learn" everything? Or would stimulating the same parts of the brain produce somewhat similar results, so they couldn't "see" but could process the information on an instinctual level, like moving out of the path of an oncoming object? Or somewhere in between?
 
Those are good questions
 

The human brain has an enormous capacity to grasp complex ideas and concepts, much like an infant's brain grasps the ideas of light, dark, images, etc. While it would probably take somewhat longer, it certainly wouldn't take any longer than it does for a newborn. Regardless of all that, this is certainly a very significant advancement in medicine and in using other means to enhance it!
 
Every year I get an eye check. I've been going to the same place for decades. Every time the doc comes in, I ask him if the bionic eye is ready. He laughs, says no, and I say they must still be having trouble making that boop boop boop sound like it did on the TV show. ;)
 

Deep learning, organic style. Our brain hardwires circuits as we absorb information from any of the senses; 40% of the brain's processing is devoted to image processing. We should start with the brains of newborn infants and see how quickly neuronal circuits and synapses grow and connect as their vision develops.
 
That's what I alluded to by "learning everything," but while thinking about this it dawned on me that beyond some of the more obvious autonomous functions like breathing, there are some responses we've observed bypassing cognitive functions: moving your hand away from a fire, jumping out of the way of things that might look like dangerous creatures such as snakes or big spiders, or being startled in general.

From my extremely amateur recollection, those work as an in-between: we learn to be fearful of certain things like spiders from our parents, but once we "learn" that part it gets wired into a part of the brain that doesn't actually wait for the conscious mind to process the information. We get startled, and even if we learn to control it, there are still the effects of adrenaline and the fight-or-flight response.

So while we "learn" what visual information is, a lot of it eventually stops going through the normal conscious channels of the brain. In theory, then, bypassing the learning and stimulating that process directly could allow a blind person to respond appropriately to danger while still being technically blind, even if the image can't be processed or has no context, assuming the process from the OP were only partially successful.
 
FWIW - Interestingly enough, to the best of my knowledge, the mind dreams in images - even in those born blind. https://www.wtamu.edu/~cbaird/sq/2020/02/11/do-blind-people-dream-in-visual-images/ What that says about your questions, I'm not sure; to me, however, it suggests that the mind has an innate ability to process visual imagery.

I am willing to bet that those working on the tech have done some sort of modeling of how the brain processes images; in fact, it would not surprise me if they have extensive fMRI scans of the parts of the brain responsible for processing image data. So, instead of relying on the brain having to reformulate its visual model for this device, it would not surprise me if they tried to make the device fit the model the brain uses by default.

In fact, there has been reasonably successful research into recording the images seen by the brain while dreaming. https://www.discovermagazine.com/mi...rding-dreams-is-possiblescientists-are-trying
 
Fascinating.

Makes sense now that I think about it: we developed functioning eyes, at least in some fashion, way before we developed higher cognitive capabilities, so it stands to reason that image processing doesn't need to pass through cognition, since that came later in our evolution.

It might even represent a unique challenge, or a way to solve challenges, when it comes to getting closer to true AI.
 