A major goal of the field of neuroprosthetics is to improve the lives of paralyzed patients by restoring their lost real-world abilities.
One example was the 2012 work by neuroscientists Leigh Hochberg and John Donoghue at Brown University. Their team trained two people with long-standing paralysis, a 58-year-old woman and a 66-year-old man, to use a brain-machine interface (BMI) that decoded signals from their motor cortex to direct a robotic arm to reach for and grasp objects. One subject was able to pick up and drink from a bottle using the device.
More recently, in 2017, a French team from Grenoble University Hospital surgically implanted an epidural wireless brain-machine interface in a 28-year-old man with tetraplegia. After two years of training, the patient was able to control some exoskeleton functions using his brain activity alone.
From advanced robotics to the delicate reinnervation of the damaged peripheral nerves in patients’ arms and legs, these projects require extraordinary medical and technological breakthroughs. Extensive development is still needed to realize real-world clinical applications of these approaches.
However, fully mastering the brain-machine interface itself, the precise translation of a brain signal into an intended action, may require a much simpler, cheaper, and safer technology: virtual reality. Indeed, in many BMI projects, initial training is based on virtual simulations: before attempting to control a real robotic arm, for instance, subjects first learn to control a virtual one.
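To make that translation step concrete, here is a minimal sketch of one common approach, a linear decoder that maps motor-cortex firing rates to the velocity of a virtual cursor or arm. It is an illustration under stated assumptions, not any particular group's pipeline: the channel count, update rate, and random stand-in data are all hypothetical.

```python
import numpy as np

# Illustrative dimensions: 96 recording channels, a 2D virtual cursor.
N_CHANNELS, N_DIMS = 96, 2
BIN_S = 0.05  # hypothetical 50 ms update rate

# Calibration: fit a linear map from firing rates to intended velocity.
# The random arrays below stand in for real recordings; in a virtual
# session, intended velocities are known because the subject imagines
# following a computer-controlled target.
rates = np.random.rand(5000, N_CHANNELS)    # binned spike counts per channel
intended = np.random.randn(5000, N_DIMS)    # cued target velocities
W, *_ = np.linalg.lstsq(rates, intended, rcond=None)  # decoder weights

# Closed loop: every bin, turn the latest neural features into movement.
position = np.zeros(N_DIMS)
for step in range(200):
    features = np.random.rand(N_CHANNELS)   # would come from the recording rig
    velocity = features @ W                 # decoded intended velocity
    position += velocity * BIN_S            # integrate to move the virtual arm
```

This is exactly why virtual environments are such a convenient starting point: the "intended" movements needed for calibration come for free from the simulation itself, with no robot hardware in the loop.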
As the gaming world and the metaverse evolve, the next big breakthroughs in BMI applications will arrive in the virtual world before they are realized in the real one. A team of researchers at Johns Hopkins has already shown this is possible, teaching a paralyzed patient to fly a warplane in a computer flight simulation using a BMI. According to their report, “From the subject’s perspective, this was one of the most exciting and entertaining experiments she had performed.”
In 2023, we will see many more BMI applications that allow disabled people to fully participate in virtual worlds: initially by taking part in simpler interactive communication spaces such as chat rooms, and later by fully controlling 3D avatars in virtual spaces where they can shop, socialize, or even play games.
This applies to my own work at UC San Francisco, where we are building BMIs to restore speech communication. We can already train patients to communicate via text-based chat and messaging in real time. Our next goal is to achieve real-time speech synthesis. We have previously shown that this is possible offline with good accuracy, but doing it in real time for paralyzed patients is a new challenge.
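To illustrate why real time is a qualitatively different problem, here is a generic sketch, not our lab's actual system: the decode_chunk function, the chunk length, and the sample rate are hypothetical placeholders. An offline decoder can read the whole recording before producing any audio, while a streaming decoder must emit each fragment from only the data received so far.

```python
import numpy as np

SAMPLE_RATE = 16_000   # hypothetical audio rate
CHUNK_MS = 80          # hypothetical decoding window, tuned for latency

def decode_chunk(neural_chunk: np.ndarray) -> np.ndarray:
    """Stand-in for a trained model mapping neural features to audio samples."""
    return np.zeros(SAMPLE_RATE * CHUNK_MS // 1000)  # silence placeholder

def offline_synthesis(full_recording: np.ndarray) -> np.ndarray:
    # Offline: the entire session is available at once, so a model can use
    # both past and future context before producing any audio.
    return decode_chunk(full_recording)

def streaming_synthesis(neural_stream) -> np.ndarray:
    # Real time: audio must be emitted every CHUNK_MS using only the data
    # received so far, which rules out future context and slow pipelines.
    out = []
    for neural_chunk in neural_stream:
        out.append(decode_chunk(neural_chunk))  # decoded and played immediately
    return np.concatenate(out)

# Toy usage: ten 80 ms chunks of 128 fake neural features each.
stream = (np.random.rand(128) for _ in range(10))
audio = streaming_synthesis(stream)
```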
We’re now expanding our work to include the ability to control facial avatars, which will enrich virtual social interactions. Seeing the mouth and lips move when someone is speaking greatly enhances speech perception and comprehension. The brain areas controlling the vocal tract and mouth overlap with those responsible for nonverbal facial expressions, so the same interfaces should allow avatars to convey those expressions as well.
As virtual reality and BMIs converge, it is no coincidence that tech companies are also developing consumer applications for neural interfaces, both noninvasive and invasive. Needless to say, these advances will have big implications for all of us, not only in how we interact with computers but in how we interact with each other.
For paralyzed patients, however, the implications are much more fundamental: this is about their ability to participate in social life. One of the most devastating aspects of paralysis is social isolation. But as human social interactions increasingly move to digital formats, like texting and email, and to virtual environments, we now have an opportunity that has never existed before. With brain-machine interfaces, we can finally address this unmet need.