Image courtesy Kite & Lightning

Digital Frontier: Where Brain-Computer Interfaces & AR/VR Could One Day Meet

When I used to think about brain-computer interfaces (BCI), I typically imagined a world where the Internet was served up directly to my mind through cyborg-style neural implants—basically how it’s portrayed in Ghost in the Shell. In that world, you can read, write, and speak to others without needing to lift a finger or open your mouth. It sounds fantastical, but the more I learn about BCI, the more I’ve come to realize that this wish list of functions is really only the tip of the iceberg. And when AR and VR converge with the consumer-ready BCI of the future, the world will be much stranger than fiction.

Be it Elon Musk’s latest company Neuralink—which is creating “minimally invasive” neural implants to suit a wide range of potential future applications—or Facebook directly funding research on decoding speech from the human brain, BCI seems to be taking an important step forward in its maturity. And while regulatory hoops governing implants and their relative safety mean these well-funded companies can only push the technology forward as a medical device today, eventually the technology will get to a point when it’s both safe and cheap enough to land in the brainpans of neurotypical consumers.

Although there’s really no telling when you or I will be able to pop into an office for an outpatient implant procedure (much like how corrective laser eye surgery is done today), we know at least that this particular future will undoubtedly come alongside significant advances in augmented and virtual reality. But before we consider where that future might lead us, let’s take a look at where things are today.

Noninvasive Baby Steps

BCI and AR/VR have already converged, albeit on a pretty small scale and to little appreciable effect so far for the wider AR/VR user base. Early startups like Neurable are already staking their claim, basing their work on the portable and noninvasive method of electroencephalography (EEG), which reads voltage fluctuations in the brain from outside the skull.

Image courtesy Neurable

In terms of brain-computer interfaces, EEG is the oldest and one of the lowest ‘resolution’ methods of tuning into the brain’s constant flow of ‘action potentials’, the neuron-to-neuron pulses that form the foundation of thought, perception, action, and, well… everything.
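
To make that ‘low resolution’ point concrete, here’s a minimal sketch of how a raw EEG voltage trace is typically reduced to band-power features: the coarse, aggregate signal that noninvasive BCIs actually work with. The data is synthetic and the pipeline is purely illustrative; it is not Neurable’s actual system.

```python
# Illustrative only: reduce a synthetic single-channel EEG trace to
# band-power features, the coarse signal noninvasive BCIs rely on.
import numpy as np
from scipy.signal import welch

fs = 256  # sample rate in Hz (typical of consumer EEG headsets)
t = np.arange(0, 10, 1 / fs)

# Fake trace: a 10 Hz 'alpha' rhythm buried in noise (volts).
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * np.random.randn(t.size)

# Estimate the power spectral density, then integrate power per band.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
df = freqs[1] - freqs[0]
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name}: {psd[mask].sum() * df:.3e} V^2")  # alpha dominates
```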

According to Valve’s resident experimental psychologist Mike Ambinder, who gave a talk on the state of BCIs and game design at GDC 2019 earlier this year, using EEG is tantamount to sitting outside of a football stadium and trying to figure out what’s happening on the field just by listening to the intensity of the crowd’s reaction; EEG can only reliably measure neural activity that occurs in the uppermost layers of the brain.

Although EEG can provide a good starting point for some early data collection, Ambinder maintains, a trip underneath the skull is needed in order to derive deeper, more useful knowledge, which in turn should allow for more immersive and adaptive games in the future.

There are other noninvasive methods for viewing the brain, such as magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), however these haven’t made their way out of hospitals and research facilities (and likely won’t for some time) due to their massive size, power requirements, and price tags—precisely the problems implants aim to tackle.

Implants Galore

Neuralink is betting that its first-generation device, the N1 sensor, will provide the sort of real-world benefits its immediate target audience is looking for: a home-operated, low-latency, and high-bandwidth method of basic input to computers and smartphones, enabling things like text input and simple operations—an ideal solution for those without the use of their limbs.

The company’s furthest-reaching vision, shared at its unveiling last month, however, teases a future mesh-like device called a ‘neural lace’ that could potentially gain greater access to the brain by being injected into its capillaries.

This, the company hopes, could one day give users the ability to bypass the senses and virtually simulate vision, touch, taste, and effectively their entire perception. As the company’s founder and largest investor, Elon Musk, puts it, Neuralink’s mission is to eventually “achieve a symbiosis with artificial intelligence” through ‘full-brain’ BCI.

There’s of course no time frame available on Neuralink’s prophetic AI-merging neural lace; Musk himself says that while the N1 sensor should start in-human clinical studies in 2020, achieving full BCI will be a long, iterative process.

Adam Marblestone, a research scientist at Google DeepMind with a PhD in biophysics from Harvard, isn’t so starry-eyed about the initial launch of Neuralink’s N1 tech, though. Putting the company’s advances into perspective, Marblestone says in a recent tweet that although Neuralink is accelerating the current state of the technology, it’s not a magic bullet.

“They are climbing Everest with bigger team/better gear (engineering). What is really needed is a helicopter (science-intensive breakthrough),” Marblestone writes.

BCI might seem like newfangled tech, but research has been in the works for longer than you might expect. In 1997, a bioengineering professor at the University of Utah named Dr. Richard Normann developed the ‘Utah Array’, an implant with 256 electrodes designed to rigidly attach to the brain. In fact, the Utah Array is still in production in various forms by Blackrock Microsystems, and has been instrumental in gathering neural recordings of action potentials over the past 20 years.

In contrast, Neuralink promises to deliver “as many as 3,072 electrodes per array distributed across 96 threads,” according to the company’s white paper, not to mention the added benefit of less-invasive, flexible threads designed to cause less inflammation than rigid electrodes. A detachable receiver would also power the array of smaller sensors, and transmit data to computers via Bluetooth.
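
Some rough, back-of-the-envelope math helps put those numbers in perspective. The sampling rate and bit depth below are assumptions for illustration, not published specs, but they hint at why such a device couldn’t simply stream raw broadband signals over Bluetooth and would need to do significant processing on-implant:

```python
# Back-of-the-envelope numbers for the N1 figures quoted above. The sample
# rate and bit depth are illustrative assumptions, not published specs.
electrodes = 3072
threads = 96
print(electrodes // threads, "electrodes per thread")  # -> 32

sample_rate_hz = 20_000   # assumed broadband neural sampling rate
bits_per_sample = 10      # assumed ADC resolution
raw_bps = electrodes * sample_rate_hz * bits_per_sample
print(f"raw broadband: ~{raw_bps / 1e6:.0f} Mbps")     # ~614 Mbps

# Bluetooth tops out around 2 Mbps, suggesting spike detection and
# compression would have to happen on the implant before transmission.
bluetooth_bps = 2e6
print(f"reduction needed: ~{raw_bps / bluetooth_bps:.0f}x")
```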

Image courtesy Neuralink

There’s more than one way to skin a cat though, and the same is true for establishing what you might call ‘read & write’ access to neurons: the ability to both measure what’s happening in your brain and to stimulate it.

Besides the N1 sensor, an implant called ‘Neural Dust’ could also offer a window into the mind. The millimeter-scale implants are passive, wireless, and battery-less, and promise to provide what a UC Berkeley research group calls in a recent paper “high-fidelity transmission” of data obtained from muscles or neurons depending on where they’re implanted.

Neural dust, Image courtesy University of California, Berkeley

Notably, a co-author on that particular paper is Dongjin Seo, Neuralink’s director of implant systems, so it’s possible we’ll see some further work in that area under Neuralink.

Another interesting approach, used in a recently published research paper by a group of scientists from Stanford University, Boulder Nonlinear Systems, and the University of Tokyo, is optogenetic stimulation. Essentially, it’s a technique of firing light into the visual cortex of a brain that has been altered to include light-reactive proteins. The researchers were able to write neural activity into dozens of single neurons in a mouse and simultaneously read the impact of this stimulation across hundreds of nearby neurons. The end goal was to see whether they could inject specific images into the mouse’s visual field that weren’t really there. It’s rudimentary, but it works.
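
To illustrate the read/write idea in the abstract, here’s a toy simulation: ‘write’ activity into a few neurons in a random linear network, then ‘read’ how it spreads to their neighbors. This is a cartoon model invented for this article, not the optogenetic method the researchers actually used.

```python
# Toy model of the read/write loop: drive a few 'neurons', then observe
# the response ripple through a random linear network. A cartoon only.
import numpy as np

rng = np.random.default_rng(0)
n = 300
weights = rng.normal(0, 0.05, (n, n))  # random synaptic coupling

# Write: stimulate a handful of target neurons.
activity = np.zeros(n)
targets = [3, 17, 42]
activity[targets] = 1.0

# Read: one propagation step shows stimulation spreading to neighbors.
response = np.tanh(weights @ activity)
influenced = np.sum(np.abs(response) > 0.05)
print(f"stimulated {len(targets)} neurons; {influenced} showed a response")
```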

Admittedly, it’s not a case of using an implant per se, but it’s these early steps that will unlock behavioral patterns of neurons and allow scientists to more precisely manipulate the brain rather than just observing it. Once things get further along, BCI may well be the perfect complement to immersive AR and VR.

Read & Write: The Immersive Future

It’s 2030. All-in-one AR/VR glasses are a reality. They’re thin and light enough to be worn outdoors for when you need to navigate the city, or battle with your friends at the park in the wildest game of capture the flag you could ever imagine. When you want a more all-encompassing experience, you can simply switch the headset to VR mode and you’re instantly in a massively multiplayer virtual world.

According to Facebook Reality Labs’ chief scientist Michael Abrash, a device like this is on the horizon, and he expects it to come sometime in the next decade.

It’s all great, certainly better than it used to be when the headsets first came out in 2016. Now AR/VR headsets have near-perfect eye-tracking and on-board AI that recognizes objects and speaks to you as convincingly as a personal assistant. The headset has an eye-tracking-based UI that sometimes feels like magic. You’re still mostly using hand gestures to do things in AR though, and you still rely on controllers in VR for the best at-home experience. Going back to a headset from a decade earlier is pretty unthinkable by now.

Concept image from Oculus Connect 5 showing what a waveguide-based VR headset may look like in the near future, Image courtesy Facebook

Outside of location-based AR/VR parks like The VOID, most consumers still don’t own haptic suits because they’re expensive. And although the suits can simulate hot and cold through thermoelectric coolers embedded in the fabric, they otherwise provide only a few well-placed thumps and buzzes from the same sort of haptic tech you find in smartphones today—not really practical to wear, and not additive enough to buy for at-home play.

At the same time, two generations of smaller, higher-performing neural implants have made their way into production. Implants, once requiring major surgery, are now an outpatient procedure thanks to AI-assisted robotic surgery. These teams, which are backed by the big names in tech, are working to bring BCI to the consumer market, but none have so far been officially approved by the FDA for neurotypical users. The latest model, which is still technically for medical use only, has gotten really good though, and offers a few benefits that are clearly targeted at enticing would-be users to shop around for doctors willing to fudge a diagnosis. Some countries have more lax rules, and the most adventurous with a few thousand to burn are the first to sign up.

With the implant, you can not only ‘type’ ten times faster and search the Web at the speed of thought, but you can listen to music without headphones, remotely voice chat with others without physically speaking, and navigate the UI with only your thoughts. Soon enough, special interest lobbies do their thing, Big Business does its thing, and somehow the first elective consumer BCI implant becomes legal, allowing for a host of others to slide in behind it.

This opens up a whole new world of game design, and menus basically become a thing of the past, as games become reactive not only to your abilities as a player, but to your unseen, unspoken reactions to the challenges laid out before you (e.g. anger, delight, surprise, boredom). Game designers now have a wealth of information to sift through, and have to completely rethink the sort of systems they build in order to leverage this new data. It’s a paradigm shift that reminds long-time AR/VR developers of ‘the good old days’, back when games didn’t need to rely on always-connected AI services for passable NPC interactions.
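
As a thought experiment, affect-driven design might look something like the sketch below: a game loop nudging difficulty based on a decoded emotional state. Every name, signal, and threshold here is hypothetical; no BCI exposes an API like this today.

```python
# Hypothetical affect-driven game tuning for the scenario above.
# All signal names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class AffectState:
    frustration: float  # 0.0 .. 1.0, decoded from neural activity
    boredom: float      # 0.0 .. 1.0
    surprise: float     # 0.0 .. 1.0

def adjust_difficulty(current: float, affect: AffectState) -> float:
    """Nudge difficulty toward the player's unspoken reactions."""
    if affect.frustration > 0.7:   # player is struggling: ease off
        current -= 0.1
    elif affect.boredom > 0.6:     # player is coasting: ramp up
        current += 0.1
    return min(max(current, 0.0), 1.0)

# Example frame tick with a made-up decoded state:
difficulty = adjust_difficulty(0.5, AffectState(0.8, 0.1, 0.2))
print(difficulty)  # -> 0.4
```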

Now imagine a further future. The glasses are sitting on your trophy shelf of historical VR and AR headsets gathering dust. Your neural implant is no longer a series of chips pockmarking your skull. You had those painlessly removed right before your most recent procedure. A supple lattice coats the surface of your brain, and delivers strategic stimulus to a carefully planned network of neurons. It has access to a large portion of your brain. The glasses aren’t needed any more because digital imagery is injected directly into your visual cortex. You can feel wet grass on your feet, and smell pine needles from an unexplored forest in front of you.

All of this is plausible given what we know today. As our understanding of the brain becomes more complete, and the benefits of having a neural implant begin to outweigh the apparent risks, brain-computer interfaces are poised to converge and then merge with AR/VR at some point. The timescale may be uncertain at this early date, and the risks not fully understood before many jump into the next phase of the digital revolution, but we can leave that can of worms for another article.
