Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs — Facebook’s Pittsburgh-based division devoted to augmented reality and virtual reality R&D — described a prototypical system capable of reading and decoding study subjects’ brain activity while they speak.
It’s impressive no matter how you slice it: The researchers managed to make out full, spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains, which recorded activity using electrocorticography (ECoG), the direct measurement of electrical potentials from the cerebral cortex, yielding far cleaner signals than recordings taken from the scalp. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
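The paper doesn’t publish its decoder, but the core idea of the final step, scoring a small, closed set of candidate utterances against per-sound likelihoods derived from neural activity, can be sketched. The `phoneLogProbs` input, the candidate list, and the crude one-timestep-per-phone alignment below are illustrative assumptions, not the study’s actual models:

```typescript
// Minimal sketch: pick the most likely utterance from a closed set, given
// per-timestep log-probabilities over speech sounds (phones) decoded from
// neural activity. All names and values here are illustrative.

type PhoneLogProbs = Map<string, number>; // phone -> log-probability at one timestep

// Candidate utterances, each represented by its expected phone sequence.
const candidateAnswers: Record<string, string[]> = {
  fine: ["f", "ay", "n"],
  cold: ["k", "ow", "l", "d"],
  hot: ["hh", "aa", "t"],
};

// Score a candidate by summing the log-probability of each of its phones
// over a window of decoded timesteps (crude alignment: one timestep per phone).
function scoreCandidate(phones: string[], window: PhoneLogProbs[]): number {
  return phones.reduce((total, phone, i) => {
    const frame = window[Math.min(i, window.length - 1)];
    return total + (frame.get(phone) ?? Math.log(1e-6)); // floor for unseen phones
  }, 0);
}

// Choose the highest-scoring utterance from the closed answer set.
function decodeUtterance(window: PhoneLogProbs[]): string {
  let best = "";
  let bestScore = -Infinity;
  for (const [text, phones] of Object.entries(candidateAnswers)) {
    const score = scoreCandidate(phones, window);
    if (score > bestScore) {
      bestScore = score;
      best = text;
    }
  }
  return best;
}
```

The published system went further, using the decoded question as context to reweight the likely answers, a detail omitted in this simplified sketch.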
But caveats abound. The system was highly invasive, for one, and it differentiated among only two dozen standard answers to nine questions, decoding answers with 61% accuracy and questions with 75% accuracy. Moreover, it fell far short of Facebook’s real-time decoding goal of 100 words per minute from a 1,000-word vocabulary with a word error rate of less than 17%.
So what does that say about the state of brain-computer interfaces? And perhaps more importantly, is Facebook’s effort really on the cutting edge?
Neuralink
Elon Musk’s Neuralink, a San Francisco startup founded in 2017 with $158 million in funding (including at least $100 million from Musk), is similarly pursuing brain-machine interfaces that connect humans with computers. During an event earlier this month timed to coincide with the publication of a whitepaper, Neuralink claimed that the prototypes it’s developed are capable of extracting information from many neurons at once using flexible wires inserted into soft tissue by a “sewing machine.”
Electrodes on those wires relay detected pulses to a processor placed on the surface of the skull that’s able to read information from up to 1,536 channels, roughly 15 times more than current systems embedded in humans. It’s already been tested in mice and primates, and Neuralink hopes to launch human trials with what it calls the N1, a cylinder roughly 8 millimeters in diameter and 4 millimeters tall that can take 20,000 samples per second at 10 bits of resolution from up to 1,024 electrodes. That works out to roughly 200Kbps of neural data per channel, or on the order of 200Mbps across all 1,024 electrodes.
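Those figures are easy to sanity-check. A quick back-of-the-envelope calculation, using only the sample rate, bit depth, and channel count quoted above, shows where the aggregate number comes from:

```typescript
// Back-of-the-envelope data-rate check for the quoted N1 specs.
const samplesPerSecond = 20_000; // 20,000 samples per second per electrode
const bitsPerSample = 10;        // 10 bits of resolution
const channels = 1_024;          // up to 1,024 electrodes

const perChannelBps = samplesPerSecond * bitsPerSample; // 200,000 bps ≈ 200 Kbps
const aggregateBps = perChannelBps * channels;          // ≈ 204.8 Mbps

console.log(`Per channel: ${(perChannelBps / 1e3).toFixed(0)} Kbps`);
console.log(`All channels: ${(aggregateBps / 1e6).toFixed(1)} Mbps`);
```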
Neuralink’s solution is no less invasive than Facebook’s in this respect; its team expects that in the near term, wires will have to be embedded in the brain’s motor areas and somatic sensory area under a trained surgeon’s guidance. And while its neuron-reading techniques and technology are state-of-the-art, Neuralink appears to have made less progress on the interpretability side. One of its aspirational goals is to allow a tetraplegic to type at 40 words per minute, according to Musk.
Paradromics and Kernel
Like Neuralink, Austin-based Paradromics, founded three years ago, is actively developing an implantable brain-reading chip, backed by substantial seed funding and $18 million from the Defense Advanced Research Projects Agency’s (DARPA) Neural Engineering System Design program.
The company’s proprietary Neural Input-Output Bus, or NIOB for short, packs 50,000 modular microwires that can interface with and stimulate up to 1 million neurons, from which it can record up to 30Gbps of neural activity. It’s currently in preclinical development, and it’s expected to enter human trials in 2021 or 2022, laying the groundwork for a solution to help stroke victims relearn to speak.
Like Paradromics, Kernel, which launched in 2016 with $100 million in backing from Braintree founder Bryan Johnson, is currently focused on developing a surgically implanted neural chip. But the company ambitiously claims its tech will someday “mimic, repair, and improve” human cognition using AI, and it recently began investigating noninvasive interfaces.
It’s not as far-fetched as it sounds; there has been recent progress to this end. In a recent study published in the journal Nature, scientists trained a machine learning algorithm on data from previous experiments to determine how movements of the tongue, lips, jaw, and larynx create sound. They incorporated this into a decoder that transformed brain signals into estimated movements of the vocal tract, then fed those estimated movements to a separate component that turned them into synthetic speech.
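Structurally, that decoder is a two-stage pipeline: neural activity is first mapped to estimated vocal tract kinematics, and those kinematics are then mapped to audio. The sketch below illustrates only that staging; the types, names, and signatures are assumptions, not the study’s actual code:

```typescript
// Two-stage speech decoding pipeline, sketched with placeholder types.
// Stage 1: neural recordings -> estimated articulator (vocal tract) movements.
// Stage 2: articulator movements -> synthesized audio samples.

type NeuralFrame = number[];       // one timestep of recorded neural features
type ArticulatorFrame = number[];  // estimated positions of tongue, lips, jaw, larynx
type AudioSamples = Float32Array;  // synthesized waveform

interface ArticulatoryDecoder {
  // Hypothetical: a model trained to map neural data to vocal tract movements.
  decode(frames: NeuralFrame[]): ArticulatorFrame[];
}

interface SpeechSynthesizer {
  // Hypothetical: a second model that turns articulator trajectories into audio.
  synthesize(movements: ArticulatorFrame[]): AudioSamples;
}

function brainToSpeech(
  frames: NeuralFrame[],
  decoder: ArticulatoryDecoder,
  synthesizer: SpeechSynthesizer
): AudioSamples {
  const movements = decoder.decode(frames); // stage 1
  return synthesizer.synthesize(movements); // stage 2
}
```

Splitting the problem this way gives the first stage a far more constrained target than raw audio, which the study cited as a key reason the articulatory intermediate step helped.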
BrainGate
BrainGate, a brain implant system developed by Cyberkinetics in partnership with researchers in Brown University’s Department of Neuroscience, is designed to help those who have lost control of their limbs or other bodily functions. It consists of a 100-electrode microelectrode array implanted in the brain, which senses the electrical signatures of firing neurons, and an external decoder peripheral that connects to a prosthetic or storage device.
A second round of clinical trials began in 2009 under the name BrainGate2. And in May 2012, BrainGate researchers published a study in Nature demonstrating that two people paralyzed by brainstem stroke several years earlier were able to control robotic arms for reaching and grasping.
Ctrl-labs
New York startup Ctrl-labs is taking a slightly different, less invasive approach to translating neural impulses into digital signals. Its developer platform, Ctrl-kit, taps differential electromyography (EMG) to translate mental intent into action, specifically by measuring changes in electrical potential caused by impulses traveling from the brain to hand muscles. Sixteen electrodes monitor the motor neuron signals amplified by the muscle fibers of motor units, and machine learning algorithms distinguish the individual pulses of each nerve.
The system works independently of muscle movement; generating a pattern that Ctrl-labs’ tech can detect requires no more than the firing of a neuron down an axon, or what neuroscientists call an action potential. That puts it a class above wearables that use electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw on the cleaner, clearer signals from motor neurons, and as a result are limited only by the accuracy of the software’s machine learning models and the snugness of the contacts against the skin.
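In signal-processing terms, the core task is spotting discrete motor-unit firings in a multi-channel EMG stream before any classification happens. The snippet below is a deliberately simplified, hypothetical illustration of that step; a real system would rely on trained models rather than a fixed threshold:

```typescript
// Simplified illustration: detect candidate motor-unit pulses in a
// 16-channel EMG stream by thresholding each channel's rectified signal.
// The threshold and structure are assumptions for illustration only.

const CHANNELS = 16;

interface PulseEvent {
  channel: number;   // which electrode detected the pulse
  timestep: number;  // sample index at which it occurred
  amplitude: number; // rectified signal amplitude
}

function detectPulses(samples: number[][], threshold = 0.5): PulseEvent[] {
  const events: PulseEvent[] = [];
  samples.forEach((frame, t) => {
    for (let ch = 0; ch < Math.min(CHANNELS, frame.length); ch++) {
      const amplitude = Math.abs(frame[ch]); // rectify the signal
      if (amplitude > threshold) {
        events.push({ channel: ch, timestep: t, amplitude });
      }
    }
  });
  return events;
}
```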
On the software side of the equation, the accompanying SDK contains JavaScript and TypeScript toolchains and prebuilt demos that give an idea of the hardware’s capabilities. So far, Ctrl-labs has demonstrated a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop with their fingertips. It’s also shown off robotic arms mapped to the Ctrl-kit’s outputs that respond to muscle movements.
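The article doesn’t document the SDK’s actual API, so the names below are hypothetical, but they show the general shape of the virtual-keyboard demo: subscribe to decoded finger events and map them to key presses:

```typescript
// Hypothetical sketch of mapping decoded finger taps to keystrokes.
// The event shape and key map are illustrative, not the real Ctrl-kit SDK.

type Finger = "thumb" | "index" | "middle" | "ring" | "pinky";

interface FingerTapEvent {
  hand: "left" | "right";
  finger: Finger;
}

// Stand-in for a learned keyboard layout: each finger emits one character.
const keyMap: Record<Finger, string> = {
  thumb: " ",
  index: "e",
  middle: "t",
  ring: "a",
  pinky: "o",
};

function onFingerTap(event: FingerTapEvent, emitKey: (key: string) => void): void {
  emitKey(keyMap[event.finger]); // translate the decoded tap into a key press
}

// Usage: a client would invoke onFingerTap for each decoded tap event.
onFingerTap({ hand: "right", finger: "index" }, (key) => console.log(`typed: ${key}`));
```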
Challenges ahead
High-resolution brain-computer interfaces, or BCIs for short, are predictably complicated: they must read neural activity precisely enough to pick out which groups of neurons are performing which tasks. Historically, hardware limitations have caused electrodes to come into contact with more than one region of the brain or to produce interfering scar tissue.
That has changed with the advent of fine biocompatible electrodes, which limit scarring and can target cell clusters with precision, and with noninvasive peripherals like Ctrl-kit. What hasn’t changed is a lack of understanding about certain neural processes.
Rarely is activity isolated to a single brain region, such as the prefrontal cortex or hippocampus; instead, it takes place across multiple regions, making it difficult to pin down. Then there’s the matter of translating neural electrical impulses into machine-readable information; researchers have yet to crack the brain’s encoding. Pulses from the visual cortex aren’t like those produced when formulating speech, and it’s sometimes difficult to identify where signals originate.
The challenges haven’t discouraged Facebook, Neuralink, Paradromics, Kernel, Ctrl-labs, and others from chasing after a brain-computer interface market that’s anticipated to be worth $1.46 billion by 2020, according to Allied Market Research. One thing’s for sure: They’ve got an uphill climb ahead of them.