'Deepfaking the mind' could improve brain-computer interfaces for people with disabilities

Researchers at the USC Viterbi School of Engineering are using generative adversarial networks (GANs) — technology best known for creating deepfake videos and photorealistic human faces — to improve brain-computer interfaces for people with disabilities.

In a paper published in Nature Biomedical Engineering, the team successfully taught an AI to generate synthetic brain activity data. The data, specifically neural signals called spike trains, can be fed into machine-learning algorithms to improve the usability of brain-computer interfaces (BCIs).

BCI systems work by analyzing a person’s brain signals and translating that neural activity into commands, allowing the user to control digital devices like computer cursors using only their thoughts. These devices can improve quality of life for people with motor dysfunction or paralysis, even those struggling with locked-in syndrome — when a person is fully conscious but unable to move or communicate.
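As a rough illustration of that decoding step (not the pipeline used in the study), a decoder can be as simple as a linear map from binned spike counts to cursor velocity. The sketch below fits such a map with ridge regression on made-up data; the neuron count, bin count, and variable names are all illustrative assumptions.

```python
import numpy as np

# Illustrative only: X holds binned spike counts from 96 hypothetical neurons
# over 5,000 time bins; Y holds the matching 2-D cursor velocities.
rng = np.random.default_rng(0)
X = rng.poisson(lam=3.0, size=(5000, 96)).astype(float)
true_map = rng.normal(size=(96, 2))
Y = X @ true_map + rng.normal(scale=0.5, size=(5000, 2))  # simulated training targets

# Fit a ridge-regression decoder: cursor velocity ~ spike counts @ W
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(96), X.T @ Y)

# Translate a fresh bin of neural activity into a cursor command.
new_bin = rng.poisson(lam=3.0, size=(1, 96)).astype(float)
print(new_bin @ W)  # estimated (vx, vy) cursor velocity
```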

Various forms of BCI are already available, from caps that measure brain signals to devices implanted in brain tissue. New use cases are being identified all the time, from neurorehabilitation to treating depression. But despite all of this promise, it has proved challenging to make these systems fast and robust enough for the real world.

Specifically, to make sense of their inputs, BCIs need huge amounts of neural data and long periods of training, calibration and learning.

“Getting enough data for the algorithms that power BCIs can be difficult, expensive, or even impossible if paralyzed individuals are not able to produce sufficiently robust brain signals,” said Laurent Itti, a computer science professor and study co-author.

Another obstacle: the technology is user-specific and has to be trained from scratch for each person.

Generating synthetic neurological data

What if, instead, you could create synthetic neurological data, that is, computer-generated data that could “stand in” for data obtained from the real world?

Enter generative adversarial networks. Best known for creating deepfakes, GANs pit two neural networks against each other in a trial-and-error process: a generator produces candidate images and a discriminator tries to tell them from real ones, until the generator can turn out a virtually unlimited number of new, convincing images.
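The sketch below shows that adversarial trial-and-error in miniature on toy one-dimensional data. It is a generic GAN training loop for illustration, not the architecture used in the study; the network sizes and training settings are assumptions.

```python
import torch
import torch.nn as nn

# Toy GAN sketch (assumed setup, not the study's model): the generator learns
# to imitate samples from a hidden Gaussian by trying to fool the discriminator.
torch.manual_seed(0)
real_sampler = lambda n: torch.randn(n, 1) * 2.0 + 5.0   # "real" data: mean 5, std 2

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Discriminator step: label real samples 1 and generated samples 0.
    real = real_sampler(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator step: try to make the discriminator call fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # drifts toward the real mean of 5.0
```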

Lead author Shixian Wen, a Ph.D. student advised by Itti, wondered if GANs could also create training data for BCIs by generating synthetic neurological data indistinguishable from the real thing.

In an experiment described in the paper, the researchers trained a deep-learning spike synthesizer with one session of data recorded from a monkey reaching for an object. Then, they used the synthesizer to generate large amounts of similar — albeit fake — neural data.
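Conceptually, once such a synthesizer is trained, generating data amounts to feeding random noise through the generator and sampling spikes from the firing rates it outputs. The sketch below is a stand-in for that step, with a made-up linear "generator" in place of the trained network; the shapes and names are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical sketch of drawing synthetic spike trains from a generator.
# A random linear map stands in for the trained GAN generator network.
rng = np.random.default_rng(1)
n_neurons, n_bins, latent_dim = 96, 100, 16
fake_weights = rng.normal(scale=0.3, size=(latent_dim, n_neurons))  # stand-in for learned weights

def generate_spike_trains(n_trials):
    z = rng.normal(size=(n_trials, n_bins, latent_dim))   # latent noise per time bin
    rates = np.exp(z @ fake_weights)                      # non-negative firing rates
    return rng.poisson(rates)                             # synthetic binned spike counts

synthetic_trials = generate_spike_trains(500)   # 500 fake trials, shape (500, 100, 96)
print(synthetic_trials.shape, synthetic_trials.mean())
```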

The team then combined the synthesized data with small amounts of new real data — either from the same monkey on a different day, or from a different monkey — to train a BCI. This approach got the system up and running much faster than current standard methods. In fact, the researchers found that GAN-synthesized neural data improved a BCI’s overall training speed by up to 20 times.
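In outline, the augmentation step is simply pooling a small real dataset with a much larger synthetic one before fitting the decoder. The sketch below illustrates that idea with stand-in data; the dataset sizes and the ridge-regression decoder are assumptions for the example, not the study's setup.

```python
import numpy as np

# Hypothetical sketch: pad a small amount of new real data with a large batch
# of GAN-synthesized data before fitting the decoder (sizes are illustrative).
rng = np.random.default_rng(2)
X_real = rng.poisson(lam=3.0, size=(300, 96)).astype(float)   # e.g. under a minute of real recording
Y_real = rng.normal(size=(300, 2))
X_syn = rng.poisson(lam=3.0, size=(6000, 96)).astype(float)   # synthesizer output (stand-in here)
Y_syn = rng.normal(size=(6000, 2))

# Pool both sources and fit one ridge-regression decoder on the mixture.
X = np.vstack([X_real, X_syn])
Y = np.vstack([Y_real, Y_syn])
W = np.linalg.solve(X.T @ X + np.eye(96), X.T @ Y)
print(W.shape)  # (96, 2): maps binned spike counts to 2-D cursor velocity
```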

“Less than a minute’s worth of real data combined with the synthetic data works as well as 20 minutes of real data,” said Wen.

“It is the first time we’ve seen AI generate the recipe for thought or movement via the creation of synthetic spike trains. This research is a critical step towards making BCIs more suitable for real-world use.”

Additionally, after training on one experimental session, the system rapidly adapted to new sessions or subjects using only limited additional neural data.
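That kind of adaptation is, in spirit, a fine-tuning step: start from the model trained on the earlier session and continue training briefly on the limited new data rather than starting from scratch. The sketch below shows the pattern with stand-in data and a stand-in model; it is not the study's training procedure.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of adapting a pretrained model to a new session or subject
# with only a small amount of additional data (model and data are stand-ins).
pretrained = nn.Sequential(nn.Linear(96, 64), nn.ReLU(), nn.Linear(64, 2))  # stands in for a session-1 model
opt = torch.optim.Adam(pretrained.parameters(), lr=1e-4)                    # small learning rate for fine-tuning
loss_fn = nn.MSELoss()

X_new = torch.randn(200, 96)   # limited neural data from the new session
Y_new = torch.randn(200, 2)    # corresponding cursor targets

for step in range(100):        # brief fine-tuning pass instead of training from scratch
    opt.zero_grad()
    loss = loss_fn(pretrained(X_new), Y_new)
    loss.backward()
    opt.step()
```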

“That’s the big innovation here — creating fake spike trains that look just like they come from this person as they imagine doing different motions, then also using this data to assist with learning on the next person,” said Itti.

Beyond BCIs, GAN-generated synthetic data could lead to breakthroughs in other data-hungry areas of artificial intelligence by speeding up training and improving performance.

“When a company is ready to start commercializing a robotic skeleton, robotic arm or speech synthesis system, they should look at this method, because it might help them with accelerating the training and retraining,” said Itti. “As for using GAN to improve brain-computer interfaces, I think this is only the beginning.”

The paper was co-authored by Tommaso Furlanello, a USC Ph.D. graduate; Allen Yin of Facebook; M.G. Perich of the University of Geneva and L.E. Miller of Northwestern University.


Source: www.sciencedaily.com
