Artificial Intelligence Used To Decode Brain Signals And Read Minds

Researchers have used artificial intelligence to predict what a person sees or imagines based on brain scans.

AsianScientist (Jun. 12, 2017) – Using neural network-based artificial intelligence (AI), a team of researchers in Japan has developed a way of decoding and predicting what a person is seeing or imagining based on fMRI scans. Their results have been published in Nature Communications.

Scanning the brain to decode the contents of the mind has been a subject of intense research interest for some time. As studies have progressed, scientists have gradually been able to interpret what test subjects see, remember, imagine, and even dream.

There have been significant limitations, however, beginning with the need to extensively catalog each subject’s unique brain activity patterns, which are then matched against a small set of pre-programmed images. These procedures require subjects to undergo lengthy and expensive fMRI sessions.

“When we gaze at an object, our brains process these patterns hierarchically, starting with the simplest and progressing to more complex features,” explained team leader Yukiyasu Kamitani, a professor at Kyoto University.

“The AI we used works on the same principle. Named ‘Deep Neural Network,’ or DNN, it was trained by a group now at Google.”

The team from Kyoto University and ATR (Advanced Telecommunications Research) Computational Neuroscience Laboratories discovered that brain activity patterns can be decoded, or translated, into the signal patterns of simulated neurons in the DNN when both the person and the network are presented with the same image.

Additionally, the researchers found that lower and higher visual areas in the brain were better at decoding the lower and higher layers of the DNN, respectively, revealing a homology between the human brain and the neural network.
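As a rough illustration of this decoding step (not the authors' actual pipeline), one could train a linear model that maps a pattern of fMRI voxel responses to the activation values of units in a given DNN layer. The data shapes, the use of scikit-learn's Ridge regression, and all variable names below are illustrative assumptions.

```python
# Minimal sketch of feature decoding: predicting DNN-layer unit activations
# from fMRI voxel patterns with a linear model. The synthetic data, shapes,
# and choice of Ridge regression are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data:
#   fmri_train: one row of voxel responses per presented image
#   dnn_train:  the DNN's unit activations (one layer) for the same images
n_images, n_voxels, n_units = 1000, 500, 64
fmri_train = rng.standard_normal((n_images, n_voxels))
dnn_train = rng.standard_normal((n_images, n_units))

# Fit one linear decoder for this layer (a real pipeline would fit one per layer).
decoder = Ridge(alpha=1.0)
decoder.fit(fmri_train, dnn_train)

# Given a new brain scan, predict the DNN feature pattern it corresponds to.
fmri_test = rng.standard_normal((1, n_voxels))
decoded_features = decoder.predict(fmri_test)   # shape: (1, n_units)
```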

“We tested whether a DNN signal pattern decoded from brain activity can be used to identify seen or imagined objects from arbitrary categories,” explained Kamitani. “The decoder takes neural network patterns and compares these with image data from a large database. Sure enough, the decoder could identify target objects with high probability.”
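In spirit, the identification step Kamitani describes could look something like the following sketch: the decoded feature vector is compared, here by Pearson correlation, against precomputed DNN features for candidate objects in a database, and the best-matching entry is taken as the identified object. The database contents, the category names, and the correlation-based scoring are assumptions for illustration, not the published method.

```python
# Minimal sketch of the identification step: compare a decoded DNN feature
# vector against a database of candidate-object features and pick the best
# match. Database contents and correlation-based scoring are assumptions.
import numpy as np

def identify(decoded_features: np.ndarray,
             database_features: dict[str, np.ndarray]) -> str:
    """Return the database entry whose feature vector best correlates
    with the decoded feature vector."""
    scores = {
        name: np.corrcoef(decoded_features, feats)[0, 1]
        for name, feats in database_features.items()
    }
    return max(scores, key=scores.get)

# Hypothetical database of DNN features for a few object categories.
rng = np.random.default_rng(1)
n_units = 64
database = {name: rng.standard_normal(n_units)
            for name in ["dog", "car", "chair", "banana"]}

decoded = rng.standard_normal(n_units)   # would come from a decoder like the one above
print("Identified object:", identify(decoded, database))
```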

As brain decoding and AI development advance, Kamitani hopes to improve the image identification accuracy of their technique.

“Bringing AI research and brain science closer together could open the door to new brain-machine interfaces, perhaps even bringing us closer to understanding consciousness itself,” he added.



The article can be found at: Horikawa & Kamitani (2017) Generic Decoding of Seen and Imagined Objects Using Hierarchical Visual Features. Nature Communications.

———

Source: Kyoto University.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Asian Scientist Magazine is an award-winning science and technology magazine that highlights R&D news stories from Asia to a global audience. The magazine is published by Singapore-headquartered Wildtype Media Group.
