AsianScientist (Dec. 3, 2021) – Imagine you’re seated at the table for dinner and quick as a flash, something leaps up and snatches away your steak. You could probably, just by looking, instinctively tell that the blur of movement was your pet cat. A computer, on the other hand, would not have found it quite so easy.
While machines can now seamlessly figure out what they’re looking at—think self-driving cars or the facial recognition on your phone lock screen—teaching them how to do so is a challenge that dates back decades.
For machines to see like humans, they can be modeled on the same visual processing structures that human brains use. Few people are more familiar with this fact than Professor Kunihiko Fukushima, the creator of the neocognitron, an artificial neural network inspired by the mammalian brain.
Building the basics of sight
As one of the earliest neural networks that could see and learn, Fukushima's neocognitron is recognized as the foundation for modern developments in artificial intelligence (AI) and machine learning in neural networks. However, Fukushima didn't start his career with the intention of working on AI; after all, the term was only coined in 1956, and the field was still in its infancy at the time.
“I wasn’t interested in machine learning initially,” he admitted. “I was more interested in the brain and how information was processed in the brains of animals.”
After graduating with a Bachelor’s degree from Kyoto University’s Department of Electrical Engineering in 1958, Fukushima joined the Japanese Broadcasting Corporation (NHK). There, he studied the coding of TV signals at the NHK Technical Research Laboratories, later obtaining his PhD in electrical engineering from Kyoto University as well.
By 1965, Fukushima joined the newly established Broadcasting Science Research Laboratories at the NHK, where he dived into his real research interest: the mechanics of visual and auditory information processing.
That same year, alongside a team of engineers and biologists, Fukushima sought to understand how the visual and auditory signals broadcast over radio and television were received and processed by the biological brain, the ultimate destination of such signals.
Focusing on the mammalian visual cortex, or the brain structure responsible for processing visual stimuli, he started building models of the neuronal network in the visual cortex to further understand what happened in the brain as it viewed images.
One of the early neural networks Fukushima designed was based on a model of a cat’s primary visual cortex in which two types of cells, simple and complex, play a role in pattern recognition. By mimicking these cells and arranging them in a hierarchical manner, the nascent neural network managed to recognize images and patterns.
When presented with an image, the cells would react to curved, but not straight, lines. Operating on this principle, Fukushima's model could reliably recognize and analyze the lines of an image, comparing them against an internal data set of images to determine what it was.
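In modern terms, the simple cells act as local feature detectors and the complex cells pool their responses to tolerate small shifts in position. The following is a minimal Python sketch of that two-stage, hierarchical idea; the arc-shaped template, toy image, and pooling size are illustrative assumptions, not the parameters of Fukushima's model:

    import numpy as np

    def s_cell_layer(image, template):
        """Simple-cell stage: respond wherever a local patch matches a feature template."""
        h, w = image.shape
        kh, kw = template.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + kh, j:j + kw]
                out[i, j] = max(0.0, float(np.sum(patch * template)))  # rectified response
        return out

    def c_cell_layer(responses, pool=2):
        """Complex-cell stage: pool nearby simple-cell responses for tolerance to small shifts."""
        h, w = responses.shape
        out = np.zeros((h // pool, w // pool))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = responses[i * pool:(i + 1) * pool, j * pool:(j + 1) * pool].max()
        return out

    # Toy example: a small arc-shaped template applied to an image containing that arc.
    image = np.zeros((8, 8))
    image[2, 4] = image[3, 3] = image[4, 4] = 1.0   # a tiny curved stroke
    template = np.array([[0.0, 1.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])          # arc-fragment template, purely illustrative
    print(c_cell_layer(s_cell_layer(image, template)))

In the output, the strongest response survives the pooling step even if the stroke is nudged a pixel or two, which is the kind of positional tolerance the complex cells provide.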
How to train an artificial brain
While recognizing the right image was a feat in itself, Fukushima wanted his model to also have the ability to learn: identifying new images and patterns that the researchers had not manually trained it to recognize.
By increasing the number of layers in the network and equipping it with a form of unsupervised learning called competitive learning, Fukushima created a network that could learn to recognize patterns on its own, without supervision.
“We gave it the name cognitron, combining cognition and -tron. The name is similar to the perceptron, which combines perception and -tron. The perceptron—popular in the 1950s and 1960s—uses a supervised learning algorithm,” he said.
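Competitive learning of this kind is often illustrated with a simple winner-take-all rule: whichever unit responds most strongly to an input pulls its weights toward that input, so different units gradually specialise on different recurring patterns. The sketch below is a generic, minimal version of that idea; the unit count, learning rate, and toy data are illustrative assumptions, not the cognitron's actual parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    def competitive_learning(inputs, n_units=3, lr=0.1, epochs=20):
        """Winner-take-all learning: for each input, only the most strongly
        responding unit updates its weights, moving them toward that input,
        so units come to specialise on different patterns without any labels."""
        weights = rng.random((n_units, inputs.shape[1]))
        weights /= np.linalg.norm(weights, axis=1, keepdims=True)
        for _ in range(epochs):
            for x in inputs:
                winner = np.argmax(weights @ x)                 # strongest response wins
                weights[winner] += lr * (x - weights[winner])   # only the winner learns
                weights[winner] /= np.linalg.norm(weights[winner])
        return weights

    # Toy data: noisy copies of two distinct binary patterns.
    a = np.array([1.0, 1.0, 0.0, 0.0])
    b = np.array([0.0, 0.0, 1.0, 1.0])
    data = np.array([a, b] * 10) + 0.05 * rng.random((20, 4))
    print(competitive_learning(data).round(2))

After training, two of the units end up aligned with the two underlying patterns, with no labels or teacher signal involved, in contrast to the perceptron's supervised learning.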
His next step was to advance the cognitron's ability to recognize patterns that had been scaled, shifted, or obscured in some way.
It was a formidable problem, and so Fukushima found himself returning to his previous research on the physiology of the brain for inspiration: namely the optic tectum, a visual processing structure in the brain that receives input directly from the retina.
He was intrigued by how the neural connections between the retina and optic tectum would self-regenerate after being severed, even if half the tectum or retina were removed along with the initial cut.
“This inspired me to introduce a seed cell that could generate a network, allowing the neural network to self-organize to maintain the condition of shared connections,” Fukushima explained.
These modifications made the original cognitron larger, deeper and capable of self-organizing, giving rise to the neocognitron, a multi-layered neural network with more advanced image recognition capabilities. In 1980, Fukushima published his design of the neocognitron, laying the groundwork for the artificial neural networks of the decades to come.
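One way to picture the shared connections that the seed cell maintains is as a single set of weights, a "cell plane," applied at every position of the image, so that whatever is learned at one location is immediately available everywhere else. The sketch below is a loose, assumed illustration combining that weight-sharing idea with winner-take-all learning; it is not the neocognitron's actual update equations, and the plane count, patch size, and toy images are invented for the example:

    import numpy as np

    rng = np.random.default_rng(1)

    def extract_patches(image, k=3):
        """All k-by-k patches of an image, flattened into vectors."""
        h, w = image.shape
        return np.array([image[i:i + k, j:j + k].ravel()
                         for i in range(h - k + 1) for j in range(w - k + 1)])

    def train_cell_planes(images, n_planes=4, k=3, lr=0.2, epochs=10):
        """Each 'cell plane' is one shared weight vector applied at every position.
        When a plane responds most strongly at any position, that single shared
        vector is updated, so learning at one location generalises to all."""
        planes = rng.random((n_planes, k * k))
        planes /= np.linalg.norm(planes, axis=1, keepdims=True)
        for _ in range(epochs):
            for img in images:
                patches = extract_patches(img, k)
                responses = patches @ planes.T                 # (positions, planes)
                pos, plane = np.unravel_index(responses.argmax(), responses.shape)
                planes[plane] += lr * (patches[pos] - planes[plane])
                planes[plane] /= np.linalg.norm(planes[plane])
        return planes

    # Toy training set: small images, each containing one short diagonal stroke.
    images = []
    for _ in range(12):
        img = 0.05 * rng.random((8, 8))
        r = rng.integers(0, 5)
        for t in range(3):
            img[r + t, r + t] = 1.0
        images.append(img)
    print(train_cell_planes(images).round(2))

Because every position reuses the same weights, the stroke is detected wherever it appears, which is the shift tolerance Fukushima was after.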
Listen to one, understand ten
The ability of such networks to recognize images and patterns now plays a role in a vast range of modern applications, from cameras to security systems and self-driving cars. But in the neocognitron's early days, Fukushima didn't expect that machine learning or AI as a whole would become so widespread.
“AI in the 1980s was quite different from what it is today,” he recalled. “At the time, AI was only a minor technique.”
Now, however, Fukushima expects the technology's applications and capabilities to continue growing.
“In Japan, we have a common saying, ‘listen to one, understand ten,’ which means hearing one word is enough to derive the meaning of ten. Right now, AI is listening to billions and understanding millions,” he said. “But the goal for AI in the future is to listen to millions and understand billions.”
Likewise, Fukushima himself has no intention of slowing down. At 85 years old, he remains an active and prolific researcher, having published a paper nearly every year since 1961 (most recently in January 2021), and is currently developing a network that requires only a small amount of training data.
At the same time, the humble trailblazer continues to work on his original research mission: to understand the mechanism of the brain, with the help of the same AI techniques he had pioneered.
A researcher whose career has spanned over half a century, and whose impact will continue to resonate for decades more, Fukushima advises young researchers to always consider research from disciplines beyond their own, especially when confronted with problems that need solving.
“Try writing a book,” he concluded. “It will naturally force you to do so.”
———
Copyright: Asian Scientist Magazine; Photo: Kunihiko Fukushima.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.