Deep Learning As A ‘Gait-Way’ To Identity

Using Siamese network architectures for deep learning, researchers in Japan have designed an improved gait recognition method to identify people from video surveillance records.

AsianScientist (Nov. 20, 2017) – Deep learning has enabled scientists in Japan to identify people using their gait patterns. These findings have been published in IEEE Transactions on Circuits and Systems for Video Technology.

Biometric-based person recognition methods have been extensively explored for various applications, such as access control, surveillance and forensics. Biometric recognition refers to any means by which a person can be uniquely identified through biological traits. In addition to facial features, fingerprints and hand geometry, an individual’s gait—his or her manner of walking—can also be used as a biometric marker.

Gait is a practical trait for surveillance and forensics because it can be captured at a distance on video. In fact, gait recognition has already been used in practical cases in criminal investigations. However, gait recognition is susceptible to intra-subject variations, such as view angle, clothing, walking speed, shoes and carrying status.

In this study, a team of researchers at Osaka University harnessed deep learning frameworks to improve gait recognition. Although convolutional neural networks are widely used in computer vision, pattern recognition and biometrics, the researchers noted that existing frameworks do not distinguish between verification and identification tasks, and can be confounded by spatial displacement, that is, when the subject's position shifts within the image.

Hence, the researchers employed Siamese network architectures, which are relatively insensitive to spatial displacement. In these networks, each gait image first passes through convolution and max pooling layers that reduce its dimensionality and extract hidden features; only at the final layer is the difference between a pair of gait images computed to judge whether they belong to the same person. Siamese network architectures can therefore be expected to perform better under considerable view differences.
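To make the idea concrete, the following is a minimal PyTorch sketch of such a Siamese setup: two gait images pass through the same weight-shared branch of convolution and max pooling layers, and only the resulting feature vectors are compared at the end. The layer sizes, feature dimension, distance measure and image dimensions are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal Siamese CNN sketch for comparing two gait images (illustrative only;
# not the exact architecture from the Osaka University study).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitBranch(nn.Module):
    """One branch: convolution + max pooling layers that reduce a gait
    image to a compact feature vector."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.fc = nn.LazyLinear(feature_dim)  # infers the flattened input size

    def forward(self, x):
        x = self.features(x)
        return self.fc(torch.flatten(x, start_dim=1))

class SiameseGaitNet(nn.Module):
    """Both images go through the same (weight-shared) branch; their
    difference is taken only at the final layer."""
    def __init__(self):
        super().__init__()
        self.branch = GaitBranch()

    def forward(self, img_a, img_b):
        feat_a = self.branch(img_a)
        feat_b = self.branch(img_b)
        # Euclidean distance between feature vectors: small for the same
        # person, large for different people.
        return F.pairwise_distance(feat_a, feat_b)

# Example: two batches of hypothetical 128x88 single-channel gait images.
net = SiameseGaitNet()
dist = net(torch.randn(4, 1, 128, 88), torch.randn(4, 1, 128, 88))
```

In practice, a network like this would typically be trained so that the output distance is small for pairs of the same person and large otherwise, for instance with a contrastive-style loss; the study itself evaluates several input/output architectures for this comparison step.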

“We conducted experiments for cross-view gait recognition and confirmed that the proposed architectures outperformed the state-of-the-art benchmarks in accordance with their suitable situations of verification and identification tasks, as well as view differences,” said Professor Yasushi Makihara of Osaka University, a coauthor of the study.

Because spatial displacement is caused not only by view differences but also by differences in walking speed, carrying status, clothing and other factors, the researchers plan to further evaluate their method on gait recognition under spatial displacement caused by these other covariates.


The article can be found at: Takemura et al. (2017) On Input/Output Architectures for Convolutional Neural Network-Based Cross-View Gait Recognition.

———

Source: Osaka University; Photo: Pexels.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

