AsianScientist (Nov. 26, 2018) – A robot named Affetto is helping researchers increase the diversity and accuracy of facial expressions in machines. They report their findings in Frontiers in Robotics and AI.
Robots have so far been unable to mimic the huge range and asymmetry of natural human facial movements. The materials used to make the 'skin' of robots, as well as the intricate engineering and mathematics that drive robotic motion, need to be improved if more expressive robots are to be realized.
In the present study, a trio of researchers at Osaka University, Japan, has developed a method to make their robot express greater ranges of emotion on its face.
“Surface deformations are a key issue in controlling android faces,” said study co-author Professor Minoru Asada of Osaka University. “Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with. We sought a better way to measure and control it.”
The researchers investigated 116 different facial points on Affetto to measure its movements and expressions in three dimensions. Facial points were underpinned by so-called deformation units. Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as lowering or raising of part of a lip or eyelid.
Measurements from the deformation units were then fed into a mathematical model to quantify their surface motion patterns. Using this system, the researchers could adjust each deformation unit for precise control of Affetto's facial surface motions.
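The article does not specify the mathematical model used, but the idea of quantifying how each deformation unit moves the measured facial points can be sketched with a simple linear mapping fitted by least squares. Everything below (the number of units, the linear assumption, the simulated data) is illustrative, not the study's actual method; only the 116 facial points come from the article.

```python
import numpy as np

# Hypothetical sketch: relate deformation-unit commands to measured
# 3D displacements of facial points via a linear model. The linear
# assumption and all sizes except the 116 points are illustrative.

rng = np.random.default_rng(0)

n_units = 5        # hypothetical number of deformation units
n_points = 116     # facial measurement points, as in the article
n_trials = 200     # hypothetical number of measurement trials

# Unknown "true" mapping from unit commands to the stacked (x, y, z)
# displacements of every facial point.
true_map = rng.normal(size=(3 * n_points, n_units))

# Simulated experiments: random unit commands plus measurement noise.
commands = rng.uniform(0.0, 1.0, size=(n_trials, n_units))
displacements = (commands @ true_map.T
                 + 0.01 * rng.normal(size=(n_trials, 3 * n_points)))

# Least-squares fit recovers each unit's surface motion pattern:
# column j of fitted_map is the displacement field produced by
# actuating unit j alone at full command.
fitted_map, *_ = np.linalg.lstsq(commands, displacements, rcond=None)
fitted_map = fitted_map.T  # shape: (3 * n_points, n_units)

# The fitted model predicts the surface motion for any command mix,
# which is what enables precise adjustment of each unit.
test_command = np.array([0.2, 0.0, 0.8, 0.5, 0.1])
predicted = fitted_map @ test_command
```

With such a model in hand, the effect of tuning one deformation unit can be predicted before moving the physical skin, which is one plausible route to the "precise control" the article describes.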
“Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms,” said study first author Assistant Professor Hisashi Ishihara of Osaka University. “Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning.”
Source: Osaka University.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.