Smart Glasses Now Come With Virtual Keyboard Feature

The K-Glass 3 smart glasses enable users to type on a virtual keyboard and even play an augmented reality piano.

AsianScientist (Mar. 7, 2016) – Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed a new version of their K-Glass smart glasses that boasts augmented reality typing capabilities.

This latest version, which the researchers are calling K-Glass 3, provides users with a virtual text keyboard for Internet surfing and even a set of playable piano keys.

Currently, most wearable head-mounted displays (HMDs) suffer from a lack of rich user interfaces, short battery lives and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and not wearable smart glasses.

Recently, gaze recognition was proposed for HMDs and was included in an earlier version, K-Glass 2. But gaze alone is insufficient for a natural user interface (UI) and user experience (UX) that includes gesture recognition, because of its limited interactivity and lengthy gaze-calibration time, which can take up to several minutes.

As a solution, Professor Yoo Hoi-Jun and his team from KAIST’s Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs just by using bare hands.

K-Glass 3 can detect hands and recognize their movements to provide users with augmented reality applications such as a virtual text or piano keyboard. Credit: KAIST

The stereo-vision camera, located on the front of K-Glass 3, works in a manner similar to three-dimensional (3D) sensing in human vision. The camera's two lenses, displaced horizontally from one another much like the left and right eyes that produce depth perception, capture the same objects or scenes and combine the two different images to extract the spatial depth information needed to reconstruct 3D environments.
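For readers curious about the underlying geometry, the depth-from-disparity relationship used in stereo vision can be shown in a minimal sketch. This is only an illustration of the standard pinhole stereo model, not KAIST's implementation; the focal length and baseline values below are hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Convert a stereo disparity map (pixels) to depth (meters).

    Standard pinhole stereo model: depth = focal_length * baseline / disparity.
    Pixels with zero disparity (no match) map to infinity.
    """
    disparity = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        depth = (focal_length_px * baseline_m) / disparity
    return depth

# Hypothetical values: a 700-pixel focal length and a 6 cm lens baseline.
disparity_map = np.array([[35.0, 14.0],
                          [7.0, 70.0]])
print(depth_from_disparity(disparity_map, focal_length_px=700, baseline_m=0.06))
# Nearer objects produce larger disparities and therefore smaller depths.
```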

The research team adopted deep learning multi-core technology dedicated to mobile devices to recognize users' gestures from the depth information. This technology greatly improved the glasses' recognition accuracy for images and speech, while shortening the time needed to process and analyze data.
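The article does not describe the network itself. The toy sketch below only illustrates the general idea of classifying a hand gesture from a depth patch: the gesture labels and patch size are hypothetical, and randomly initialized weights stand in for a model that would in practice be trained on labeled depth images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gesture labels; the actual K-Glass 3 gesture set is not described.
GESTURES = ["point", "pinch", "open_palm"]

# Randomly initialized weights stand in for a trained model.
W1 = rng.normal(scale=0.1, size=(32 * 32, 64))
b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, len(GESTURES)))
b2 = np.zeros(len(GESTURES))

def classify_gesture(depth_patch):
    """Classify a 32x32 depth patch of a hand with a tiny two-layer network."""
    x = depth_patch.reshape(-1)
    x = (x - x.mean()) / (x.std() + 1e-6)   # normalize depth values
    h = np.maximum(0, x @ W1 + b1)          # ReLU hidden layer
    logits = h @ W2 + b2
    return GESTURES[int(np.argmax(logits))]

# Fake depth patch (meters) standing in for a hand crop from the stereo camera.
patch = rng.uniform(0.3, 0.8, size=(32, 32))
print(classify_gesture(patch))
```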

Yoo said, “We have succeeded in fabricating a low-power multi-core processor that consumes only 126.1 milliwatts of power with a high efficiency rate. It is essential to develop a smaller, lighter, and low-power processor if we want to incorporate the widespread use of smart glasses and wearable devices into everyday life.

“K-Glass 3’s more intuitive UI and convenient UX permit users to enjoy enhanced AR experiences such as a keyboard or a better, more responsive mouse.”

———

Source: KAIST.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Asian Scientist Magazine is an award-winning science and technology magazine that highlights R&D news stories from Asia to a global audience. The magazine is published by Singapore-headquartered Wildtype Media Group.
