Bees’ Perfect Landing Inspires Robot Aircraft
Scientists have discovered how the honeybee can land anywhere with precision – and the knowledge may help build incredible robot aircraft.
Asian Scientist (Nov. 7, 2013) – Scientists have discovered how the honeybee can land anywhere with utmost precision and grace – and the knowledge may soon help build incredible robot aircraft.
By sensing how rapidly their destination ‘zooms in’ as they fly towards it, honeybees can control their flight speed in time for a perfect touchdown without needing to know how fast they’re flying or how far away the destination is.
The discovery, published in the Proceedings of the National Academy of Sciences, may advance the design of cheaper, lighter robot aircraft that only need a video camera to land safely on surfaces of any orientation, says Professor Mandyam Srinivasan at the University of Queensland.
“Orchestrating a safe landing is one of the greatest challenges for flying animals and airborne vehicles,” says Professor Srinivasan.
“To achieve a smooth landing, it’s essential to slow down in time for the speed to be close to zero at the time of touchdown.”
Humans can judge their distance from an object using stereo vision – because their two eyes, which are separated by about 65 mm, capture slightly different views of the object. However, because insects’ eyes are set so close together, they can’t do the same thing, explains Professor Srinivasan.
“So in order to land on the ground, they use their eyes to sense the speed of the image of the ground beneath them,” he says.
“By keeping the speed of this image constant, they slow down automatically as they approach the ground, stopping just in time for touchdown.”
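This ground-landing strategy can be sketched numerically. The simulation below is only an illustration of the principle – forward speed is set so that the apparent speed of the ground image (forward speed divided by height) stays constant – and all parameter values are assumptions, not measured bee data:

```python
import math

# Illustrative sketch: hold the ground image's apparent speed (v / h)
# constant during a descent along a fixed glide angle.
# Parameter values are assumed for illustration only.
OMEGA = 2.0    # target image angular speed, rad/s (assumed)
GLIDE = 0.5    # fixed descent angle, rad (assumed)
DT = 0.01      # simulation time step, s

h = 1.0        # height above ground, m
t = 0.0
while h > 0.001:
    v = OMEGA * h                   # forward speed that keeps v / h constant
    h -= v * math.tan(GLIDE) * DT   # sink rate fixed by the glide angle
    t += DT

# Height and forward speed decay together, so speed is near zero at touchdown.
print(f"touchdown after ~{t:.1f} s, final forward speed {OMEGA * h:.4f} m/s")
```

Because speed is tied to height, the deceleration happens automatically – no explicit speed or altitude measurement is needed, only the visually observable image speed.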
However, in the natural world, bees only occasionally land on flat, horizontal surfaces. So it’s important to know how they land on rough terrain, ridges, vertical surfaces or flowers with the same delicacy and grace.
In the study, researchers trained honeybees to land on discs that were placed vertically, and filmed them with high-speed video cameras.
“The discs carried spiral patterns that could be rotated at various speeds by a motor,” says Professor Srinivasan.
When the researchers spun the spiral, making it appear to expand, the bees ‘hit the brakes’ because they thought they were approaching the board much faster than they really were.
When they spun the spiral the other way to make it appear to contract, the bees sped up, sometimes crashing into the disc. According to the researchers, this shows that landing bees keep track of how rapidly the image ‘zooms in’, and they adjust their flight speed to keep this ‘zooming rate’ constant.
“Imagine you’re in space and you don’t know how far away you are from a star,” Professor Srinivasan says.
“As you fly towards it, the other stars appear to ‘move away’ and it becomes your focus. When the star starts to ‘zoom in’ faster than before, you slow down to keep the ‘zooming rate’ constant.
“It’s the same for bees – when they’re about to reach a flower, the image of the flower will expand faster than usual. This causes them to slow down more and more as they get closer, eventually stopping when they reach it.”
The researchers also developed a mathematical model for guiding landings, based on the bees’ landing strategy.
Professor Srinivasan says unlike all current engineering-based methods, this visually guided technique does not require knowledge about the distance to the surface or the speed at which the surface is approached.
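The researchers’ actual model is not reproduced in this article, but the general principle can be sketched: commanding approach speed proportional to distance keeps the relative expansion rate (speed divided by distance) constant, so speed decays smoothly to zero at contact. In the sketch below the distance variable exists only to generate the geometry – a real controller would measure the expansion rate directly from camera images – and the parameter values are assumptions:

```python
# Illustrative sketch of a constant-expansion-rate landing law.
# Values are assumed; a real system would estimate the expansion
# rate from optical flow, never needing distance or speed directly.
R = 1.5        # target relative expansion rate, 1/s (assumed)
DT = 0.001     # simulation time step, s
d = 2.0        # distance to the surface, m (simulation-only ground truth)

v = R * d
while d > 0.001:
    v = R * d      # speed command that keeps the 'zooming rate' v / d at R
    d -= v * DT    # close the remaining distance

# Distance decays exponentially (d0 * exp(-R * t)), so v -> 0 at contact.
print(f"final approach speed: {v:.4f} m/s")
```

The appeal of this law for aircraft is exactly what the article states: the quantity being regulated, v/d, is directly observable as the image’s expansion rate, so neither distance nor speed ever has to be measured.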
According to Professor Srinivasan, a limitation of current robot aircraft is their need for technology such as radar, sonar or laser beams to work out how far away the surface is. Not only is this equipment expensive and cumbersome, its active radiation can also give the aircraft’s position away.
In contrast, the researchers’ bee-inspired, vision-based system requires only a simple video camera, like those found in smartphones.
The article can be found at: Baird E et al. (2013) A Universal Strategy for Visually Guided Landing, Proceedings of the National Academy of Sciences.
Source: University of Queensland; Photo: Tang Yew Chung.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.