Enabling humanoid robot movement with imitation learning and mimicking of animal behaviors

Humanoid robots have improved greatly since Honda released the ASIMO robot in 2000, gaining abilities like grasping objects and using computer vision to detect objects. Despite these improvements, walking, jumping and other complex legged motions that humans perform fluidly have remained a challenge for roboticists.

In recent years, advances in robot learning and design have drawn on data and insights from animal behavior to help legged robots move in much more human-like ways.

Earlier this year, researchers from Google and UC Berkeley published work showing a robot learning to walk by mimicking a dog’s movements, using a technique called imitation learning. Separate work showed a robot teaching itself to walk through trial and error, using deep reinforcement learning algorithms.
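
To give a sense of what "trial and error" means in code, here is a toy sketch of a policy-gradient (REINFORCE) loop in PyTorch. The stub environment, reward function, and network sizes are all hypothetical stand-ins for illustration, not the setup used in the published work.

```python
# Toy REINFORCE loop: the robot tries actions, observes rewards, and
# reinforces the actions that led to higher episode returns.
import torch
import torch.nn as nn
from torch.distributions import Normal

class StubLeggedEnv:
    """Hypothetical placeholder environment: random states, and a
    made-up reward that simply favors small motor commands."""
    def reset(self):
        return torch.randn(8)
    def step(self, action):
        reward = -float(action.pow(2).sum())  # stand-in reward signal
        return torch.randn(8), reward

env = StubLeggedEnv()
policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(50):
    state, log_probs, rewards = env.reset(), [], []
    for t in range(20):
        # Sample an action from a Gaussian centered on the policy output.
        dist = Normal(policy(state), 1.0)
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        state, reward = env.step(action)
        rewards.append(reward)
    # Scale the log-probabilities by the total return, so actions from
    # high-reward episodes become more likely on the next attempt.
    loss = -sum(rewards) * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```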

Imitation learning has been applied in robotics to a range of tasks, such as OpenAI’s work teaching a robot to grasp objects from demonstrations, but its use in robot locomotion is new and encouraging. The approach takes input data generated by an expert performing the actions to be learned and combines it with deep learning techniques so the robot learns the movements more effectively.
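
As a rough illustration of that pipeline, the sketch below trains a small PyTorch policy network to reproduce an expert's actions via supervised regression, the simplest form of imitation learning (behavioral cloning). The state and action dimensions, and the random tensors standing in for retargeted motion-capture data, are hypothetical.

```python
# Minimal behavioral-cloning sketch: learn a mapping from robot state to
# motor commands by regressing onto an expert's demonstrated actions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 24, 12  # hypothetical joint-state and motor counts

# Policy network: maps the robot's current state to motor commands.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for demonstration data (e.g., motion capture of an animal,
# retargeted to the robot's joints); random tensors for illustration.
expert_states = torch.randn(1024, STATE_DIM)
expert_actions = torch.randn(1024, ACTION_DIM)

for epoch in range(100):
    # Supervised step: push the policy's output toward the expert's action.
    pred_actions = policy(expert_states)
    loss = nn.functional.mse_loss(pred_actions, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice this supervised step is usually combined with reinforcement learning, so the robot can recover when it drifts away from the states the expert demonstrated.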

Much of the recent work using imitation learning and broader deep learning techniques has involved small-scale robots, and many challenges remain in applying the same capabilities to life-size robots. Even so, these advances open new pathways for innovation in robot locomotion.

The inspiration from animal behavior has also extended to robot design: companies such as Agility Robotics and Boston Dynamics incorporate force-modeling techniques and full-body sensor integration to help their robots more closely mimic how animals execute complex movements.


