Run, skeleton, run: skeletal model in a physics-based simulation

Abstract

In this paper, we present our approach to the physics-based reinforcement learning challenge "Learning to Run", whose objective is to train a physiologically based human model to navigate a complex obstacle course as quickly as possible. The environment is computationally expensive, stochastic, and has a high-dimensional continuous action space. We benchmark state-of-the-art policy-gradient methods and test several improvements, such as layer normalization, parameter noise, and action and state reflection, to stabilize training and improve its sample efficiency. We find that the Deep Deterministic Policy Gradient (DDPG) method is the most efficient for this environment and that the improvements we introduce help to stabilize training. The learned models are able to generalize to new physical scenarios, e.g. different obstacle courses.
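The action and state reflection mentioned above can be illustrated with a small sketch. The idea (a common augmentation for bilaterally symmetric bodies) is that swapping the left- and right-side channels of both the state and the action yields a second valid transition for the replay buffer, roughly doubling the data per environment step. The index layout below is purely hypothetical, not the actual osim-rl observation layout:

```python
import numpy as np

# Assumed toy layout: first half of each vector holds left-side channels,
# second half holds right-side channels (illustrative only).
def reflect(vec: np.ndarray) -> np.ndarray:
    """Swap the left/right halves of a state or action vector."""
    half = len(vec) // 2
    out = vec.copy()
    out[:half] = vec[half:]
    out[half:] = vec[:half]
    return out

def augment(state, action, reward, next_state):
    """Return the original transition plus its mirrored counterpart."""
    return [
        (state, action, reward, next_state),
        (reflect(state), reflect(action), reward, reflect(next_state)),
    ]

s  = np.array([1.0, 2.0, 3.0, 4.0])   # toy 4-dim state
a  = np.array([0.1, 0.2])             # toy 2-dim action
s2 = np.array([5.0, 6.0, 7.0, 8.0])
transitions = augment(s, a, 0.5, s2)
print(transitions[1][0])  # mirrored state: [3. 4. 1. 2.]
```

Both transitions would then be pushed into the replay buffer, so the off-policy learner (e.g. DDPG) sees mirrored experience without extra simulation cost.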

Publication
In Association for the Advancement of Artificial Intelligence
Sergey Kolesnikov
R&D Lead
