Reinforcement Learning for Collision-free Flight Exploiting Deep Collision Encoding
This work contributes a novel deep navigation policy that enables collision-free flight of aerial robots based on a modular approach exploiting deep collision encoding and reinforcement learning. The proposed solution builds upon a deep collision encoder that is trained on both simulated and real depth images using supervised learning such that it compresses the high-dimensional depth data to a low-dimensional latent space encoding collision information while accounting for the robot size. This compressed encoding is combined with an estimate of the robot’s odometry and the desired target location to train a deep reinforcement learning navigation policy that offers low-latency computation and robust sim2real performance. A set of simulation and experimental studies in diverse environments is conducted and demonstrates the efficiency of the emerged behavior and its resilience in real-life deployments.
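The modular pipeline described above (depth image → latent collision encoding → policy input together with odometry and target) can be sketched as below. All dimensions, the random-weight networks, and the function names are illustrative placeholders chosen for this sketch, not the paper's trained models or actual interface.

```python
import numpy as np

# Assumed dimensions -- not specified in the abstract.
DEPTH_SHAPE = (270, 480)   # depth image resolution (placeholder)
LATENT_DIM = 64            # size of the collision-encoding latent (placeholder)
ODOM_DIM = 6               # e.g. linear + angular velocity estimate (placeholder)
TARGET_DIM = 3             # desired target location (placeholder)

rng = np.random.default_rng(0)

def collision_encoder(depth_image, w):
    """Toy stand-in for the trained deep collision encoder: compresses a
    high-dimensional depth image into a low-dimensional latent vector that,
    in the real system, encodes collision information while accounting for
    the robot size."""
    x = depth_image.reshape(-1)
    return np.tanh(w @ x)

def navigation_policy(latent, odom, target, w):
    """Toy stand-in for the RL navigation policy: maps the latent collision
    encoding, the odometry estimate, and the target location to an action,
    e.g. a body-frame velocity and yaw-rate command."""
    obs = np.concatenate([latent, odom, target])
    return np.tanh(w @ obs)

# Random weights only illustrate the data flow; they are not trained.
w_enc = rng.normal(0.0, 0.01, size=(LATENT_DIM, DEPTH_SHAPE[0] * DEPTH_SHAPE[1]))
w_pol = rng.normal(0.0, 0.1, size=(4, LATENT_DIM + ODOM_DIM + TARGET_DIM))

depth = rng.uniform(0.2, 10.0, size=DEPTH_SHAPE)  # synthetic depth, metres
odom = np.zeros(ODOM_DIM)
target = np.array([5.0, 0.0, 1.0])                # goal in the body frame

z = collision_encoder(depth, w_enc)
action = navigation_policy(z, odom, target, w_pol)
print(z.shape, action.shape)
```

The point of the sketch is the modularity: the encoder can be trained with supervised learning on depth data alone, while the low-dimensional latent keeps the reinforcement-learning policy's observation space small, which is what enables the low-latency inference the abstract highlights.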
Further reading
- Access the paper on arXiv.org