Improvement of Dynamic Window Approach Using Reinforcement Learning in Dynamic Environments
Jinseok Kim and Gi-Hun Yang*
International Journal of Control, Automation, and Systems, vol. 20, no. 9, pp. 2983-2992, 2022
Abstract : In environments containing dynamic or unknown obstacles, a robot must use collision avoidance algorithms to protect both itself and nearby people. Recently, many researchers have applied machine learning techniques to obstacle avoidance in dynamic environments. However, these studies fall short of providing a velocity model for actual driving because of their limited sets of motions and the need to retune parameters whenever the environment changes. This paper proposes an algorithm that combines the dynamic window approach (DWA) with deep reinforcement learning to build a velocity model for obstacle avoidance. The method uses a designed learning module to add corrections to, or subtract them from, the linear and angular velocities computed by the DWA. With this configuration, many robot motions can be generated even from a limited set of actions. In our experiments, applying the learning module yielded a 23.7% higher obstacle-avoidance rate than DWA alone. The experimental results verified that the proposed method improves obstacle-avoidance performance across multiple dynamic environments without any additional tuning, and confirmed that it can also be applied to real robots.
Keywords : Dynamic window approach (DWA), mobile robot, obstacle avoidance, reinforcement learning.
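The mechanism the abstract describes, DWA selecting a (v, ω) command from a dynamic window and a learned module then adding small corrections to it, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window limits, objective weights, grid resolution, and the `learned_delta` placeholder (which a trained policy network would replace) are all assumptions.

```python
import math

def dynamic_window(v, w, cfg):
    """Admissible (v, w) ranges reachable within one control step,
    given current velocities and acceleration limits."""
    return (
        max(cfg["v_min"], v - cfg["a_lin"] * cfg["dt"]),
        min(cfg["v_max"], v + cfg["a_lin"] * cfg["dt"]),
        max(-cfg["w_max"], w - cfg["a_ang"] * cfg["dt"]),
        min(cfg["w_max"], w + cfg["a_ang"] * cfg["dt"]),
    )

def score(v, w, goal_heading, obstacle_dist, cfg):
    """Classic DWA objective: weighted heading, clearance, and velocity terms."""
    heading = math.pi - abs(goal_heading - w * cfg["dt"])
    return (cfg["alpha"] * heading
            + cfg["beta"] * obstacle_dist
            + cfg["gamma"] * v)

def learned_delta(state):
    """Placeholder for the learning module: small additive corrections
    (dv, dw) to the DWA-selected velocities. A trained RL policy would
    produce these from the robot's observation; here it is a stub."""
    return 0.0, 0.0

def select_velocity(v, w, goal_heading, obstacle_dist, cfg, n=11):
    """Evaluate an n-by-n grid over the dynamic window, pick the best
    (v, w), then apply the learned correction."""
    v_lo, v_hi, w_lo, w_hi = dynamic_window(v, w, cfg)
    best, best_vw = -math.inf, (v, w)
    for i in range(n):
        for j in range(n):
            vi = v_lo + (v_hi - v_lo) * i / (n - 1)
            wj = w_lo + (w_hi - w_lo) * j / (n - 1)
            s = score(vi, wj, goal_heading, obstacle_dist, cfg)
            if s > best:
                best, best_vw = s, (vi, wj)
    dv, dw = learned_delta((v, w, goal_heading, obstacle_dist))
    return best_vw[0] + dv, best_vw[1] + dw

# Illustrative parameters (assumed, not from the paper).
cfg = {"v_min": 0.0, "v_max": 0.5, "w_max": 1.0,
       "a_lin": 0.5, "a_ang": 1.0, "dt": 0.1,
       "alpha": 1.0, "beta": 1.0, "gamma": 1.0}
v_cmd, w_cmd = select_velocity(0.2, 0.0, 0.3, 1.5, cfg)
```

Because the correction is additive, the RL policy only needs a small action space, which is how the paper's configuration generates many motions from a limited number of action outputs.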