MS Defense: Kyungtae Ko
Speaker: Kyungtae Ko
Visual Object Tracking for UAVs Using Deep Reinforcement Learning
The integration of artificial intelligence and unmanned aerial vehicles (UAVs) has been an active research topic in recent years, especially when UAVs must carry out difficult tasks that cannot easily be accomplished under human control. UAVs typically rely on multiple sensors, such as a top-down camera or a LiDAR sensor, to gather full information about the environment, while a main processing machine computes all necessary trajectories. However, such environment-specific methodologies rarely transfer to real-world problems, largely because the complexity of real environments prevents proper sensor installation throughout the space. This thesis proposes an approach that uses a monocular on-board camera and a reinforcement learning-based model to follow a detected object. This approach is more cost-efficient and environment-adaptive than previous approaches, which tend to rely on multiple sensors and pre-calculated trajectories. Our model extends the previous Dueling Double Deep Q-Network (D3QN) model with a modified action table and reward function that enable 3-dimensional movement, and combines it with MobileNet-based object detection, which adds bounding-box information to the image input of the training network. In addition, convergence-based exploration and exploitation is adaptively applied to the D3QN network in conjunction with the epsilon-greedy algorithm. Tests are conducted in simulation environments of varying complexity and difficulty, using AirSim, Microsoft's quadrotor simulation API. Results show that the model tracks the detected object, a human figure, without colliding with obstacles along the path, and trains faster with the convergence-based exploration algorithm.
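The abstract does not detail how the convergence-based exploration schedule interacts with epsilon-greedy action selection. The sketch below is one plausible interpretation, not the thesis's actual implementation: it assumes epsilon is shrunk toward a minimum once the recent training loss has stabilized. The function names (`convergence_epsilon`, `select_action`) and the loss-spread heuristic are illustrative assumptions.

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy: with probability epsilon pick a random action
    (explore), otherwise pick the action with the highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def convergence_epsilon(loss_history, eps_max=1.0, eps_min=0.05,
                        window=10, threshold=0.01):
    """Hypothetical convergence-based schedule: keep epsilon high early in
    training, then reduce it as the spread of recent losses shrinks,
    signaling that the Q-network is converging."""
    if len(loss_history) < window:
        return eps_max  # not enough history yet: explore fully
    recent = loss_history[-window:]
    spread = max(recent) - min(recent)
    if spread < threshold:
        return eps_min  # loss has converged: mostly exploit
    # otherwise interpolate: larger spread (less converged) -> more exploration
    return min(eps_max, eps_min + spread)
```

Under this kind of schedule, exploration tapers off automatically as training stabilizes, rather than on a fixed step count, which is one way the thesis's convergence-based variant could speed up training relative to a standard linearly decayed epsilon.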
Committee: Ali Jannesari (major professor), Jin Tian, Rafael Radkowski