Open Access
Deep reinforcement learning based conflict detection and resolution in air traffic control
Author(s) -
Wang Zhuang,
Li Hui,
Wang Junfeng,
Shen Feng
Publication year - 2019
Publication title - IET Intelligent Transport Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.579
H-Index - 45
eISSN - 1751-9578
pISSN - 1751-956X
DOI - 10.1049/iet-its.2018.5357
Subject(s) - reinforcement learning, heading (navigation), conflict resolution, air traffic control, trajectory, computer science, control (management), artificial intelligence, limit (mathematics), action (physics), control theory (sociology), engineering, real time computing, mathematics, aerospace engineering, law, political science, mathematical analysis, physics, quantum mechanics, astronomy
The primary objective of this study is to incorporate the deep reinforcement learning (DRL) technique into conflict detection and resolution (CD&R) control strategies, generating an optimised trajectory as a reference for air traffic controllers in order to improve efficiency and reduce the amount of heading-angle change. A DRL environment in which a CD&R agent can be trained is developed. The agent receives the current state of multiple aircraft in a sector and generates an action that changes the heading angle of an aircraft to avoid conflict. A K-Control Actor-Critic algorithm is proposed to limit the number of control times, and a two-dimensional continuous action selection policy is utilised. The simulation results demonstrate the feasibility of applying DRL to CD&R and show a clear advantage in computational efficiency.
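
The abstract describes the overall loop: the agent observes the state of all aircraft in a sector, outputs a two-dimensional continuous action that adjusts one aircraft's heading, and is constrained to a limited number of control commands. The sketch below illustrates that structure with a generic actor-critic network in PyTorch; the state encoding, network sizes, interpretation of the two action dimensions, and the k_max parameter are assumptions for illustration and do not reproduce the paper's K-Control Actor-Critic algorithm.

```python
# Illustrative sketch only: an actor-critic agent mapping a sector state
# (per-aircraft position, heading, speed) to a two-dimensional continuous
# action, with a cap on how many heading-change commands may be issued.
# All names and encodings are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class ActorCritic(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Two-dimensional continuous action head, assumed here to encode
        # (which aircraft to command, heading-angle change), scaled to [-1, 1].
        self.actor_mean = nn.Linear(hidden, 2)
        self.actor_log_std = nn.Parameter(torch.zeros(2))
        self.critic = nn.Linear(hidden, 1)  # state-value estimate

    def forward(self, state: torch.Tensor):
        h = self.shared(state)
        mean = torch.tanh(self.actor_mean(h))
        dist = torch.distributions.Normal(mean, self.actor_log_std.exp())
        return dist, self.critic(h)


def select_action(model: ActorCritic, state: torch.Tensor,
                  controls_used: int, k_max: int = 3):
    """Sample a continuous action; once k_max commands have been issued
    (a stand-in for the K-Control limit), return None, i.e. no further
    heading change this episode."""
    if controls_used >= k_max:
        return None
    dist, value = model(state)
    action = dist.sample()
    return action, dist.log_prob(action).sum(), value


# Example: a sector with 4 aircraft, each described by (x, y, heading, speed).
model = ActorCritic(state_dim=4 * 4)
state = torch.rand(16)
result = select_action(model, state, controls_used=0)
```

Capping the number of sampled commands per episode mirrors the paper's stated aim of limiting control times and reducing the amount of heading-angle change; the specific value of k_max and the no-op fallback are hypothetical choices.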
