
Design of reinforcement learning for perimeter control using network transmission model based macroscopic traffic simulation
Author(s) - Jinwon Yoon, Sunghoon Kim, Young-Ji Byon, Hwasoo Yeo
Publication year - 2020
Publication title - PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0236655
Subject(s) - reinforcement learning, scalability, perimeter, computer science, traffic simulation, transmission (telecommunications), simulation, scale (ratio), control (management), artificial intelligence, engineering, microsimulation, transport engineering, mathematics, telecommunications, physics, geometry, quantum mechanics, database
Perimeter control is an emerging alternative to conventional traffic signal control that regulates traffic flows at the periphery of a road network. Several model-based approaches have previously been proposed to optimize perimeter control based on macroscopic fundamental diagrams (MFDs). However, applying them to a large-scale urban area faces several limitations: model-based approaches may not scale to multiple regions and are ill-suited to handling the various effects caused by changes in MFD shape. Therefore, we propose a model-free, data-driven approach that combines reinforcement learning (RL) with macroscopic traffic simulation based on the recently developed network transmission model. First, we design four perimeter control models with different macroscopic traffic variables and parametrizations. Then, we validate the proposed models by evaluating their performance under test demand scenarios at different levels. The validation results show that the model incorporating travel demand information adapts to a new demand scenario better than the model containing only density-related factors.
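The abstract does not give implementation details, but the core idea, an RL agent selecting perimeter admission rates against a macroscopic traffic model, can be illustrated with a minimal sketch. The example below uses a hypothetical single-region environment with a parabolic MFD and a tabular Q-learning agent; all parameter values, the state discretization, and the reward shaping are illustrative assumptions and do not reproduce the paper's four control models or the network transmission model.

```python
import numpy as np

# Hypothetical single-region environment: the state is the region's vehicle
# accumulation n, and trip completions follow a parabolic MFD
# G(n) = v_f * n * (1 - n / n_jam). The agent picks the fraction of boundary
# demand admitted at each control step. All constants are assumed values.
class PerimeterEnv:
    def __init__(self, n_jam=10000.0, v_f=0.04, demand=120.0, horizon=200):
        self.n_jam = n_jam        # jam accumulation (veh), assumed
        self.v_f = v_f            # MFD scaling factor, assumed
        self.demand = demand      # boundary demand per step (veh), assumed constant
        self.horizon = horizon
        self.reset()

    def reset(self):
        self.n = 2000.0           # initial accumulation, assumed
        self.t = 0
        return self._obs()

    def _outflow(self):
        # Parabolic MFD: trip completion rate as a function of accumulation.
        return max(0.0, self.v_f * self.n * (1.0 - self.n / self.n_jam))

    def _obs(self):
        # Discretize normalized accumulation into 10 bins for tabular learning.
        return min(9, int(10 * self.n / self.n_jam))

    def step(self, action):
        u = action / 4.0          # 5 discrete actions -> admission ratio 0, 0.25, ..., 1
        inflow = u * self.demand
        outflow = self._outflow()
        self.n = max(0.0, self.n + inflow - outflow)
        self.t += 1
        # Reward: trip completions minus a small penalty for vehicles held outside.
        reward = outflow - 0.05 * (self.demand - inflow)
        done = self.t >= self.horizon
        return self._obs(), reward, done

# Tabular Q-learning over the discretized accumulation state.
def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    env = PerimeterEnv()
    q = np.zeros((10, 5))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            a = int(rng.integers(5)) if rng.random() < eps else int(np.argmax(q[s]))
            s2, r, done = env.step(a)
            q[s, a] += alpha * (r + gamma * np.max(q[s2]) - q[s, a])
            s = s2
    return q

if __name__ == "__main__":
    q_table = train()
    print("Greedy admission ratio per accumulation bin:",
          np.argmax(q_table, axis=1) / 4.0)
```

In this simplified setting, admitting all demand pushes the accumulation past the critical point of the MFD, where outflow collapses; the learned policy should instead restrict inflow near that point to sustain throughput, which is the basic mechanism perimeter control exploits.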