Open Access
Using Car to Infrastructure Communication to Accelerate Learning in Route Choice
Author(s) -
Guilherme Dytz dos Santos,
Ana L. C. Bazzan,
Arthur Prochnow Baumgardt
Publication year - 2021
Publication title -
journal of information and data management
Language(s) - English
Resource type - Journals
ISSN - 2178-7107
DOI - 10.5753/jidm.2021.1935
Subject(s) - computer science , reinforcement learning , artificial intelligence , metropolitan area , engineering , systems engineering
The task of choosing a route to move from A to B is not trivial, as road networks in metropolitan areas tend to be overcrowded. It is important to adapt on the fly to the traffic situation. One way to help road users (drivers or autonomous vehicles, for that matter) is by using modern communication technologies. In particular, there are reasons to believe that communication between the infrastructure (network) and the demand (vehicles) will be a reality in the near future. In this paper, we use car-to-infrastructure (C2I) communication to investigate whether road users can accelerate their learning processes regarding route choice when using reinforcement learning (RL). The kernel of our method is a two-way communication scheme, in which road users communicate their rewards to the infrastructure, which, in turn, aggregates this information locally and passes it to other users in order to accelerate their learning tasks. We employ a microscopic simulator to compare this method with two others (one based on RL without communication and a classical iterative method for traffic assignment). Experimental results using a grid and a simplification of a real-world network show that our method outperforms both.
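As a rough illustration of the two-way C2I reward-sharing idea described in the abstract, the sketch below shows RL driver agents that report rewards to an infrastructure object, which aggregates them per link and shares the estimates back. All names, the aggregation rule (a running average per link), and the way the shared estimate is blended into each agent's update are assumptions made for illustration only; they are not taken from the paper.

```python
# Minimal sketch of C2I reward sharing for RL route choice (illustrative only).
import random
from collections import defaultdict


class Infrastructure:
    """Roadside units: aggregate rewards reported per link and share them."""
    def __init__(self):
        self.sum = defaultdict(float)
        self.count = defaultdict(int)

    def report(self, link, reward):
        self.sum[link] += reward
        self.count[link] += 1

    def estimate(self, link):
        # Average reward observed on this link so far (None if no data yet).
        if self.count[link] == 0:
            return None
        return self.sum[link] / self.count[link]


class Driver:
    """Learning agent choosing among alternative routes (lists of links)."""
    def __init__(self, routes, alpha=0.1, eps=0.1, blend=0.5):
        self.routes = routes          # route id -> list of links
        self.q = defaultdict(float)   # route id -> estimated reward
        self.alpha, self.eps, self.blend = alpha, eps, blend

    def choose(self):
        # Epsilon-greedy choice over known routes.
        if random.random() < self.eps:
            return random.choice(list(self.routes))
        return max(self.routes, key=lambda r: self.q[r])

    def update(self, route, reward, infra):
        # Blend own experience with the infrastructure's aggregated estimate
        # for the links of the chosen route (assumed blending rule).
        shared = [infra.estimate(link) for link in self.routes[route]]
        shared = [s for s in shared if s is not None]
        target = reward
        if shared:
            target = self.blend * reward + (1 - self.blend) * sum(shared) / len(shared)
        self.q[route] += self.alpha * (target - self.q[route])
        # Report own per-link reward back to the infrastructure (C2I uplink).
        for link in self.routes[route]:
            infra.report(link, reward / len(self.routes[route]))


if __name__ == "__main__":
    routes = {"r1": ["a", "b"], "r2": ["a", "c"]}
    infra = Infrastructure()
    drivers = [Driver(routes) for _ in range(50)]
    for episode in range(100):
        choices = [d.choose() for d in drivers]
        for d, r in zip(drivers, choices):
            # Toy congestion model: reward worsens with the number of drivers
            # that picked the same route (a stand-in for simulated travel time).
            d.update(r, -choices.count(r), infra)
```

In the paper itself, rewards come from a microscopic traffic simulation rather than the toy congestion model used above; the sketch only shows how locally aggregated, infrastructure-shared rewards can feed back into each agent's value updates.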
