Overall computing offloading strategy based on deep reinforcement learning in vehicle fog computing
Author(s) -
Tan HaiZhong,
Zhu Limin
Publication year - 2020
Publication title -
The Journal of Engineering
Language(s) - English
Resource type - Journals
ISSN - 2051-3305
DOI - 10.1049/joe.2020.0134
Subject(s) - reinforcement learning , markov decision process , computer science , q learning , artificial intelligence , process (computing) , set (abstract data type) , base station , markov process , distributed computing , computer network , statistics , mathematics , programming language , operating system
In order to solve the problem of network congestion caused by the large number of data requests generated by intelligent vehicles in the LTE‐V network, a new type of fog server with fog computing functionality is deployed on both the cellular base stations and the vehicles, and an LTE‐V‐fog network is constructed to handle delay‐sensitive service requests in the Internet of vehicles. The weighted total cost, which combines delay and energy consumption, is taken as the optimisation goal. First, a reinforcement learning algorithm, Q‐learning, based on the Markov decision process is proposed to minimise the weighted total cost. Furthermore, this study specifies how the three elements of reinforcement learning (state, action and reward) are set in the fog computing system. Then, to reduce the problem scale and improve efficiency, the authors set up a pre‐classification process before reinforcement learning that restricts the set of feasible actions. However, as the number of vehicles in the system increases, a Q‐learning method based on recorded Q values may suffer from the curse of dimensionality. Therefore, the authors propose a deep reinforcement learning method, the deep Q‐learning network (DQN), which combines deep learning with Q‐learning. Experimental results demonstrate the advantages of the proposed method.
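
The abstract does not give the exact formulation of the optimisation goal; a minimal sketch of how a weighted total cost combining delay and energy consumption is commonly written (the symbols w_t, w_e, T_i, E_i and the per-task sum are assumptions, not the paper's notation):

```latex
% Assumed form of the weighted total cost for task i:
% T_i = completion delay, E_i = energy consumption,
% w_t, w_e = trade-off weights (often w_t + w_e = 1),
% a_i = offloading decision for task i (local, fog node, or base station).
C_i = w_t \, T_i(a_i) + w_e \, E_i(a_i),
\qquad
\min_{\{a_i\}} \; \sum_{i=1}^{N} C_i
```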
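For the tabular Q-learning stage, a minimal sketch of the state/action/reward loop is given below. The state and action encodings and the reward definition are illustrative assumptions; the abstract only states that reward reflects the weighted cost and that a pre-classification step limits the candidate actions.

```python
# Hypothetical tabular Q-learning sketch for the offloading decision.
# Assumed design (not the paper's exact one):
#   state  = a discretised tuple, e.g. (task size class, idle fog nodes)
#   action = index of execution site (0 = local, 1..N = fog node / base station)
#   reward = negative weighted cost, -(w_t * delay + w_e * energy)
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def choose_action(state, candidate_actions):
    """Epsilon-greedy choice over the pre-classified (feasible) action set."""
    if random.random() < EPSILON:
        return random.choice(candidate_actions)
    return max(candidate_actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_candidates):
    """Standard one-step Q-learning update on the recorded Q table."""
    best_next = max(Q[(next_state, a)] for a in next_candidates)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

The pre-classification process described in the abstract corresponds here to passing only the feasible offloading targets as candidate_actions, so the table never grows over actions that can be ruled out in advance.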
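When the vehicle count grows, the Q table itself becomes the bottleneck; DQN replaces it with a neural network that maps states to action values. A minimal sketch with experience replay and a target network follows; the layer sizes, dimensions, and hyperparameters are assumptions for illustration, since the abstract gives no architecture details.

```python
# Minimal DQN sketch (assumed architecture, not the paper's exact network).
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 5          # assumed sizes for illustration

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy, target = QNet(), QNet()
target.load_state_dict(policy.state_dict())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)         # replay buffer of (s, a, r, s2) tensors
GAMMA = 0.9                           # actions stored as torch.long scalars

def train_step(batch_size=32):
    """One gradient step on a random minibatch from the replay buffer."""
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    s, a, r, s2 = map(torch.stack, zip(*batch))
    q = policy(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) taken
    with torch.no_grad():
        q_next = target(s2).max(dim=1).values            # bootstrapped target
    loss = nn.functional.mse_loss(q, r + GAMMA * q_next)
    opt.zero_grad()
    loss.backward()
    opt.step()
```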
