EER-RL: Energy-Efficient Routing Based on Reinforcement Learning
Author(s) -
Vially Kazadi Mutombo,
SeungYeon Lee,
Jusuk Lee,
Jiman Hong
Publication year - 2021
Publication title -
Mobile Information Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.346
H-Index - 34
eISSN - 1875-905X
pISSN - 1574-017X
DOI - 10.1155/2021/5589145
Subject(s) - computer science, reinforcement learning, routing protocol, computer network, distributed computing, wireless routing protocol, scalability, zone routing protocol, routing (electronic design automation), link state routing protocol, efficient energy use, dynamic source routing, artificial intelligence, database, electrical engineering, engineering
Wireless sensor devices are the backbone of the Internet of Things (IoT), enabling real-world objects and human beings to connect to the Internet and interact with each other to improve citizens' living conditions. However, IoT devices are memory- and power-constrained and cannot run computationally intensive applications, yet routing is what makes a device part of an IoT network despite being a power-intensive task. Therefore, energy efficiency is a crucial factor to consider when designing a routing protocol for IoT wireless networks. In this paper, we propose EER-RL, an energy-efficient routing protocol based on reinforcement learning. Reinforcement learning (RL) allows devices to adapt to network changes, such as mobility and energy level, and to improve routing decisions. The performance of the proposed protocol is compared with that of other existing energy-efficient routing protocols, and the results show that the proposed protocol performs better in terms of energy efficiency, network lifetime, and scalability.
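The abstract describes next-hop selection learned via reinforcement learning with residual energy as a factor. The paper's exact state, action, and reward definitions are not given here, so the following is only a minimal Q-learning sketch under assumed definitions: the reward function, the `Node` fields, and all parameter values (`alpha`, `gamma`, `epsilon`) are illustrative assumptions, not the authors' specification.

```python
import random

class Node:
    """Hypothetical sensor-node record; fields are assumptions."""
    def __init__(self, node_id, energy, hops_to_sink):
        self.node_id = node_id
        self.energy = energy              # residual energy, assumed normalized to 0..1
        self.hops_to_sink = hops_to_sink  # assumed distance metric to the sink

def reward(neighbor):
    # Assumed energy-aware reward: prefer neighbors with more residual
    # energy and fewer hops to the sink (weights are illustrative).
    return neighbor.energy - 0.1 * neighbor.hops_to_sink

def update_q(q_table, node_id, neighbor, alpha=0.5, gamma=0.9):
    # Standard Q-learning update for the (node, next-hop) action pair.
    key = (node_id, neighbor.node_id)
    old_q = q_table.get(key, 0.0)
    # Best Q-value currently known from the chosen neighbor onward.
    next_best = max(
        (q for (src, _), q in q_table.items() if src == neighbor.node_id),
        default=0.0,
    )
    q_table[key] = old_q + alpha * (reward(neighbor) + gamma * next_best - old_q)

def choose_next_hop(q_table, node_id, neighbors, epsilon=0.1):
    # Epsilon-greedy: mostly forward to the best-known neighbor,
    # occasionally explore an alternative route.
    if random.random() < epsilon:
        return random.choice(neighbors)
    return max(neighbors, key=lambda nb: q_table.get((node_id, nb.node_id), 0.0))
```

With `epsilon=0`, a node deterministically forwards to the neighbor with the highest learned Q-value, which under this assumed reward drifts toward high-energy, sink-proximal neighbors as updates accumulate.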