Scalarized Q Multi-Objective Reinforcement Learning for Area Coverage Control and Light Control Implementation
Author(s) -
Akkachai Phuphanin,
Wipawee Usaha
Publication year - 2018
Publication title -
ECTI Transactions on Electrical Engineering, Electronics, and Communications
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.148
H-Index - 7
ISSN - 1685-9545
DOI - 10.37936/ecti-eec.2018162.171333
Subject(s) - testbed, reinforcement learning, computer science, energy consumption, control (management), software deployment, wireless sensor network, control area, wireless, energy (signal processing), efficient energy use, real time computing, artificial intelligence, computer network, engineering, telecommunications, mathematics, statistics, electrical engineering, operating system
Coverage control is crucial for the deployment of wireless sensor networks (WSNs). However, most coverage control schemes are based on single-objective optimization, such as coverage area alone, and do not consider other conflicting objectives such as energy consumption, the number of working nodes, and wasteful overlapping areas. This paper proposes a Multi-Objective Optimization (MOO) coverage control scheme called Scalarized Q Multi-Objective Reinforcement Learning (SQMORL). Its two objectives are to maximize area coverage and to minimize the overlapping area so as to reduce energy consumption. Performance is evaluated both in simulation and on a multi-agent lighting control testbed. Simulation results show that SQMORL achieves more efficient area coverage with fewer working nodes than existing schemes. The hardware testbed results show that the SQMORL algorithm finds the optimal policy with good accuracy over repeated runs.
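The scalarization idea named in the abstract can be illustrated with a minimal sketch: a tabular Q-learning update whose reward is a weighted sum of two objective components (coverage gain and an overlap penalty). The reward components, weights, and state/action names here are hypothetical placeholders; the paper's actual state space, action space, and reward design are not reproduced.

```python
def scalarize(rewards, weights):
    """Linear scalarization: collapse a reward vector into one scalar."""
    return sum(w * r for w, r in zip(weights, rewards))

def sqmorl_update(Q, state, action, rewards, next_state, actions,
                  weights, alpha=0.1, gamma=0.9):
    """One scalarized Q-learning step on a two-objective reward vector.

    `rewards` is assumed to be (coverage_gain, -overlap_penalty);
    this is an illustrative choice, not the paper's exact shaping.
    """
    r = scalarize(rewards, weights)
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (r + gamma * best_next - old)
    return Q[(state, action)]

# Toy usage: a sensor node choosing whether to stay on or switch off.
Q = {}
actions = ["on", "off"]
v = sqmorl_update(Q, "s0", "on", (1.0, -0.2), "s1", actions,
                  weights=(0.7, 0.3))
```

The weight vector trades off the two objectives: shifting weight toward the overlap term makes the learned policy favor switching redundant nodes off.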