Context‐aware pub/sub control method using reinforcement learning
Author(s) - Kim Joohyun, Hong Seohee, Hong Sengphil, Kim Jaehoon
Publication year - 2020
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.5727
Subject(s) - mqtt , computer science , message queue , reinforcement learning , implementation , context (computer science) , distributed computing , protocol (computing) , computer network , internet of things , embedded system , artificial intelligence , software engineering
Summary Reinforcement learning (RL) is utilized in a wide range of real‐world applications. Most such applications use single‐agent RL; however, many practical tasks require multiple agents for cooperative control. Multi‐agent RL demands a more complicated design, and numerous design choices must be weighed for it to be practically useful. We propose two RL implementations for a message‐queuing telemetry transport (MQTT) protocol system. Both implementations improve the communication efficiency of MQTT: (i) a single‐broker‐agent implementation and (ii) a multiple‐publisher‐agents implementation. For each implementation, we focus on differing message priorities in a dynamic environment. The proposed implementations improve communication efficiency by adjusting the loop cycle time of the broker or by learning message importance. The proposed MQTT control scheme improves the battery efficiency of Internet‐of‐Things (IoT) devices with relatively limited battery power.
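The single‐broker‐agent idea from the abstract can be pictured as a small tabular Q‐learning loop in which the agent chooses the broker's loop cycle time and receives a reward trading off battery drain against message latency. The sketch below is purely illustrative: the states, candidate cycle times, and reward dynamics are assumptions for demonstration, not the paper's actual design.

```python
import random

# Illustrative sketch: a Q-learning agent picks the broker's loop
# cycle time based on coarse message-queue load. All names and
# reward dynamics below are hypothetical, not from the paper.

ACTIONS = [10, 50, 100]        # candidate loop cycle times (ms)
STATES = ["low", "high"]       # coarse queue-load states

def simulate_step(state, cycle_ms):
    """Toy environment: reward trades off latency vs. battery use.

    Short cycles drain battery faster; long cycles delay messages
    when the queue load is high. Purely illustrative dynamics.
    """
    energy_cost = 100.0 / cycle_ms                              # faster polling costs energy
    latency_cost = (cycle_ms / 100.0) * (3.0 if state == "high" else 0.5)
    reward = -(energy_cost + latency_cost)
    next_state = random.choice(STATES)                          # load fluctuates randomly
    return reward, next_state

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Standard epsilon-greedy tabular Q-learning over the toy environment."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "low"
    for _ in range(episodes):
        if random.random() < epsilon:                           # explore
            action = random.choice(ACTIONS)
        else:                                                   # exploit
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward, next_state = simulate_step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = train()
# Greedy policy: preferred loop cycle time for each load state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under these toy dynamics, the learned policy tends toward longer cycle times when the load is low (saving battery) and shorter ones when it is high (reducing latency), mirroring the adaptive loop-cycle control the abstract describes.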
