Open Access
Novel deep reinforcement learning‐based delay‐constrained buffer‐aided relay selection in cognitive cooperative networks
Author(s) - Huang Chong, Zhong Jie, Gong Yu, Abdullah Zaid, Chen Gaojie
Publication year - 2020
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
ISSN - 1350-911X
DOI - 10.1049/el.2020.1495
Subject(s) - reinforcement learning, relay, throughput, computer science, selection (genetic algorithm), interference (communication), constraint (computer aided design), selection algorithm, action selection, mathematical optimization, artificial intelligence, computer network, wireless, engineering, power (physics), channel (broadcasting), mathematics, telecommunications, mechanical engineering, physics, quantum mechanics, neuroscience, perception, biology
In this Letter, a deep reinforcement learning‐based approach is proposed for delay‐constrained buffer‐aided relay selection in a cooperative cognitive network. The proposed learning algorithm efficiently solves the complicated relay selection problem and achieves the optimal throughput even when the buffer size and the number of relays are large. In particular, the authors use deep Q‐learning to train an agent that estimates the best action for each state of the system, which is then used to provide an optimal trade‐off between throughput and a given delay constraint. Simulation results demonstrate the advantages of the proposed scheme over conventional selection methods. More specifically, compared with the max‐ratio selection criterion, in which the relay with the highest signal‐to‐interference ratio is selected, the proposed scheme achieves a significant throughput gain and a better throughput‐delay balance.
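To illustrate the idea of learning a relay-selection policy over buffer states, the following is a minimal tabular Q-learning sketch on a toy buffer-aided relay model. All parameters here (number of relays `K`, buffer size `B`, link success probability `p`, the holding-cost delay penalty) are illustrative assumptions, not values from the Letter, which uses a deep Q-network precisely because the joint buffer/channel state space grows too large for a table.

```python
import random
from collections import defaultdict

# Toy assumptions (not from the Letter): K relays, each with a buffer of
# size B; in each slot the agent either receives a packet at a relay or
# transmits one from it, and the chosen link succeeds with probability p.
K, B = 2, 2
p = 0.8
alpha, gamma, eps = 0.1, 0.9, 0.1
delay_penalty = 0.05   # cost per buffered packet per slot, modelling delay

Q = defaultdict(float)  # Q[(state, action)] -> value

def actions(state):
    # (r, 'rx') = receive at relay r (buffer not full);
    # (r, 'tx') = transmit from relay r (buffer not empty).
    acts = []
    for r in range(K):
        if state[r] < B:
            acts.append((r, 'rx'))
        if state[r] > 0:
            acts.append((r, 'tx'))
    return acts

def step(state, act):
    r, mode = act
    s = list(state)
    reward = -delay_penalty * sum(s)   # penalise packets waiting in buffers
    if random.random() < p:            # link succeeds
        if mode == 'rx':
            s[r] += 1
        else:
            s[r] -= 1
            reward += 1.0              # packet delivered: throughput reward
    return tuple(s), reward

random.seed(0)
state = (0,) * K
for _ in range(20000):
    acts = actions(state)
    if random.random() < eps:          # epsilon-greedy exploration
        act = random.choice(acts)
    else:
        act = max(acts, key=lambda a: Q[(state, a)])
    nxt, rew = step(state, act)
    best_next = max(Q[(nxt, a)] for a in actions(nxt))
    Q[(state, act)] += alpha * (rew + gamma * best_next - Q[(state, act)])
    state = nxt
```

The Letter's scheme replaces the Q-table with a neural network that generalises across the many buffer/channel states, but the state-action-reward structure, trading throughput reward against a delay cost, is the same in spirit.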
