On using reinforcement learning for network slice admission control in 5G: Offline vs. online
Author(s) - Sihem Bakri, Bouziane Brik, Adlen Ksentini
Publication year - 2021
Publication title - International Journal of Communication Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.344
H-Index - 49
eISSN - 1099-1131
pISSN - 1074-5351
DOI - 10.1002/dac.4757
Subject(s) - computer science , reinforcement learning , regret , quality of service , control (management) , dilemma , network performance , revenue , matching (statistics) , admission control , q learning , computer network , quality (philosophy) , artificial intelligence , machine learning , philosophy , statistics , mathematics , accounting , epistemology , business
Summary - Achieving fair usage of network resources is of vital importance in slice-ready 5G networks. The dilemma of which network slices to accept or reject is very challenging for the Infrastructure Provider (InfProv). On one hand, InfProv aims to maximize network resource usage by accepting as many network slices as possible; on the other hand, network resources are limited, and the Quality of Service (QoS) requirements of each network slice must be fulfilled. In this paper, we devise three admission control mechanisms based on Reinforcement Learning, namely Q-Learning, Deep Q-Learning, and Regret Matching, which allow deriving admission control decisions (a policy) to be applied by InfProv to admit or reject network slice requests. We evaluate the three algorithms through computer simulation, reporting each mechanism's performance in terms of maximizing InfProv revenue and its ability to learn offline or online.
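To make the admission-control idea concrete, the sketch below shows how a tabular Q-learning agent could learn an accept/reject policy for incoming slice requests. It is a minimal illustration under assumed parameters, not the paper's implementation: the state (free resource units), per-slice demand, revenue, penalties, departure rate, and hyperparameters are all hypothetical values chosen for the example.

```python
# Minimal sketch of a tabular Q-learning admission controller for network
# slice requests. Illustrative assumption only, not the paper's exact model:
# states, rewards, and slice parameters below are hypothetical.
import random
from collections import defaultdict

CAPACITY = 10          # total resource units held by InfProv (assumed)
SLICE_DEMAND = 2       # resource units requested per slice (assumed)
SLICE_REVENUE = 1.0    # revenue earned per admitted slice (assumed)
REJECT_PENALTY = 0.2   # opportunity cost of rejecting when room remains (assumed)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = (0, 1)       # 0 = reject, 1 = accept

q_table = defaultdict(lambda: [0.0, 0.0])  # state (free units) -> Q-values


def step(free_units, action):
    """Apply an admission decision and return (reward, next_free_units)."""
    if action == 1:
        if free_units >= SLICE_DEMAND:
            return SLICE_REVENUE, free_units - SLICE_DEMAND
        return -1.0, free_units        # accepting without capacity violates QoS
    # Rejecting forgoes revenue only if the slice could have been served.
    return (-REJECT_PENALTY if free_units >= SLICE_DEMAND else 0.0), free_units


def choose_action(state):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    qs = q_table[state]
    return max(ACTIONS, key=lambda a: qs[a])


for episode in range(5000):
    free = CAPACITY
    for _ in range(20):                # 20 slice requests per episode (assumed)
        action = choose_action(free)
        reward, nxt = step(free, action)
        # Standard Q-learning update rule.
        q_table[free][action] += ALPHA * (
            reward + GAMMA * max(q_table[nxt]) - q_table[free][action]
        )
        free = nxt
        # Occasionally an active slice departs and releases its resources.
        if random.random() < 0.3:
            free = min(CAPACITY, free + SLICE_DEMAND)

print({s: [round(v, 2) for v in q_table[s]] for s in sorted(q_table)})
```

Trained offline, this table would be computed on simulated request traces before deployment; trained online, the same update would run as real requests arrive, which is the offline-versus-online trade-off the paper evaluates.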
