Open Access
Edge Server Placement and Task Allocation for Maximum Delay Reduction
Author(s) - Koki Shibata, Sumiko Miyata
Publication year - 2025
Publication title - IEEE Open Journal of the Communications Society
Language(s) - English
Resource type - Magazines
eISSN - 2644-125X
DOI - 10.1109/ojcoms.2025.3593641
Subject(s) - communication, networking and broadcast technologies
When edge computing is deployed for delay-sensitive applications such as autonomous driving systems and online gaming, it is important to reduce the maximum delay because real-time performance must be ensured for all users from a Quality-of-Service (QoS) perspective. The primary delays in edge computing are the network delay during data transmission and the waiting time at the edge server. Since the waiting time at an edge server depends on its utilization, an imbalance in utilization among servers increases the maximum delay. Likewise, if a user is located far from its assigned edge server, the network delay for that user also increases. Conventional edge computing methods focus on reducing the average propagation delay of user processing requests (tasks). However, these methods increase the variance of utilization across edge servers and therefore increase the maximum delay. In this paper, we propose a method that determines both edge server placement and task allocation to reduce the maximum delay. Our method uses a genetic algorithm to jointly optimize server utilization and the distance between users and servers. By optimizing these two factors simultaneously, the proposed method reduces the maximum delay compared with conventional methods.
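
The abstract does not include an implementation, so the following is only a minimal sketch of the idea it describes: a genetic algorithm whose individuals encode both server placement (server to candidate site) and task allocation (user to server), scored by the resulting maximum per-user delay. The toy geometry, the service and task rates, the M/M/1-style waiting-time model, and all function and parameter names are assumptions made for illustration, not the authors' method or data.

```python
import random

# --- Assumed toy setup (not from the paper) ---
random.seed(0)
NUM_USERS = 40
NUM_SERVERS = 4
NUM_SITES = 10                      # candidate edge server locations (assumed)
SERVICE_RATE = 20.0                 # tasks/s each server can process (assumed)
TASK_RATE = 1.0                     # tasks/s generated per user (assumed)
PROP_PER_UNIT = 0.002               # network delay in s per unit distance (assumed)

users = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(NUM_USERS)]
sites = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(NUM_SITES)]

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def max_delay(individual):
    """Maximum per-user delay = propagation delay + M/M/1 waiting time (assumed model)."""
    placement, allocation = individual       # placement: server -> site, allocation: user -> server
    load = [0.0] * NUM_SERVERS
    for s in allocation:
        load[s] += TASK_RATE
    worst = 0.0
    for u, s in enumerate(allocation):
        if load[s] >= SERVICE_RATE:          # unstable queue: treat as infinitely bad
            return float("inf")
        wait = load[s] / (SERVICE_RATE * (SERVICE_RATE - load[s]))  # M/M/1 mean waiting time
        net = PROP_PER_UNIT * distance(users[u], sites[placement[s]])
        worst = max(worst, net + wait)
    return worst

def random_individual():
    placement = [random.randrange(NUM_SITES) for _ in range(NUM_SERVERS)]
    allocation = [random.randrange(NUM_SERVERS) for _ in range(NUM_USERS)]
    return placement, allocation

def crossover(a, b):
    cut_p = random.randrange(1, NUM_SERVERS)
    cut_a = random.randrange(1, NUM_USERS)
    return (a[0][:cut_p] + b[0][cut_p:], a[1][:cut_a] + b[1][cut_a:])

def mutate(ind, rate=0.05):
    placement = [random.randrange(NUM_SITES) if random.random() < rate else g for g in ind[0]]
    allocation = [random.randrange(NUM_SERVERS) if random.random() < rate else g for g in ind[1]]
    return placement, allocation

# --- Plain generational GA minimizing the maximum delay ---
population = [random_individual() for _ in range(60)]
for generation in range(200):
    population.sort(key=max_delay)
    elite = population[:10]                   # keep the best candidates
    children = []
    while len(children) < len(population) - len(elite):
        p1, p2 = random.sample(elite, 2)
        children.append(mutate(crossover(p1, p2)))
    population = elite + children

best = min(population, key=max_delay)
print("best maximum delay (s):", round(max_delay(best), 4))
```

Because the fitness function scores each candidate by its worst-case (maximum) delay rather than the average, the search is pushed toward solutions that balance server utilization and keep every user reasonably close to its server, which mirrors the trade-off the abstract describes.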
