
A Comparison of Fair Sharing Algorithms for Regulating Search as a Service API
Author(s) - Sikha Bagui, Evorell Fridge
Publication year - 2021
Publication title - Transactions on Networks and Communications
Language(s) - English
Resource type - Journals
ISSN - 2054-7420
DOI - 10.14738/tnc.86.9633
Subject(s) - computer science, algorithm, token bucket, fair queuing, computer network, throughput, proportionally fair, weighted fair queueing, generalized processor sharing, operating system, security token, quality of service, fair share scheduling, dynamic priority scheduling, rate monotonic scheduling, wireless, round robin scheduling
Providers of a Search as a Service (SaaS) environment must ensure that no single user monopolizes the service or consumes more than a fair share of its resources. Fair sharing algorithms have long been used in computer networking to balance access to a router or switch, and some of these algorithms have also been applied to control queries submitted to search engine APIs. If a search query’s execution cost can be reliably estimated, fair sharing algorithms can be applied to the input of a SaaS API to ensure everyone has equitable access to the search engine.
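For concreteness, the sketch below shows one way a per-client token bucket could gate queries by estimated execution cost. It is an illustrative assumption rather than the paper's implementation; the class name, capacities, refill rate, and client identifier are all hypothetical.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter; tokens stand in for estimated query cost units."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity          # maximum tokens the bucket can hold
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self, estimated_cost: float) -> bool:
        """Admit a query only if enough tokens remain to cover its estimated cost."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= estimated_cost:
            self.tokens -= estimated_cost
            return True
        return False

# One bucket per client keeps any single client within its allotted rate.
buckets = {"client_a": TokenBucket(capacity=100.0, refill_rate=10.0)}
if buckets["client_a"].allow(estimated_cost=25.0):
    pass  # forward the query to the search engine; otherwise defer or reject it
```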
The novelty of this paper lies in presenting the Single-Server Max-Min Fair Deficit Round Robin algorithm, a modified version of the Multi-Server Max-Min Fair Deficit Round Robin algorithm. It is compared with three other fair sharing algorithms, token bucket, Deficit Round Robin (DRR), and Peng and Plale’s [1] Modified Deficit Round Robin (MDRR), under three usage scenarios, balanced usage, unbalanced usage, and idle-client usage, to determine which is most suitable for regulating traffic to a SaaS API. This research demonstrated that the Single-Server Max-Min Fair DRR algorithm provided the highest throughput of traffic to the search engine while also maintaining a fair balance of resources among clients: unused throughput was re-allocated to clients with saturated queues so that a max-min allocation was achieved.
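The abstract does not include pseudocode, so the sketch below is only an assumed illustration of the re-allocation idea it describes: a DRR-style scheduling round in which credit left unused by idle or lightly loaded clients is redistributed to clients whose queues are still backlogged, approaching a max-min allocation. The function and variable names are invented for the example and are not taken from the paper.

```python
from collections import deque

def drr_max_min_round(queues, quantum):
    """One scheduling round: plain DRR, then redistribute unused credit.

    `queues` maps client id -> deque of (query, estimated_cost) pairs.
    Returns the queries dispatched during this round.
    """
    deficit = {c: quantum for c in queues}   # each client starts with one quantum
    dispatched = []

    def drain(client):
        # Dispatch head-of-line queries while the client's credit covers their cost.
        q = queues[client]
        while q and q[0][1] <= deficit[client]:
            query, cost = q.popleft()
            deficit[client] -= cost
            dispatched.append(query)

    for client in queues:                     # ordinary DRR pass
        drain(client)

    # Max-min step: pool the credit that idle or light clients could not use
    # and share it among clients whose queues are still backlogged.
    spare = sum(deficit[c] for c in queues if not queues[c])
    backlogged = [c for c in queues if queues[c]]
    while spare > 0 and backlogged:
        share = spare / len(backlogged)
        spare = 0
        for client in backlogged:
            deficit[client] += share
            drain(client)
            if not queues[client]:
                spare += deficit[client]      # return leftover credit to the pool
        backlogged = [c for c in backlogged if queues[c]]

    return dispatched

# Example: the idle client's unused quantum lets the busy client finish both queries.
queues = {
    "client_a": deque([("q1", 40.0), ("q2", 40.0)]),
    "client_b": deque(),                      # idle client
}
served = drr_max_min_round(queues, quantum=50.0)   # -> ["q1", "q2"]
```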