Open Access
Dynamic Fault Tolerance Aware Scheduling for Healthcare System on Fog Computing
Author(s) - Hadeel T. Rajab, Manal F. Younis
Publication year - 2021
Publication title - Iraqi Journal of Science
Language(s) - English
Resource type - Journals
eISSN - 2312-1637
pISSN - 0067-2904
DOI - 10.24996/ijs.2021.62.1.29
Subject(s) - computer science, cloud computing, fog computing, fault tolerance, backup, load balancing, scheduling, latency, workload, computer network, distributed computing
The Internet of Things (IoT) contributes to improving the quality of life, as it supports many applications, especially healthcare systems. Data generated by IoT devices is sent to Cloud Computing (CC) for processing and storage, despite the latency caused by the distance. With the rapid growth in IoT devices, the volume of data sent to CC has been increasing; as a result, congestion on the cloud network has grown, adding a second problem to the latency. Fog Computing (FC) was used to address these problems because of its proximity to IoT devices, while filtering the data sent onward to CC. FC is a middle layer located between the IoT devices and the CC layer. To handle the massive data generated by IoT devices on FC, the Dynamic Weighted Round Robin (DWRR) algorithm was used: a load balancing (LB) algorithm that schedules and distributes data among fog servers by reading the CPU and memory values of these servers in order to improve system performance. The results proved that the DWRR algorithm provides high throughput, reaching 3290 req/sec at 919 users. Much research is concerned with distributing workload using LB techniques without paying much attention to Fault Tolerance (FT), which ensures that the system continues to operate even when a fault occurs. Therefore, we proposed a replication FT technique, called primary-backup replication, based on a dynamic checkpoint interval on FC. A checkpoint was used to replicate new data from a primary server to a backup server dynamically by monitoring the CPU value of the primary fog server, so that a checkpoint occurs only when the CPU value is larger than 0.2, reducing overhead. The results showed that the execution time of the data filtering process on the FC with a dynamic checkpoint is less than the time spent with a static checkpoint that is independent of the CPU status.
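The abstract describes DWRR as reading each fog server's CPU and memory values and weighting the distribution of requests accordingly. The sketch below illustrates that idea in Python using the smooth weighted round-robin selection rule; the `FogServer` fields, the `dwrr_pick` name, and the weight formula are illustrative assumptions, not the paper's actual implementation.

```python
class FogServer:
    """One fog node with live utilization readings (illustrative fields)."""
    def __init__(self, name, cpu=0.0, mem=0.0):
        self.name = name
        self.cpu = cpu        # CPU utilization in [0.0, 1.0]
        self.mem = mem        # memory utilization in [0.0, 1.0]
        self.current = 0.0    # running counter for smooth selection

    @property
    def weight(self):
        # Hypothetical weighting: lightly loaded servers weigh more.
        return max(0.05, 1.0 - 0.5 * (self.cpu + self.mem))

def dwrr_pick(servers):
    """One decision of a dynamic weighted round-robin scheduler.

    Weights are recomputed from the servers' current CPU/memory readings
    on every call, so the distribution adapts as load changes. Selection
    uses the smooth weighted round-robin rule: add each weight to a
    running counter, pick the largest counter, subtract the total.
    """
    total = sum(s.weight for s in servers)
    for s in servers:
        s.current += s.weight
    best = max(servers, key=lambda s: s.current)
    best.current -= total
    return best
```

Over many picks, each server receives requests in proportion to its current weight, so a heavily loaded server (high CPU/memory) is selected less often until its readings drop.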

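The proposed FT technique replicates new data from the primary fog server to a backup, but only checkpoints when the primary's CPU reading exceeds 0.2 (the threshold stated in the abstract). A minimal Python sketch of that trigger logic is shown below; the class name, state layout, and `failover` helper are hypothetical, and only the 0.2 threshold comes from the source.

```python
import copy

CPU_THRESHOLD = 0.2  # checkpoint trigger value stated in the abstract

class PrimaryBackup:
    """Sketch of primary-backup replication with a dynamic checkpoint:
    state is copied to the backup only when the monitored CPU value of
    the primary exceeds CPU_THRESHOLD, instead of at a fixed interval.
    """
    def __init__(self):
        self.primary = {}      # key -> latest filtered reading
        self.backup = {}       # last checkpointed copy of primary
        self.checkpoints = 0   # how many checkpoints were taken

    def write(self, key, value, cpu_load):
        """Apply a write on the primary; checkpoint if CPU is high."""
        self.primary[key] = value
        if cpu_load > CPU_THRESHOLD:       # dynamic trigger
            self.backup = copy.deepcopy(self.primary)
            self.checkpoints += 1

    def failover(self):
        """On primary failure, resume from the last checkpointed state."""
        return self.backup
```

Writes arriving while the CPU reading stays at or below the threshold are not checkpointed immediately; they are carried over to the backup by the next checkpoint, which is how the dynamic scheme trades a small recovery window for lower replication overhead compared with a static interval.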