Exploring a system architecture of content‐based publish/subscribe system for efficient on‐the‐fly data dissemination
Author(s) - Yoon Daegun, Park Gyudong, Oh Sangyoon
Publication year - 2020
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.6090
Subject(s) - computer science , publication , server , overhead (engineering) , replication (statistics) , dissemination , architecture , cloud computing , distributed computing , partition (number theory) , computer network , database , operating system , telecommunications , art , statistics , mathematics , combinatorics , advertising , business , visual arts
Summary - In a cloud‐scale publish/subscribe messaging system, partitioning subscription data among several servers is difficult. Without a sophisticated partitioning scheme and a supporting system architecture, the messaging system either wastes resources or fails to deliver messages on time. In this study, we propose DRDA, a dynamic replication degree adjustment technique for efficient message delivery. DRDA keeps the number of subscription replications at a reasonable level by monitoring the statuses of the servers and adjusting the replication degree according to the current number of subscription replications and the frequency of event dissemination. To verify the effectiveness of the proposed scheme and system architecture, we build a prototype of a content‐based publish/subscribe system that dynamically adjusts the number of replications among brokers. We then compare the load balance, resource overhead, and performance of a publish/subscribe system with DRDA against those of a system without it. The experimental results show that DRDA outperforms the other approaches under various parameter configurations. The prototype code is publicly available in a GitHub repository.
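
The abstract does not specify the adjustment rule itself, so the following is only a minimal illustrative sketch of the general idea: a broker periodically maps its observed event‐dissemination rate to a target replication degree and computes the replica-count change to apply. All names, thresholds, and the interpolation rule here are hypothetical and are not taken from the DRDA paper or its prototype.

```python
# Illustrative sketch only: the classes, thresholds, and the mapping rule below
# are hypothetical and do NOT come from the DRDA paper or its prototype code.
from dataclasses import dataclass


@dataclass
class BrokerStatus:
    broker_id: str
    subscription_replicas: int   # replicas of subscription data held for this subscription
    events_per_second: float     # observed event-dissemination frequency


def target_replication_degree(status: BrokerStatus,
                              min_degree: int = 1,
                              max_degree: int = 5,
                              low_rate: float = 100.0,
                              high_rate: float = 1000.0) -> int:
    """Map the observed dissemination rate to a target replication degree.

    Frequently matched subscriptions get more replicas so events can be
    matched on more brokers; rarely matched subscriptions keep fewer
    replicas to save memory and replica-update traffic.
    """
    if status.events_per_second >= high_rate:
        return max_degree
    if status.events_per_second <= low_rate:
        return min_degree
    # Linear interpolation between the two rate thresholds.
    frac = (status.events_per_second - low_rate) / (high_rate - low_rate)
    return min_degree + round(frac * (max_degree - min_degree))


def adjust(statuses: list[BrokerStatus]) -> dict[str, int]:
    """Return the per-broker change in replica count to apply this round."""
    return {s.broker_id: target_replication_degree(s) - s.subscription_replicas
            for s in statuses}


if __name__ == "__main__":
    monitored = [
        BrokerStatus("broker-1", subscription_replicas=1, events_per_second=1500.0),
        BrokerStatus("broker-2", subscription_replicas=4, events_per_second=50.0),
    ]
    print(adjust(monitored))  # {'broker-1': 4, 'broker-2': -3}
```

In this sketch the monitoring loop would collect a BrokerStatus per subscription partition each period and apply the returned deltas; the actual DRDA policy, metrics, and thresholds are described in the paper and its published prototype.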
