Open Access
Analytical Design of the DIS Architecture: The Hybrid Model
Author(s) -
Mahesh S. Nayak,
M. Hanumanthappa,
Divya Prakash,
H. V. Dattasmita
Publication year - 2020
Publication title -
International Journal of Innovative Technology and Exploring Engineering
Language(s) - English
Resource type - Journals
ISSN - 2278-3075
DOI - 10.35940/ijitee.d1454.039520
Subject(s) - computer science, scalability, cloud computing, distributed computing, architecture, data-intensive computing, fault tolerance, throughput, big data, the Internet, database, data science, data mining, operating system, grid computing
In recent decades, owing to the emergence of Internet appliances, there has been a dramatic increase in data usage, which has had a high impact on storage and mining technologies. It is also observed that scientific/research fields produce heterogeneous data, viz. structured, semi-structured, and unstructured data. Correspondingly, the processing demands for such data have grown due to stringent requirements. Sustainable technologies exist to address these challenges and to expedite scalable services via effective physical infrastructure (in terms of mining), smart networking solutions, and useful software approaches. Indeed, cloud computing aims at data-intensive computing by facilitating scalable processing of huge data. Still, the problem remains unaddressed for very large data sets, and the data continues to grow exponentially. At this juncture, the recommended approach is the well-known MapReduce model for processing huge and voluminous data. The current model, as conceived, offers limited fault tolerance and reliability, which may be surmounted by the Hadoop architecture. By contrast, Hadoop is fault tolerant and offers high throughput, making it recommendable for applications with huge volumes of data sets and file systems requiring streaming access. The paper examines what efficient architectural/design changes are necessary to combine the benefits of the Everest model, the HBase algorithm, and the existing MR algorithms.
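The MapReduce model the abstract refers to can be illustrated with a minimal, single-machine word-count sketch. This is an assumption-laden illustration in pure Python, not the Hadoop implementation or the paper's hybrid model; the function names and input data are invented for the example:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit an intermediate (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group intermediate values by key, as the framework
    # would do between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values (here, sum the counts).
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data is big", "data mining"]
result = reduce_phase(shuffle_phase(map_phase(docs)))
print(result)  # {'big': 2, 'data': 2, 'is': 1, 'mining': 1}
```

In a real Hadoop deployment the map and reduce functions are distributed across nodes, with HDFS providing the fault-tolerant, streaming-access storage the abstract describes.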
