Comparative Analysis for Content Defined Chunking Algorithms in Data Deduplication
Author(s) - D. Viji, Dr. S. Revathy
Publication year - 2021
Publication title - Webology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.259
H-Index - 18
ISSN - 1735-188X
DOI - 10.14704/web/v18si02/web18070
Subject(s) - data deduplication, chunking (psychology), computer science, hash function, block (permutation group theory), block size, algorithm, data mining, database, artificial intelligence, mathematics, operating system, geometry, computer security, key (lock)
Data deduplication eliminates redundant data and reduces storage consumption. Nowadays, ever more data is generated and stored repeatedly in the cloud, which consumes a large volume of storage. Data deduplication reduces data volume, disk space, and network bandwidth, and thereby lowers the cost and energy consumption of running storage systems. In the data deduplication method, data is broken into small chunks or blocks. A hash ID is calculated for every block and compared with the hashes of existing blocks to detect duplicates. Blocks may be of fixed or variable size; compared with fixed-size blocks, variable-size chunking gives better results. The chunking process is therefore the initial task of deduplication and determines how optimal the result is. In this paper, we discuss various content defined chunking algorithms and their performance based on chunking properties such as chunking speed, processing time, and throughput.
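The chunk-then-hash pipeline summarized in the abstract can be illustrated with a short sketch. The snippet below is not code from the paper: it uses a simple polynomial rolling hash as a stand-in for the Rabin-style fingerprints commonly used in content defined chunking, and every parameter (window size, boundary mask, minimum and maximum chunk size) is an assumed value chosen only for the example.

```python
import hashlib

# Illustrative parameters (assumed, not taken from the paper).
WINDOW_SIZE = 48          # bytes covered by the rolling hash
MASK = 0x1FFF             # 13 low bits zero -> roughly 8 KiB average chunk
MIN_CHUNK = 2 * 1024      # avoid very small chunks
MAX_CHUNK = 64 * 1024     # force a cut if no boundary is found
PRIME = 31
MOD = 1 << 32


def content_defined_chunks(data: bytes):
    """Split data at content-defined boundaries using a simple
    polynomial rolling hash (a stand-in for Rabin fingerprinting)."""
    chunks = []
    start = 0                                   # start of the current chunk
    h = 0                                       # rolling hash of the window
    pow_msb = pow(PRIME, WINDOW_SIZE - 1, MOD)  # weight of the byte leaving the window
    for i, b in enumerate(data):
        if i - start >= WINDOW_SIZE:
            # Remove the byte that slides out of the window.
            h = (h - data[i - WINDOW_SIZE] * pow_msb) % MOD
        h = (h * PRIME + b) % MOD
        length = i - start + 1
        at_boundary = length >= MIN_CHUNK and (h & MASK) == 0
        if at_boundary or length >= MAX_CHUNK:
            chunks.append(data[start:i + 1])
            start = i + 1
            h = 0
    if start < len(data):
        chunks.append(data[start:])             # trailing partial chunk
    return chunks


def deduplicate(data: bytes, store: dict) -> int:
    """Index chunks by their SHA-256 digest; only previously unseen
    chunks consume new storage. Returns the number of new bytes stored."""
    new_bytes = 0
    for chunk in content_defined_chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk
            new_bytes += len(chunk)
    return new_bytes
```

Because boundaries depend only on local content rather than fixed offsets, inserting a few bytes near the start of a file shifts only the affected chunk; most chunk hashes, and therefore most stored blocks, remain unchanged, which is the main advantage of variable-size chunking over fixed-size blocks.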
