BST Algorithm for Duplicate Elimination in Data Warehouse
Author(s) -
Payal Pahwa,
Rashmi Chhabra
Publication year - 2013
Publication title -
International Journal of Management and Information Technology
Language(s) - English
Resource type - Journals
ISSN - 2278-5612
DOI - 10.24297/ijmit.v4i1.4636
Subject(s) - data cleansing , data warehouse , computer science , data quality , database , quality (philosophy) , data mining , reliability (semiconductor) , data reliability , focus (optics) , data science , algorithm , operations management , engineering , metric (unit) , philosophy , power (physics) , physics , optics , epistemology , quantum mechanics
Data warehousing is an emerging technology that has proved very important to organizations. Today every business organization needs a large amount of accurate information to make proper decisions, so the underlying data must be of good quality. Improving data quality requires data cleansing, which is fundamental to warehouse data reliability and to data warehousing success. There are various methods for data cleansing. This paper addresses issues related to data cleaning, focusing on the detection of duplicate records, and proposes an efficient algorithm for data cleaning. A review of data cleansing methods and a comparison between them are also presented.
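The paper's exact algorithm is not reproduced on this page, but the general idea of BST-based duplicate elimination can be sketched as follows: each record is reduced to a normalized key and inserted into a binary search tree; a key that already exists in the tree marks a duplicate. The `normalize` rules and the record format here are illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch of duplicate detection with a binary search tree (BST).
# The normalization rules below (lowercase, strip whitespace) are assumptions;
# the paper's actual matching criteria may differ.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def normalize(record):
    """Assumed normalization: trim and lowercase every field of a record."""
    return tuple(str(field).strip().lower() for field in record)

def insert(root, key):
    """Insert key into the BST. Returns (root, inserted); inserted is False
    when the key was already present, i.e. the record is a duplicate."""
    if root is None:
        return Node(key), True
    node = root
    while True:
        if key == node.key:
            return root, False          # key already in tree -> duplicate
        elif key < node.key:
            if node.left is None:
                node.left = Node(key)
                return root, True
            node = node.left
        else:
            if node.right is None:
                node.right = Node(key)
                return root, True
            node = node.right

def deduplicate(records):
    """Single pass over the records, keeping only the first occurrence
    of each normalized key."""
    root, unique = None, []
    for rec in records:
        root, fresh = insert(root, normalize(rec))
        if fresh:
            unique.append(rec)
    return unique

rows = [("Payal", "Pahwa"), ("payal ", "PAHWA"), ("Rashmi", "Chhabra")]
print(deduplicate(rows))  # -> [('Payal', 'Pahwa'), ('Rashmi', 'Chhabra')]
```

Each lookup and insertion costs O(log n) on average, so the whole pass is O(n log n) on average, which is the usual motivation for a tree-based structure over pairwise record comparison.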
