Open Access
Data quality improvement in data warehouse: a framework
Author(s) - Rajiv Arora, Payal Pahwa, Daya Gupta
Publication year - 2017
Publication title - International Journal of Data Analytics Techniques and Strategies (int. j. data anal. tech. strateg.)
Language(s) - English
DOI - 10.1504/ijdats.2017.083062
Data cleansing is an essential process that, when applied to datasets, eliminates inconsistency and duplication from the data. It also handles null or missing values in an organised and proper manner, thereby enhancing data quality. In this paper, we use the Kullback-Leibler divergence (KL-divergence) technique to eliminate duplication in the datasets. Inconsistency and null or missing values in the datasets are also handled. This is done by maintaining data marts built on the basis of test data. Accordingly, a framework for efficient data cleansing is suggested in order to make the data suitable for decision-making purposes. A brief comparison of existing data cleansing approaches has also been discussed. This comparison is based on various parameters such as prediction error, bias, mean square error, variance, mean absolute error, root mean square error, and Theil statistics. These parameters are used by the distance sum-based approach (DSA) to accomplish the task. The results obtained demonstrate the feasibility and validity of our method.
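The abstract does not spell out how KL-divergence is applied to duplicate detection, but one common formulation is to compare the token-frequency distributions of two candidate records and flag them as probable duplicates when the divergence is small. The sketch below illustrates that idea only; the smoothing constant, the threshold, and the tokenisation are illustrative assumptions, not values from the paper.

```python
import math
from collections import Counter

def kl_divergence(p, q, vocab, eps=0.1):
    """Kullback-Leibler divergence D(P || Q) over a shared vocabulary.
    Additive (Laplace-style) smoothing with eps keeps q strictly positive,
    so the log term is always defined. Note KL is asymmetric in p and q."""
    total_p = sum(p.values()) + eps * len(vocab)
    total_q = sum(q.values()) + eps * len(vocab)
    d = 0.0
    for w in vocab:
        pw = (p.get(w, 0) + eps) / total_p
        qw = (q.get(w, 0) + eps) / total_q
        d += pw * math.log(pw / qw)
    return d

def likely_duplicates(rec_a, rec_b, threshold=1.0):
    """Flag two records as probable duplicates when the KL divergence
    between their token-frequency distributions falls below a threshold.
    The threshold value here is an illustrative assumption."""
    pa = Counter(rec_a.lower().split())
    pb = Counter(rec_b.lower().split())
    vocab = set(pa) | set(pb)
    return kl_divergence(pa, pb, vocab) < threshold

# Records differing in one token diverge far less than unrelated records.
print(likely_duplicates("John Smith 42 Oxford St",
                        "John Smith 42 Oxford Street"))
print(likely_duplicates("Mary Jones Mumbai", "John Smith Delhi"))
```

Because KL-divergence is asymmetric, a symmetric variant (e.g. averaging D(P||Q) and D(Q||P)) is sometimes preferred when neither record is a natural reference distribution.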