Data normalization methods to improve the quality of classification in the breast cancer diagnostic system
Author(s) -
M. V. Polyakova,
Victor Krylov
Publication year - 2022
Publication title -
applied aspects of information technologies
Language(s) - English
Resource type - Journals
eISSN - 2663-7723
pISSN - 2617-4316
DOI - 10.15276/aait.05.2022.5
Subject(s) - normalization , pattern recognition , artificial intelligence , principal component analysis , computer science , outlier , weighting , naive Bayes classifier , classifier , support vector machine , data mining , medicine , radiology
In oncology diagnostic systems, images of cells obtained from breast biopsy are often characterized by statistical and geometric features. To classify the values of these features, presented, in particular, in the Wisconsin Diagnostic Breast Cancer dataset, a naive Bayesian classifier, the k-nearest neighbors method, neural networks, and ensembles of decision trees have been used in the literature. It is noticed that the classification results obtained with these methods differ mainly within the limits of statistical error. This is related to the selection of the classifier, which is determined by the shape of the clusters and the presence of outliers in the data. Both are significantly affected by data preparation, in particular by the method used to normalize the feature values. Normalization is defined as the transformation of feature values to a certain interval. Differences in the intervals of feature values can lead to implicit weighting of the features during classification. After feature extraction and normalization, a set of data belonging to the same class may be divided into several clusters as a result of feature space distortion. To separate such data into one class, the distance between the classes must be greater than the internal scatter of the data in each of the clusters. Therefore, in addition to normalization, data preparation can include decorrelation and orthogonalization of the features using, for example, principal component analysis, which selects feature projections with better class separation. To improve the quality of classification, the article therefore applies data preparation methods, namely data normalization and analysis of principal components. It is shown that it is advisable to use standard, robust, or min-max normalization of the cell feature vectors if the k-nearest neighbors classifier or a naive Bayesian classifier is selected. If the classification of cell feature vectors in breast biopsy images is carried out using an ensemble of decision trees, normalization does not improve the quality of the classification. It is advisable to reduce the dimension of the feature space by analyzing the principal components only for the k-nearest neighbors method. When using a naive Bayesian classifier or ensembles of decision trees, the transition to principal components reduces the quality of the classification. The results obtained in the article allow choosing the data preparation methods for a specific problem.
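The comparison described in the abstract can be reproduced in outline with scikit-learn, which ships a copy of the Wisconsin Diagnostic Breast Cancer data. The following is a minimal sketch, not the authors' exact experiment: classifier hyperparameters, the PCA variance threshold, and the cross-validation scheme are assumptions chosen for illustration, since the abstract does not specify them.

```python
# Sketch: compare no scaling, min-max, standard, and robust normalization,
# with and without a PCA step, for the three classifier families mentioned
# in the abstract (k-NN, naive Bayes, decision-tree ensemble) on the
# scikit-learn copy of the Wisconsin Diagnostic Breast Cancer data.
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

scalers = {"none": None, "min-max": MinMaxScaler(),
           "standard": StandardScaler(), "robust": RobustScaler()}
classifiers = {"k-NN": KNeighborsClassifier(),
               "naive Bayes": GaussianNB(),
               "tree ensemble": RandomForestClassifier(random_state=0)}

for clf_name, clf in classifiers.items():
    for sc_name, scaler in scalers.items():
        for use_pca in (False, True):
            # Keep only the steps that are actually enabled for this run;
            # PCA retains enough components to explain 95 % of the variance
            # (an assumed threshold, not taken from the paper).
            steps = [s for s in (scaler,
                                 PCA(n_components=0.95) if use_pca else None,
                                 clf) if s is not None]
            pipe = make_pipeline(*steps)
            acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
            print(f"{clf_name:13s} {sc_name:8s} PCA={use_pca!s:5s} "
                  f"accuracy={acc:.3f}")
```

Running the sketch prints a mean cross-validated accuracy for every combination, making it easy to see the pattern the abstract reports: scaling-sensitive classifiers such as k-NN and naive Bayes react to the choice of normalization, while a tree ensemble, which splits on individual feature thresholds, is largely indifferent to it.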
