
Workload Optimization by Horizontal Aggregation in SQL for Data Mining Analysis
Author(s) - Prasanna M. Rathod, Karuna G. Bagde
Publication year - 2021
Publication title - International Journal of Scientific Research in Computer Science, Engineering and Information Technology
Language(s) - English
Resource type - Journals
ISSN - 2456-3307
DOI - 10.32628/cseit217263
Subject(s) - computer science , sql , scalability , set (abstract data type) , data mining , workload , speedup , user defined function , dimension (graph theory) , database , theoretical computer science , parallel computing , programming language , query by example , information retrieval , web search query , mathematics , search engine , operating system
Preparing a data set for analysis is generally the most time-consuming task in a data mining project, requiring many complex SQL queries, joining tables, and aggregating columns. Existing SQL aggregations have limitations for preparing data sets because they return one column per aggregated group. In general, significant manual effort is required to build data sets where a horizontal layout is required. We propose simple yet powerful methods to generate SQL code that returns aggregated columns in a horizontal tabular layout, producing a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations build data sets with a horizontal denormalized layout (e.g., point-dimension, observation-variable, instance-feature), which is the standard layout required by most data mining algorithms. We propose three fundamental methods to evaluate horizontal aggregations:

- CASE: exploiting the programming CASE construct;
- SPJ: based on standard relational algebra operators (SPJ queries);
- PIVOT: using the PIVOT operator, which is offered by some DBMSs.

Experiments with large tables compare the proposed query evaluation methods. Our CASE method has similar speed to the PIVOT operator and is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does not. For query optimization, the distance computation and nearest-cluster assignment in k-means are expressed in SQL.

Workload balancing is the assignment of work to processors in a way that maximizes application performance. The process of load balancing can be generalized into four basic steps:

1. Monitoring processor load and state;
2. Exchanging workload and state information between processors;
3. Decision making;
4. Data migration.

The decision phase is triggered when a load imbalance is detected, in order to calculate an optimal data redistribution. In the fourth and last phase, data migrates from overloaded processors to under-loaded ones.
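As an illustration of the CASE-based evaluation, the following minimal sketch (not taken from the paper) assumes a hypothetical vertical table F(storeId, deptId, salesAmt) and spreads the per-department sums of salesAmt into one column per department:

    -- Hypothetical vertical table F(storeId, deptId, salesAmt): one row per sale.
    -- CASE-based horizontal aggregation: one output row per storeId,
    -- one aggregated column per deptId value.
    SELECT storeId,
           SUM(CASE WHEN deptId = 'toys'    THEN salesAmt END) AS toys,
           SUM(CASE WHEN deptId = 'grocery' THEN salesAmt END) AS grocery,
           SUM(CASE WHEN deptId = 'apparel' THEN salesAmt END) AS apparel
    FROM F
    GROUP BY storeId;

On DBMSs that offer the PIVOT operator, the same horizontal layout can be requested directly; the sketch below assumes SQL Server syntax:

    -- Same horizontal layout using the PIVOT operator (SQL Server syntax).
    SELECT storeId, toys, grocery, apparel
    FROM (SELECT storeId, deptId, salesAmt FROM F) AS src
    PIVOT (SUM(salesAmt) FOR deptId IN ([toys], [grocery], [apparel])) AS p;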
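The SPJ evaluation can be sketched, again assuming the hypothetical table F(storeId, deptId, salesAmt), as one aggregation subquery per deptId value joined back on the grouping key; this is only an illustration of the idea, not the paper's generated code:

    -- SPJ evaluation: one select-project-join aggregation per deptId value,
    -- outer-joined on the grouping key so groups missing a department keep NULLs.
    SELECT b.storeId, t.amt AS toys, g.amt AS grocery
    FROM (SELECT DISTINCT storeId FROM F) AS b
    LEFT JOIN (SELECT storeId, SUM(salesAmt) AS amt
               FROM F WHERE deptId = 'toys' GROUP BY storeId) AS t
           ON b.storeId = t.storeId
    LEFT JOIN (SELECT storeId, SUM(salesAmt) AS amt
               FROM F WHERE deptId = 'grocery' GROUP BY storeId) AS g
           ON b.storeId = g.storeId;

The extra joins, one per output column, are one reason the SPJ method scales worse than CASE and PIVOT in the reported experiments.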
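For the k-means step mentioned above, a minimal sketch of SQL-based distance computation and nearest-cluster assignment is shown below; it assumes hypothetical tables P(pid, x1, x2) for points in horizontal layout and C(cid, x1, x2) for the k centroids, on a DBMS with window functions:

    -- Squared Euclidean distance from every point to every centroid,
    -- then keep the nearest centroid per point.
    SELECT pid, cid AS nearestCluster
    FROM (
      SELECT p.pid, c.cid,
             ROW_NUMBER() OVER (
               PARTITION BY p.pid
               ORDER BY (p.x1 - c.x1) * (p.x1 - c.x1)
                      + (p.x2 - c.x2) * (p.x2 - c.x2)
             ) AS rnk
      FROM P AS p
      CROSS JOIN C AS c
    ) AS d
    WHERE rnk = 1;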