Open Access
PROVISIONING LARGE-SCALED DATA WITH PARAMETERIZED QUERY PLANS: A CASE STUDY
Author(s) -
Zdzisław Pólkowski,
Sambit Kumar Mishra
Publication year - 2021
Publication title -
Azerbaijan Journal of High Performance Computing
Language(s) - English
Resource type - Journals
eISSN - 2617-4383
pISSN - 2616-6127
DOI - 10.32010/26166127.2021.4.1.3.14
Subject(s) - computer science, provisioning, scalability, relevance (law), data quality, data mining, scale (ratio), server, data science, database, computer network, service (business), physics, economy, quantum mechanics, political science, law, economics
In a general scenario, the approaches linked to the innovation of large-scale data seem ordinary; the informational measures of such aspects can differ across applications, since these are associated with different attributes that may support high data volumes and high data quality. Accordingly, the challenges can be identified with an assurance of high-level protection and data transformation with enhanced operational quality. Based on large-scale data applications running on different virtual servers, it is clear that the information can be measured by enlisting sources linked to networked sensors provisioned by the analysts. It is therefore essential to track the relevance of, and the issues raised by, such enormous volumes of information. When aiming at knowledge extraction, applying large-scale data may involve analytical aspects to predict future events; accordingly, a soft-computing approach can be implemented in such cases to carry out the analysis. During the analysis of large-scale data, it is essential to abide by the rules associated with security measures, because preserving sensitive information is the biggest challenge when dealing with large-scale data. As high risk is observed in such data analysis, security can be strengthened by provisioning authentication and authorization. Indeed, the major obstacles to these analysis techniques arise from security and scalability constraints. The integral methods applied to the data have a strong impact on scalability: a faster data-scaling factor requires adding processing elements to the system. It is therefore necessary to address the challenges linked to processors, correlating process visualization with scalability.
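The abstract does not reproduce the paper's implementation, but the parameterized query plans named in the title are a standard database technique: the SQL text stays constant and values are bound separately, so the engine can reuse one compiled plan and string-concatenation injection is avoided. A minimal sketch using Python's built-in sqlite3 module (the table, sensor IDs, and values are illustrative assumptions, not data from the paper):

```python
import sqlite3

# In-memory database standing in for a provisioned virtual server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id TEXT, value REAL)")

# Bulk-load rows through one prepared statement; the plan is
# compiled once and reused for every bound row.
rows = [("s1", 10.5), ("s1", 11.0), ("s2", 99.9)]
conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)

def readings_for(sensor_id):
    # The '?' placeholder keeps the query text identical across
    # calls; only the bound parameter changes per sensor.
    cur = conn.execute(
        "SELECT value FROM readings WHERE sensor_id = ? ORDER BY rowid",
        (sensor_id,),
    )
    return [v for (v,) in cur.fetchall()]

print(readings_for("s1"))  # [10.5, 11.0]
```

Because the query text never varies, a driver or server can cache the compiled plan across sensors, which is the provisioning benefit the title alludes to.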
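The authentication and authorization the abstract recommends can take many forms; one common building block is an HMAC tag over each request, verified in constant time. A sketch under assumed names (the secret, payloads, and function names are hypothetical, not the paper's scheme):

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice it would come from a
# credential store, never from source code.
SECRET = b"demo-key"

def sign(payload: bytes) -> str:
    # Authenticate a request payload with an HMAC-SHA256 tag.
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # Authorize the request only if the tag matches; compare_digest
    # runs in constant time to resist timing attacks.
    return hmac.compare_digest(sign(payload), tag)

tag = sign(b"sensor=s1")
print(verify(b"sensor=s1", tag))  # True
print(verify(b"sensor=s2", tag))  # False
```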
