Open Access
Competence region estimation for black-box surrogate models
Author(s) - Tapan Shah
Publication year - 2021
Publication title - Proceedings of the ... International Florida Artificial Intelligence Research Society Conference
Language(s) - English
Resource type - Journals
eISSN - 2334-0762
pISSN - 2334-0754
DOI - 10.32473/flairs.v34i1.128571
Subject(s) - dither , quantization (signal processing) , rounding , computer science , algorithm , linde–buzo–gray algorithm , artificial intelligence , machine learning , noise shaping , computer vision , operating system
With advances in edge applications for industry and healthcare, machine learning models are increasingly trained on the edge. However, storage and memory infrastructure at the edge are often primitive, due to cost and real-estate constraints. A simple, effective method is to learn machine learning models from quantized data stored with low arithmetic precision (1-8 bits). In this work, we introduce two stochastic quantization methods, dithering and stochastic rounding. In dithering, additive noise from a uniform distribution is added to the sample before quantization. In stochastic rounding, each sample is quantized to the upper level with probability p and to the lower level with probability 1-p. The key contributions of the paper are:
- For 3 standard machine learning models, Support Vector Machines, Decision Trees and Linear (Logistic) Regression, we compare the performance loss of standard static quantization and stochastic quantization on 55 classification and 30 regression datasets with 1-8 bit quantization.
- We showcase that for 4- and 8-bit quantization over regression datasets, stochastic quantization demonstrates statistically significant improvement.
- We investigate the performance loss as a function of dataset attributes, viz. number of features, standard deviation and skewness. This helps create a transfer function which recommends the best quantizer for a given dataset.
- We propose 2 future research areas: a) dynamic quantizer update, where the model is trained using streaming data and the quantizer is updated after each batch, and b) precision re-allocation under budget constraints, where different precision is used for different features.
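To make the two quantizers described in the abstract concrete, the sketch below shows one plausible realization of dithering and stochastic rounding on a uniform b-bit grid. It is a minimal illustration assuming min-max uniform quantization; the function names (`dither_quantize`, `stochastic_round_quantize`) and implementation details are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch of dithering and stochastic rounding for b-bit
# quantization of a feature vector. Assumes a uniform min-max grid;
# names and details are hypothetical, not taken from the paper.
import numpy as np

def uniform_levels(x, bits):
    """Return lower bound, step size and level count of a uniform b-bit grid."""
    lo, hi = x.min(), x.max()
    n_levels = 2 ** bits
    step = (hi - lo) / (n_levels - 1)
    return lo, step, n_levels

def dither_quantize(x, bits, rng=None):
    """Dithering: add uniform noise in [-step/2, step/2) to each sample,
    then round to the nearest quantization level."""
    rng = rng or np.random.default_rng()
    lo, step, n = uniform_levels(x, bits)
    noise = rng.uniform(-step / 2, step / 2, size=x.shape)
    idx = np.clip(np.round((x + noise - lo) / step), 0, n - 1)
    return lo + idx * step

def stochastic_round_quantize(x, bits, rng=None):
    """Stochastic rounding: quantize each sample to the upper level with
    probability p and to the lower level with probability 1-p, where p is
    the fractional distance to the lower level (unbiased in expectation)."""
    rng = rng or np.random.default_rng()
    lo, step, n = uniform_levels(x, bits)
    pos = (x - lo) / step
    lower = np.floor(pos)
    p = pos - lower                       # distance to the lower level
    up = rng.uniform(size=x.shape) < p    # round up with probability p
    idx = np.clip(lower + up, 0, n - 1)
    return lo + idx * step

# Example: quantize a feature column to 4 bits before training a model.
x = np.random.randn(1000)
x_dither = dither_quantize(x, bits=4)
x_sr = stochastic_round_quantize(x, bits=4)
```

With p chosen as the fractional distance to the lower level, stochastic rounding reproduces each sample in expectation, which is one common motivation for preferring it over deterministic rounding at low precision.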