Open Access
Simplified quantised kernel least mean square algorithm with fixed budget
Author(s) - Wang Shiyuan, Zheng Yunfei, Duan Shukai, Wang Lidan
Publication year - 2016
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2016.1799
Subject(s) - kernel (algebra) , algorithm , gradient descent , mathematics , mean squared error , computational complexity theory , mathematical optimization , minimum mean square error , square (algebra) , computer science , statistics , artificial neural network , combinatorics , geometry , artificial intelligence , estimator
The quantised kernel least mean square algorithm with fixed budget (QKLMS‐FB) is an effective method for constraining the final network size of QKLMS with only a small loss in accuracy. However, the significance of every centre in the dictionary must be computed at each iteration, so the computational complexity of QKLMS‐FB grows linearly with the number of centres. To reduce the computational cost while retaining accuracy, only the coefficient vector and the influence factor are used to measure the significance of each centre, yielding a novel simplified QKLMS‐FB (SQKLMS‐FB). In addition, the gradient descent method is applied in SQKLMS‐FB to update the coefficient of the closest centre, further improving accuracy. Simulations in both stationary and non‐stationary cases validate the proposed SQKLMS‐FB.
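
The abstract does not give the update equations, but a minimal sketch of this style of fixed-budget quantised KLMS filter might look as follows. The Gaussian kernel, the significance measure (|coefficient| weighted by an accumulated influence value), the influence decay, and all parameter names are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def gaussian_kernel(x, c, sigma):
    """Gaussian kernel between input x and centre c (assumed kernel choice)."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

class SQKLMSFB:
    """Hypothetical fixed-budget quantised KLMS filter in the spirit of SQKLMS-FB."""

    def __init__(self, step_size=0.5, sigma=1.0, quant_eps=0.1, budget=50, beta=0.95):
        self.eta = step_size      # gradient step size
        self.sigma = sigma        # kernel bandwidth
        self.eps = quant_eps      # quantisation threshold
        self.budget = budget      # maximum dictionary (network) size
        self.beta = beta          # influence (forgetting) factor, an assumption
        self.centres = []         # dictionary of centres
        self.coeffs = []          # coefficient of each centre
        self.influence = []       # accumulated influence of each centre

    def predict(self, x):
        return sum(a * gaussian_kernel(x, c, self.sigma)
                   for a, c in zip(self.coeffs, self.centres))

    def update(self, x, d):
        x = np.asarray(x, dtype=float)
        e = d - self.predict(x)

        # Decay the influence of all existing centres.
        self.influence = [self.beta * s for s in self.influence]

        if not self.centres:
            self.centres.append(x)
            self.coeffs.append(self.eta * e)
            self.influence.append(1.0)
            return e

        # Quantisation step: find the closest existing centre.
        dists = [np.linalg.norm(x - c) for c in self.centres]
        j = int(np.argmin(dists))

        if dists[j] <= self.eps:
            # Input is merged into the closest centre: update its coefficient
            # by gradient descent on the squared error instead of adding a centre.
            self.coeffs[j] += self.eta * e * gaussian_kernel(x, self.centres[j], self.sigma)
            self.influence[j] += 1.0
        else:
            # Add a new centre to the dictionary.
            self.centres.append(x)
            self.coeffs.append(self.eta * e)
            self.influence.append(1.0)

            # Enforce the fixed budget: discard the least significant centre,
            # with significance taken here as |coefficient| * influence.
            if len(self.centres) > self.budget:
                sig = [abs(a) * s for a, s in zip(self.coeffs, self.influence)]
                k = int(np.argmin(sig))
                del self.centres[k], self.coeffs[k], self.influence[k]
        return e
```

In this sketch the significance of a centre depends only on its coefficient and influence value, so pruning costs O(dictionary size) arithmetic per insertion rather than re-evaluating kernel sums for every centre; this mirrors the complexity reduction the abstract describes, under the stated assumptions.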
