Spike neuron optimization using deep reinforcement learning
Author(s) -
Tan Hui,
Mohamad Khairi Ishak
Publication year - 2021
Publication title -
IAES International Journal of Artificial Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.341
H-Index - 7
eISSN - 2252-8938
pISSN - 2089-4872
DOI - 10.11591/ijai.v10.i1.pp175-183
Subject(s) - computer science , spike (software development) , reinforcement learning , artificial neural network , artificial intelligence , artificial neuron , inhibitory postsynaptic potential , neuron , neuroscience , software engineering , biology
Deep reinforcement learning (DRL), which combines reinforcement learning with artificial neural networks, allows agents to take the best possible actions to achieve their goals. Spiking neural networks (SNNs) are difficult to train because the spike function of a spiking neuron is non-differentiable. To overcome this difficulty, a deep Q-network (DQN) and deep Q-learning with a normalized advantage function (NAF) are proposed to interact with a custom environment: DQN is applied to a discrete action space, whereas NAF is implemented for a continuous action space. The model is trained and tested with both algorithms to validate its ability to balance the firing rates of the excitatory and inhibitory populations of spiking neurons. Training results showed that both agents were able to explore the custom environment built with the OpenAI Gym framework, and the trained models for both algorithms were capable of balancing the excitatory and inhibitory firing rates of the spiking neuron. NAF achieved an average percentage error of 0.80% in the rate difference between the target and actual neuron firing rates, whereas DQN obtained 0.96%. NAF also reached the goal faster than DQN, taking only 3 steps for the actual output neuron firing rate to meet or approach the target firing rate.
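The abstract describes a custom OpenAI Gym-style environment in which an agent adjusts a spiking neuron population until its output firing rate matches a target. The paper does not give the environment's internals, so the sketch below is a minimal, hypothetical illustration of that setup: the class name, the simplified rate dynamics, the reward, and all constants are assumptions, not the authors' actual implementation. The continuous action matches the NAF case; a DQN variant would discretize the action into a fixed set of adjustments.

```python
import numpy as np

# Hypothetical sketch of a Gym-style environment for balancing a neuron
# population's firing rate against a target rate. All names, dynamics,
# and constants are illustrative assumptions, not the paper's environment.
class SpikeRateBalanceEnv:
    def __init__(self, target_rate=10.0, max_steps=50):
        self.target_rate = target_rate  # desired output firing rate (Hz)
        self.max_steps = max_steps      # episode length cap
        self.reset()

    def reset(self):
        self.rate = 0.0                 # current output firing rate (Hz)
        self.steps = 0
        return np.array([self.rate], dtype=np.float32)

    def step(self, action):
        # Continuous action (as in NAF): a signed adjustment to the drive
        # on the excitatory population, here applied directly to the rate.
        self.rate = max(0.0, self.rate + float(action))
        self.steps += 1
        error = abs(self.rate - self.target_rate)
        reward = -error                 # closer to target => higher reward
        done = error < 0.5 or self.steps >= self.max_steps
        return np.array([self.rate], dtype=np.float32), reward, done, {}


# Usage: a simple proportional controller standing in for a trained agent,
# stepping the rate toward the target until the episode terminates.
env = SpikeRateBalanceEnv(target_rate=10.0)
obs = env.reset()
for _ in range(env.max_steps):
    obs, reward, done, _ = env.step(0.5 * (env.target_rate - obs[0]))
    if done:
        break
print(f"final rate: {obs[0]:.2f} Hz after {env.steps} steps")
```

In the paper's setting, the hand-written controller above would be replaced by the DQN or NAF agent, and the percentage error reported in the abstract would be computed from the final gap between the actual and target firing rates.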
