
NEUROCOMPUTATIONAL MODELLING OF DISTRIBUTED LEARNING FROM VISUAL STIMULI
Author(s) -
Ankush Rai,
Jagadeesh Kannan R
Publication year - 2017
Publication title -
Asian Journal of Pharmaceutical and Clinical Research
Language(s) - English
Resource type - Journals
eISSN - 2455-3891
pISSN - 0974-2441
DOI - 10.22159/ajpcr.2017.v10s1.19645
Subject(s) - cognition, perception, stimulus (psychology), cognitive science, psychology, visual perception, mechanism (biology), cognitive model, visual processing, cognitive psychology, artificial intelligence, neuroscience, computer science, philosophy, epistemology
Neurocomputational modeling of visual stimuli can not only identify the neural substrates of attention but also test cognitive theories of attention, with applications in visual media, robotics, etc. However, while considerable research has been done on cognitive models for linguistics, studies on cognitive modeling of learning mechanisms for visual stimuli lag behind. Based on the operational principles of cognitive functionalities in human vision processing, this study presents the development of a neurocomputational cognitive model for visual perception with detailed algorithmic descriptions. Here, four essential questions of cognition and visual attention are considered and logically compressed into one unified neurocomputational model: (i) Segregation of special classes of stimuli and attention modulation, (ii) relation between gaze movements and visual perception, (iii) mechanism of selective stimulus processing and its encoding in neuronal cells, and (iv) mechanism of visual perception through autonomous relation proofing.
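The first two questions above, stimulus segregation with attention modulation and its coupling to gaze movements, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the `gain` parameter, the softmax normalization, and the winner-take-all gaze rule are illustrative assumptions standing in for the model's attention mechanism.

```python
import math

def attention_map(saliency, gain_mask, gain=2.0):
    """Modulate raw saliency by a top-down gain for a "special class" of
    stimuli, then normalise with a softmax (illustrative assumption)."""
    modulated = [s * (gain if g else 1.0) for s, g in zip(saliency, gain_mask)]
    z = sum(math.exp(m) for m in modulated)
    return [math.exp(m) / z for m in modulated]

def gaze_target(att):
    """Winner-take-all: the gaze shifts to the most attended location."""
    return max(range(len(att)), key=lambda i: att[i])

# Example: five locations; location 3 holds a task-relevant stimulus class,
# so top-down gain lets it outcompete the more salient location 1.
saliency = [0.2, 0.9, 0.4, 0.6, 0.1]
att = attention_map(saliency, gain_mask=[0, 0, 0, 1, 0], gain=2.0)
print(gaze_target(att))  # location 3 wins after gain modulation
```

The point of the sketch is only the coupling: attention modulation reweights bottom-up saliency, and the gaze (and hence subsequent perception) follows the modulated map rather than the raw stimulus.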