A Modeling Approach to the Human Spatial Orientation System
Author(s) - Mergner T., Becker W.
Publication year - 2003
Publication title - Annals of the New York Academy of Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.712
H-Index - 248
eISSN - 1749-6632
pISSN - 0077-8923
DOI - 10.1196/annals.1303.028
Subject(s) - computer science, orientation (vector space), perception, action (physics), modular design, representation, actuator, nonlinear system, motion (physics), artificial intelligence, support surface, computer vision, psychology, neuroscience
The human spatial orientation system is highly complex and nonlinear. It is therefore difficult to arrive at an unequivocal model of the underlying processing by merely combining the known elementary mechanisms (“bottom‐up” approach); additional “top‐down” concepts are required to narrow the choice between several formally equivalent solutions. Here we suggest a concept in which sensorimotor control is based on a meta‐level that provides an internal representation of the physical stimuli acting upon a subject (e.g., tilt of the support surface), whereas the classic reflex concept essentially proceeds from a direct coupling between physiological stimuli, sensors, and actuators. At the hypothesized meta‐level, the axial body segments are represented as a stack of superimposed platforms, with the lowermost platform (generally the feet) riding on a support surface that acts as the buttress for the subject's active movements. From the sensory point of view, this stack constitutes a system of nested references. This concept explains data from various experiments dealing with self‐ and object‐motion perception and body stabilization more exhaustively than the classic concept does. In our view, it provides a robust, flexible, and modular framework for perception and action in space.
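The nested-reference idea lends itself to a simple computational illustration. The following is a minimal, purely illustrative Python sketch (not taken from the paper; the function name and the one-dimensional roll-plane simplification are assumptions) of how each segment's orientation in space can be recovered by chaining joint angles, each sensed relative to the platform directly below, up from an estimate of support-surface tilt:

```python
# Illustrative sketch of the "stack of superimposed platforms" idea
# (hypothetical 1-D simplification, not the authors' implementation).

def orientations_in_space(support_tilt, relative_angles):
    """Return each segment's orientation in space, lowest segment first.

    support_tilt    -- estimated tilt of the support surface (rad)
    relative_angles -- each segment's angle relative to the platform
                       directly beneath it, feet first (rad)
    """
    orientations = []
    current = support_tilt
    for angle in relative_angles:
        current += angle            # nest each frame on the one below
        orientations.append(current)
    return orientations

# Example: surface tilted 0.10 rad; the ankle compensates by -0.08 rad,
# the hip adds 0.02 rad, the neck holds 0.00 rad relative to the trunk.
print([round(a, 3) for a in orientations_in_space(0.10, [-0.08, 0.02, 0.00])])
# -> [0.02, 0.04, 0.04]  (legs, trunk, head in space)
```

The point of the sketch is the direction of the computation: space-referenced estimates are built on top of an internal representation of the support surface, rather than being read out sensor by sensor, mirroring the meta-level concept described in the abstract.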
