Open Access
Multi‐level decision making in hierarchical multi‐agent robotic search teams
Author(s) -
Nasir Ali,
Salam Yasir,
Saleem Yasir
Publication year - 2016
Publication title -
The Journal of Engineering
Language(s) - English
Resource type - Journals
ISSN - 2051-3305
DOI - 10.1049/joe.2016.0076
Subject(s) - markov decision process , computer science , partially observable markov decision process , markov chain , task (project management) , process (computing) , partition (number theory) , artificial intelligence , multi agent system , markov process , machine learning , mathematical optimization , markov model , mathematics , engineering , statistics , systems engineering , combinatorics , operating system
This study addresses the problem of a team of robots searching an unknown area. The proposed solution is based on a hierarchical agent architecture: each agent is formulated as a Markov decision process (MDP), and the search policy is computed by solving the resulting MDPs. In the proposed approach, one agent acts as the leader and the remaining agents are members. The leader agent's main duty is to assign the member agents to partitions of the search space based on the information it receives; member agents, in turn, search their assigned partitions until either the goals are achieved or their assignment is changed. The proposed approach has several advantages over existing multi-agent search methods. First, the computations within each agent are limited to a single partition of the area at a time. Second, because the effective area each agent must explore is limited, less on-board memory is required to track how much of the search task has been completed. Third, since the MDP models are solved offline, the online computational requirement is reduced.
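To illustrate the offline step the abstract describes, the following is a minimal sketch of solving a member agent's MDP by value iteration over a single partition. The 1-D four-cell partition, the deterministic left/right actions, the reward placement, and the discount factor are all illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def value_iteration(n_states, transitions, rewards, gamma=0.95, tol=1e-6):
    """Solve a finite MDP offline.

    transitions[a] is an (S, S) row-stochastic matrix P(s' | s, a);
    rewards[a] is a length-S vector R(s, a). Returns the optimal value
    function and a greedy policy (one action index per state).
    """
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = R(s, a) + gamma * sum_{s'} P(s' | s, a) * V(s')
        Q = np.array([rewards[a] + gamma * transitions[a] @ V
                      for a in range(len(transitions))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy partition: 4 cells in a row; actions: 0 = move left, 1 = move right.
S = 4
P = np.zeros((2, S, S))
for s in range(S):
    P[0, s, max(s - 1, 0)] = 1.0      # moving left is deterministic
    P[1, s, min(s + 1, S - 1)] = 1.0  # moving right is deterministic
R = np.zeros((2, S))
R[1, S - 2] = 1.0  # reward for stepping into the rightmost (target) cell

V, policy = value_iteration(S, P, R)
print(policy)  # greedy policy directing each cell toward the target
```

Because the policy is computed once, offline, each member agent only needs to store this small lookup table (one action per cell of its partition) at run time, which is the memory and online-computation saving the abstract points to.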
