Open Access
Enhancing the Degree of Autonomy on a ‘Tier 1’ Unmanned Aerial Vehicle Using a Visual Landing Framework
Author(s) - Jeffrey W. Tweedale, Dion Gonano
Publication year - 2014
Publication title - Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2014.08.190
Subject(s) - computer science , autonomy , human–computer interaction , aeronautics , real time computing , artificial intelligence , political science , engineering , law
Humans continue to use tools to manually transform raw resources into valued outputs. The type of tool, amount of effort and form of energy required vary depending on the output; however, they now enable industry to manufacture goods with excellent quality and extremely high volume. Industry continues to invest heavily in machines so that people can operate productively. Similarly, researchers continue to pursue automation to increase the Degree of Autonomy (DOA) using Advanced Information Processing (AIP) techniques. Artificial Intelligence (AI), Computational Intelligence (CI) and Machine Intelligence (MI) now facilitate automation across numerous applications. The proposed Visual Landing Framework (VLF) design uses a Multi-Agent System (MAS) to facilitate the development of components that interoperate, via embedded business logic, to deliver the coordination and cooperation techniques required to automate a higher-level cognitive processing problem. As technology incorporates this ever-increasing Level of Automation (LOA), humans remain in charge and are retained to make higher-order decisions. Unlike humans, heuristic and declarative logic systems suffer under these conditions and need to adapt or make human-like decisions to succeed. This paper discusses one possible avenue of enhancing the DOA on a ‘Tier 1’ Unmanned Aerial Vehicle (UAV) by reducing the need for the human to concentrate on a difficult cognitive task. The Machine to Machine (M2M) autonomy component uses an on-board camera, the Open Source Computer Vision Library (OpenCV) and Scale-Invariant Feature Transform (SIFT) algorithms to translate a fixed ground reference into positional commands. With this increased LOA, the platform should be able to achieve greater independence and enable more autonomous behaviour within unmanned control systems.
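The final step the abstract describes, translating a detected ground reference into positional commands, can be sketched as a simple proportional mapping from the marker's pixel offset to motion corrections. This is an illustrative assumption, not the paper's implementation: the function name, axis conventions and gain below are hypothetical, and the marker centroid would in practice come from matched SIFT keypoints found via OpenCV.

```python
# Hypothetical sketch: map the detected ground reference's pixel position
# (e.g. the centroid of matched SIFT keypoints) to positional commands.
# The axis convention and proportional gain are illustrative assumptions.

def pixel_offset_to_command(marker_px, image_size, gain=0.01):
    """Return (forward, right) corrections for a downward-facing camera.

    marker_px:  (x, y) centroid of the ground reference in the frame.
    image_size: (width, height) of the camera frame in pixels.
    gain:       proportional gain converting pixel error to command units.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err_right = marker_px[0] - cx    # marker right of centre -> move right
    err_forward = cy - marker_px[1]  # image y points down -> invert sign
    return gain * err_forward, gain * err_right

# A marker centred in a 640x480 frame needs no correction:
# pixel_offset_to_command((320, 240), (640, 480)) -> (0.0, 0.0)
```

Closing this loop each frame steers the vehicle until the reference sits at the image centre, at which point the platform is aligned over the landing point.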
