Open Access
Embedding vision‐based advanced driver assistance systems: a survey
Author(s) - Gorka Velez, Oihana Otaegui
Publication year - 2017
Publication title - IET Intelligent Transport Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.579
H-Index - 45
eISSN - 1751-9578
pISSN - 1751-956X
DOI - 10.1049/iet-its.2016.0026
Subject(s) - advanced driver assistance systems , automation , computer science , embedding , software , lidar , systems engineering , artificial intelligence , human–computer interaction , engineering , remote sensing
Automated driving will have a major impact on society, creating new possibilities for mobility and reducing road accidents. Current developments aim to provide driver assistance in the form of conditional and partial automation. Computer vision, either alone or combined with other technologies such as radar or lidar, is one of the key technologies of advanced driver assistance systems (ADAS). The presence of vision technologies inside vehicles is expected to grow as automation levels increase. However, embedding a vision‐based driver assistance system poses a significant challenge due to the special features of vision algorithms, the existing constraints, and the strict requirements that must be fulfilled. The aim of this study is to show the current progress and future directions in the field of vision‐based embedded ADAS, bridging the gap between theory and practice. The different hardware and software options are reviewed, and design, development and testing considerations are discussed. Additionally, some outstanding challenges are identified.
