Open Access
Digital Robot Judge: Building a Task-Centric Performance Database of Real-World Manipulation With Electronic Task Boards
Author(s) -
Peter So,
Andriy Sarabakha,
Fan Wu,
Utku Culha,
Fares J. Abu-Dakka,
Sami Haddadin
Publication year - 2024
Publication title -
IEEE Robotics and Automation Magazine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.686
H-Index - 92
eISSN - 1558-223X
pISSN - 1070-9932
DOI - 10.1109/mra.2023.3336473
Subject(s) - robotics and control systems, aerospace, components, circuits, devices and systems, computing and processing, engineering profession, general topics for engineers, signal processing and analysis, transportation, power, energy and industry applications
Robotics aims to develop manipulation skills approaching human performance. However, skill complexity is often over- or underestimated based on individual experience, and the real-world performance gap is difficult or expensive to measure through in-person competitions. To bridge this gap, we propose a compact, internet-connected, electronic task board to measure manipulation performance remotely; we call it the digital robot judge, or "DR.J." By detecting key events on the board through performance circuitry, DR.J provides an alternative to transporting equipment to in-person competitions and serves as a portable test and data-generation system that captures and grades performances, making comparisons less expensive. Collected data are automatically published on a web dashboard that provides a living performance benchmark, visualizing improvements in the real-world manipulation skills of robot platforms worldwide over time. In this paper, we share the results of a proof-of-concept electronic task board with industry-inspired tasks used in an international competition in 2021 and 2022 to benchmark localization, insertion, and disassembly tasks. We present data from 10 DR.J task boards and a method to derive Relative Task Complexity (RTC) from timing data, and we compare robot solutions with a human performer. In the best case, robots performed 9× faster than humans on specialized tasks but achieved only 16% of human speed across the full set of tasks. Finally, we present the modular design, instructions, and software to replicate the electronic task board or to adapt it to new use cases, promoting task-centric benchmarking.
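The abstract's robot-versus-human comparison can be illustrated with a small sketch. This is a hypothetical example only: the paper's actual RTC formula and timing data are not given here, so the aggregation (human time divided by robot time per task, and total-time ratios across the task set) and all numbers below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of comparing robot and human performance from
# per-task timing data, in the spirit of the paper's timing-based
# benchmarking. Task names and times are invented for illustration.

def speed_ratio(human_s: float, robot_s: float) -> float:
    """Human completion time divided by robot time; >1 means robot is faster."""
    return human_s / robot_s

def overall_speed_fraction(human_times: dict, robot_times: dict) -> float:
    """Robot speed as a fraction of human speed over the full task set,
    aggregated by total completion time (an assumed aggregation)."""
    return sum(human_times.values()) / sum(robot_times.values())

# Illustrative timing data in seconds -- not from the paper.
human = {"localize": 4.0, "insert": 9.0, "disassemble": 12.0}
robot = {"localize": 1.0, "insert": 40.0, "disassemble": 120.0}

best_single_task = max(speed_ratio(human[t], robot[t]) for t in human)
overall = overall_speed_fraction(human, robot)
```

With these invented numbers, the robot is fastest on the localization task (4× the human) yet completes the full set at only about 16% of human speed, mirroring the pattern reported in the abstract: specialization can mask a large gap over a diverse task set.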
