SteerBench: a benchmark suite for evaluating steering behaviors
Author(s) -
Shawn Singh,
Mubbasir Kapadia,
Petros Faloutsos,
Glenn Reinman
Publication year - 2009
Publication title -
Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.277
Steering is a challenging task, required by nearly all agents in virtual worlds. There is a large and growing number of approaches for steering, and it is becoming increasingly important to ask a fundamental question: how can we objectively compare steering algorithms? To our knowledge, there is no standard way of evaluating or comparing the quality of steering solutions. This paper presents SteerBench: a benchmark framework for objectively evaluating steering behaviors for virtual agents. We propose a diverse set of test cases, metrics of evaluation, and a scoring method that can be used to compare different steering algorithms. Our framework can be easily customized by a user to evaluate specific behaviors and new test cases. We demonstrate our benchmark process on two example steering algorithms, showing the insight gained from our metrics. We hope that this framework can grow into a standard for steering evaluation. Copyright © 2009 John Wiley & Sons, Ltd.
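The abstract describes combining evaluation metrics into a score for comparing steering algorithms, without giving details here. The sketch below is a rough, illustrative take on what such a scoring step might look like; the metric names (collisions, time_to_goal, effort), the weights, and the lower-is-better convention are assumptions made for this example, not SteerBench's actual definitions.

```python
# Hypothetical sketch of a benchmark-style scoring step: a weighted
# combination of per-agent metrics, averaged over all agents in a test case.
# Metric names and weights are illustrative assumptions, not the paper's.

from dataclasses import dataclass

@dataclass
class AgentMetrics:
    collisions: int       # number of collisions during the test case
    time_to_goal: float   # seconds until the agent reached its goal
    effort: float         # accumulated effort (e.g., integrated acceleration)

def score_test_case(agents: list[AgentMetrics],
                    w_collisions: float = 50.0,
                    w_time: float = 1.0,
                    w_effort: float = 1.0) -> float:
    """Return a scalar score for one test case; lower is better here."""
    total = 0.0
    for m in agents:
        total += (w_collisions * m.collisions
                  + w_time * m.time_to_goal
                  + w_effort * m.effort)
    return total / len(agents)

if __name__ == "__main__":
    run = [AgentMetrics(collisions=0, time_to_goal=12.4, effort=8.1),
           AgentMetrics(collisions=2, time_to_goal=15.0, effort=9.7)]
    print(f"test-case score: {score_test_case(run):.2f}")
```

Weighting collisions heavily, as done above, reflects one plausible design choice: a collision-free but slow path is usually preferable to a fast path with contacts. Two algorithms run on the same test case could then be compared directly by their scores.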