Open Access
Analyzing the BBOB Results by Means of Benchmarking Concepts
Author(s) -
Olaf Mersmann,
Mike Preuß,
Heike Trautmann,
Bernd Bischl,
Claus Weihs
Publication year - 2014
Publication title -
Evolutionary Computation
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.732
H-Index - 82
eISSN - 1530-9304
pISSN - 1063-6560
DOI - 10.1162/evco_a_00134
Subject(s) - benchmarking , computer science , mathematics , business , marketing
We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? The two are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments, as a first step toward answering both questions. We analyze the 2009 and 2010 BBOB benchmark results by means of this framework and derive insights regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate the rankings obtained from algorithm evaluations on individual problems into a consensus ranking, the theoretical background of such aggregation, and common pitfalls to avoid. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these groups are reflected by previously proposed test problem characteristics, finding that this is not always the case.
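One standard way to aggregate per-problem rankings into a consensus, as discussed in the abstract, is the Borda count. The sketch below is illustrative only: the optimizer names and rankings are hypothetical, and the paper itself covers consensus ranking methods more generally rather than prescribing this particular one.

```python
# Minimal sketch of rank aggregation via Borda count. The algorithm
# names and the per-problem rankings below are invented for illustration;
# they are not results from the BBOB benchmarks.

def borda_consensus(rankings):
    """Aggregate per-problem rankings (best first) into a consensus.

    Each ranking is a list of algorithm names ordered from best to worst;
    an algorithm in position i of an n-item ranking scores n - 1 - i points.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, algo in enumerate(ranking):
            scores[algo] = scores.get(algo, 0) + (n - 1 - pos)
    # Sort by total score, highest first; ties broken alphabetically.
    return sorted(scores, key=lambda a: (-scores[a], a))

# Hypothetical rankings of three optimizers on three test problems:
rankings = [
    ["CMA-ES", "BFGS", "NelderMead"],
    ["CMA-ES", "NelderMead", "BFGS"],
    ["BFGS", "CMA-ES", "NelderMead"],
]
print(borda_consensus(rankings))  # ['CMA-ES', 'BFGS', 'NelderMead']
```

Note that the Borda count is only one possible consensus method; the choice of aggregation scheme matters, since different schemes can produce different consensus rankings from the same raw data, which is one of the pitfalls the paper warns about.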
