What are EPMs?
Empirical performance models (EPMs) are regression models that characterize a given algorithm's performance across problem instances and/or parameter settings. They can predict the performance of an algorithm on previously unseen inputs, including unseen problem instances and untested parameter settings. This makes them useful for analyzing how an algorithm performs under different conditions, for selecting promising configurations for a new problem instance, and for constructing surrogate benchmarks.
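The input/output interface of an EPM can be sketched as follows. This is a minimal, hypothetical example: real EPMs are typically random-forest models trained on measured runtimes, whereas here a simple k-nearest-neighbour regressor and synthetic data stand in, just to illustrate the mapping (instance features + parameter settings) → predicted performance.

```python
# Hedged sketch of an EPM as a k-nearest-neighbour regressor.
# All features, parameters, and runtimes below are synthetic/hypothetical.
import math
import random

random.seed(0)

def make_point():
    # Hypothetical: 2 instance features + 1 continuous algorithm parameter
    feat = [random.random(), random.random()]
    param = [random.random()]
    # Hypothetical runtime: depends on the instance and the configuration
    runtime = 2.0 * feat[0] + (param[0] - 0.5) ** 2 + 0.01 * random.random()
    return feat + param, runtime

# "Measured" performance data the EPM is trained on
training = [make_point() for _ in range(300)]

def predict_runtime(x, k=5):
    """Predict performance by averaging the k nearest training points."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in training)
    return sum(y for _, y in dists[:k]) / k

# Query the model for a previously unseen instance/configuration pair
pred = predict_runtime([0.4, 0.7, 0.3])
```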
Efficient Benchmarking Using Surrogate Benchmarks
Surrogate benchmarks enable an efficient comparison of different configuration procedures (including algorithm configurators and hyperparameter optimizers): we replace the costly evaluation of the real target algorithm with a prediction of an EPM. This reduces the time required to evaluate one configuration to less than a second and allows extensive, yet computationally feasible, empirical comparisons.
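The idea can be sketched in a few lines. The EPM below is a hypothetical stand-in (a fixed quadratic response, not a trained model), and `surrogate_evaluate` and the random-search configurator are illustrative names, not part of the paper's implementation; the point is only that the configurator queries the model instead of running the expensive target algorithm.

```python
# Hedged sketch: a configurator evaluates configurations against an EPM
# surrogate instead of the real target algorithm.
import random

random.seed(1)

def epm_predict(config):
    # Stand-in for a trained EPM: hypothetical runtime surface with
    # its optimum at config = 0.2.
    return 1.0 + 10.0 * (config - 0.2) ** 2

def surrogate_evaluate(config):
    # Returns the EPM prediction in microseconds, instead of actually
    # running the target algorithm (which could take minutes or hours).
    return epm_predict(config)

# Plain random search over the surrogate: cheap enough to repeat many times
best_config, best_runtime = None, float("inf")
for _ in range(1000):
    config = random.random()
    runtime = surrogate_evaluate(config)
    if runtime < best_runtime:
        best_config, best_runtime = config, runtime
```

Because each "evaluation" is just a model prediction, whole configuration runs can be replayed thousands of times, which is what makes large-scale empirical comparisons of configurators feasible.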
K. Eggensperger, M. Lindauer, H. H. Hoos, F. Hutter and K. Leyton-Brown
Efficient Benchmarking of Algorithm Configurators via Model-Based Surrogates
Machine Learning (2018)
A Python implementation for creating surrogate benchmarks for algorithm configuration can be found on this website (external). Data used to train EPMs can be downloaded here. Each .tar.gz archive contains performance data, additional information, and a short description of the file formats:
- (MIP) CPLEX on Regions200
- (MIP) CPLEX on RCW
- (SAT) Probsat on 7SAT90
- (SAT) Minisat on randomK3
- (SAT) Clasp on Rooks
- (SAT) Lingeling on Circuitfuzz
- (Planning) LPG on zenotravel
- (Planning) LPG on satellite
- (ASP) Clasp on weighted
- (ML) XGBoost on covertype
- (ML) SVM on MNIST
Previous work on EPMs (including a MATLAB implementation) can be found here (external website).