Progress in practical hyperparameter tuning is often hampered by the lack of standardized benchmark problems. To alleviate this problem, we introduce HPOlib, a library that provides a unified interface to synthetic benchmark functions and machine learning tasks. It is available on GitHub:
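To illustrate the kind of synthetic benchmark function such a library wraps, here is a plain-Python sketch of the well-known Branin test function, a standard two-dimensional benchmark for hyperparameter optimizers (this is an illustrative stand-alone implementation, not HPOlib's actual API):

```python
import math

def branin(x1, x2):
    """Branin test function, a common synthetic optimization benchmark.

    Usually evaluated on x1 in [-5, 10], x2 in [0, 15].
    Its global minimum value is approximately 0.397887, attained at
    (-pi, 12.275), (pi, 2.275), and (9.42478, 2.475).
    """
    a = 1.0
    b = 5.1 / (4.0 * math.pi ** 2)
    c = 5.0 / math.pi
    r = 6.0
    s = 10.0
    t = 1.0 / (8.0 * math.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 \
        + s * (1.0 - t) * math.cos(x1) + s

# Evaluate at one of the known global minima.
print(branin(math.pi, 2.275))  # approximately 0.397887
```

A benchmark library exposes many such functions behind one common interface, so that an optimizer like SMAC, Spearmint, or TPE can be pointed at any of them without per-benchmark glue code.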
The website above documents HPOlib2, a rewrite of HPOlib that concentrates on the central component of hyperparameter optimization research: the benchmarks themselves. The old HPOlib website is still available:
Our findings were published in the NIPS Workshop on Bayesian Optimization in Theory and Practice (BayesOpt ’13):
Towards an Empirical Foundation for Assessing Bayesian Optimization of Hyperparameters [pdf] [bib] [poster]
The paper includes results for SMAC, Spearmint, and TPE on the benchmarks we provide.