Hyperparameter optimization is a powerful approach for achieving the best performance on many different problems. However, classical approaches to this problem ignore the iterative nature of many algorithms. Dynamic algorithm configuration (DAC) generalizes over prior optimization approaches and, in addition, can handle hyperparameters that need to be adjusted over multiple time-steps. To use this framework, we need to move away from the classical view of algorithms as black boxes and towards a gray-box or even white-box view, unleashing the full potential of AI algorithms with DAC.
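To make the gray-box view concrete, below is a minimal, hypothetical sketch of a DAC control loop: at every time step, a policy observes the internal state of the running algorithm and sets a hyperparameter for the next step. The toy algorithm, its state features, and the hand-written policy are illustrative assumptions, not the interfaces or benchmarks from the papers listed below; in DAC the policy would be learned, for example with reinforcement learning.

```python
# Minimal sketch of a DAC control loop on a toy target algorithm.
# ToyGradientDescent and the hand-crafted policy are hypothetical
# stand-ins, not the authors' actual benchmark code or API.

class ToyGradientDescent:
    """Gray-box view: the algorithm exposes internal state at every step."""

    def __init__(self, x0=10.0):
        self.x = x0
        self.prev_loss = self.loss()

    def loss(self):
        return self.x ** 2  # toy objective f(x) = x^2

    def state(self):
        # Observable internal state that the DAC policy conditions on.
        return {"loss": self.loss(), "improvement": self.prev_loss - self.loss()}

    def step(self, learning_rate):
        # The dynamically configured hyperparameter is applied here.
        self.prev_loss = self.loss()
        self.x -= learning_rate * 2 * self.x  # gradient of x^2 is 2x


def policy(state):
    # Hand-crafted stand-in for a learned policy: use a large step size
    # while progress is being made, then shrink it. In DAC this mapping
    # from state to hyperparameter value is what gets learned.
    return 0.4 if state["improvement"] > 1e-3 else 0.05


algo = ToyGradientDescent()
for t in range(20):
    lr = policy(algo.state())  # hyperparameter chosen per time step
    algo.step(lr)
print(f"final x = {algo.x:.6f}, final loss = {algo.loss():.6f}")
```

Classical (static) algorithm configuration would fix a single learning rate for the entire run; the per-step choice above is exactly the extra flexibility DAC adds.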
References
- G. Shala, A. Biedenkapp, N. Awad, S. Adriaensen, M. Lindauer and F. Hutter
Learning Step-Size Adaptation in CMA-ES
In: Proceedings of the Sixteenth International Conference on Parallel Problem Solving from Nature (PPSN’20)
Link to source code and data, as well as trained policies and an accompanying blog post.
Link to the video poster presentation at PPSN’20.
- D. Speck, A. Biedenkapp, F. Hutter, R. Mattmüller and M. Lindauer
Learning Heuristic Selection with Dynamic Algorithm Configuration
In: Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL@ICAPS’20)
Link to source code and data for using DAC to switch heuristics in AI planning.
Link to the video presentation at PRL@ICAPS’20.
- A. Biedenkapp, H. F. Bozkurt, T. Eimer, F. Hutter and M. Lindauer
Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework
In: Proceedings of the Twenty-fourth European Conference on Artificial Intelligence (ECAI’20)
Link to source code for using DAC on artificial benchmarks. Link to accompanying blog post.
Link to the video presentation at ECAI’20.
- A. Biedenkapp, H. F. Bozkurt, F. Hutter and M. Lindauer
Towards White-box Benchmarks for Algorithm Control
In: DSO Workshop@IJCAI’19
Note: In this early work, we referred to DAC as “Algorithm Control”.