DACBench: Benchmarking Dynamic Algorithm Configuration
Posted on June 24, 2021
Dynamic Algorithm Configuration (DAC) has been shown to significantly improve algorithm performance over static or even handcrafted dynamic hyperparameter policies [Biedenkapp et al., 2020]. Most algorithms, however, are not designed with DAC in mind and have to be adapted to be controlled online. This requires a great deal of familiarity with the target algorithm as […]
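For readers who want a feel for the library, a minimal random-policy rollout on one of DACBench's toy benchmarks might look like the sketch below. The `SigmoidBenchmark` class and the gym-style loop follow the project's documentation as I understand it, so check the DACBench docs for the current interface.

```python
# Illustrative random-policy rollout; SigmoidBenchmark and get_environment()
# follow the DACBench documentation and should be checked against the
# current release.
from dacbench.benchmarks import SigmoidBenchmark

bench = SigmoidBenchmark()      # artificial benchmark with known dynamics
env = bench.get_environment()   # gym-style env wrapping the target algorithm

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()   # stand-in for a learned DAC policy
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward}")
```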
AutoRL: AutoML for RL
Posted on April 19, 2021
Reinforcement learning (RL) has shown impressive results in a variety of applications. Well-known examples include game and video game playing, robotics, and, recently, the “Autonomous navigation of stratospheric balloons”. A lot of these successes came about by combining the expressiveness of deep learning with the power of RL. Already on their own, though, both frameworks […]
Auto-Sklearn – What happened in 2020
Posted on February 26, 2021
2020 is over. Time to look back at the major features we introduced to Auto-Sklearn over the past year.
AutoML adoption in software engineering for machine learning
Posted on December 10, 2020
By Koen van der Blom, Holger Hoos, Alex Serban, and Joost Visser
In our global survey among teams that build ML applications, we found ample room for increased adoption of AutoML techniques. While AutoML is adopted at least partially by more than 70% of teams in research labs and tech companies, for teams in non-tech and […]
Neural Ensemble Search for Uncertainty Estimation and Dataset Shift
Posted on December 1, 2020
In many real world scenarios, deep learning models such as neural networks are deployed to make predictions on data coming from a shifted distribution (aka covariate shift) or out-of-distribution (OOD) data not at all represented in the training set. Examples include blurred or noisy images, unknown objects in images or videos, a new frequency band […]
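As background on how ensembles are typically used for uncertainty estimation in this setting: average the members' predicted class probabilities and use the predictive entropy of the averaged distribution as an uncertainty score, which tends to be high on shifted or OOD inputs. A self-contained illustration with placeholder outputs (this shows the general ensemble-uncertainty recipe, not the paper's NES method itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder: softmax outputs of M ensemble members for N inputs, C classes.
M, N, C = 5, 4, 3
logits = rng.normal(size=(M, N, C))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Ensemble prediction: average the members' probabilities.
ensemble_probs = probs.mean(axis=0)                      # shape (N, C)

# Predictive entropy as a simple per-input uncertainty score:
# high entropy -> the ensemble is unsure, e.g. on shifted or OOD data.
entropy = -(ensemble_probs * np.log(ensemble_probs + 1e-12)).sum(axis=-1)
print(entropy)
```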
Auto-PyTorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL
Posted on November 27, 2020
Auto-PyTorch is a framework for automated deep learning (AutoDL) that uses BOHB as a backend to optimize the full deep learning pipeline, including data preprocessing, network training techniques, and regularization methods. Auto-PyTorch is the successor of AutoNet, which was one of the first frameworks to perform this joint optimization.
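As a rough sketch of what using the framework looks like on a tabular task: the class and argument names below are based on my reading of the Auto-PyTorch README and may differ between versions, so treat this as illustrative rather than a definitive usage guide.

```python
# Illustrative only: class and argument names follow the Auto-PyTorch
# README and may differ between versions.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from autoPyTorch.api.tabular_classification import TabularClassificationTask

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

api = TabularClassificationTask()
api.search(
    X_train=X_train, y_train=y_train,
    X_test=X_test, y_test=y_test,
    optimize_metric="accuracy",
    total_walltime_limit=300,   # seconds for the whole BOHB-driven search
)
y_pred = api.predict(X_test)
print(accuracy_score(y_test, y_pred))
```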
NAS-Bench-301 and the Case for Surrogate NAS Benchmarks
Posted on October 9, 2020
The Need for Realistic NAS Benchmarks Neural Architecture Search (NAS) is a logical next step in representation learning as it removes human bias from architecture design, similar to deep learning removing human bias from feature engineering. As such, NAS has experienced rapid growth in recent years, leading to state-of-the-art performance on many tasks. However, empirical […]
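The idea behind a surrogate benchmark is simple to sketch: fit a regression model once on (architecture encoding, achieved accuracy) pairs collected from real training runs, then let benchmark users query the regressor instead of training networks. The snippet below illustrates this with synthetic data and a generic sklearn regressor; it is not the NAS-Bench-301 API.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for (architecture encoding, validation accuracy) pairs
# that would normally come from thousands of real training runs.
X_archs = rng.integers(0, 5, size=(1000, 16)).astype(float)  # toy encodings
y_acc = 0.9 - 0.01 * X_archs.std(axis=1) + rng.normal(0, 0.005, 1000)

surrogate = GradientBoostingRegressor().fit(X_archs, y_acc)

# Benchmark users now query the surrogate instead of training networks:
new_arch = rng.integers(0, 5, size=(1, 16)).astype(float)
print(f"Predicted accuracy: {surrogate.predict(new_arch)[0]:.4f}")
```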
Learning Step-Size Adaptation in CMA-ES
Posted on August 5, 2020
In a nutshell: in CMA-ES, the step size controls how quickly the population traverses the search space. Large steps let you quickly skip over uninteresting areas (exploration), whereas small steps allow a more focused traversal of interesting areas (exploitation). Handcrafted heuristics usually trade off small and large steps given some measure […]
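To make that trade-off concrete, here is one of the classic handcrafted heuristics in plain numpy: Rechenberg's 1/5th success rule for a simple (1+1)-ES, which grows the step size while mutations keep succeeding and shrinks it otherwise. This is an illustrative stand-in, not CMA-ES's own cumulative step-size adaptation and not the learned policy from the post.

```python
import numpy as np

def one_plus_one_es(f, x, sigma=1.0, iters=500, seed=0):
    """(1+1)-ES with the 1/5th success rule: enlarge the step size when
    mutations succeed often (explore), shrink it when they rarely do
    (exploit). Equilibrium sits at a 1/5 success rate."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)  # Gaussian mutation
        fy = f(y)
        if fy < fx:                   # success: keep the child, grow sigma
            x, fx = y, fy
            sigma *= np.exp(0.2)
        else:                         # failure: shrink sigma slightly
            sigma *= np.exp(-0.05)
    return x, fx, sigma

sphere = lambda z: float(np.sum(z ** 2))
x_best, f_best, sigma = one_plus_one_es(sphere, np.full(10, 5.0))
print(f"f(x)={f_best:.3e}, final sigma={sigma:.3e}")
```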
Playing Games with Progressive Episode Lengths
Posted on August 3, 2020
A framework for ES-based training with limited, progressively increasing episode lengths.
Auto-Sklearn 2.0: The Next Generation
Posted on July 13, 2020
Since our initial release of auto-sklearn 0.0.1 in May 2016 and the publication of the NeurIPS paper “Efficient and Robust Automated Machine Learning” in 2015, we have spent a lot of time on maintaining, refactoring and improving code, but also on new research. Now, we’re finally ready to share the next version of our flagship […]