AutoML.org

Freiburg-Hannover-Tübingen

DACBench: Benchmarking Dynamic Algorithm Configuration

Dynamic Algorithm Configuration (DAC) has been shown to significantly improve algorithm performance over static or even handcrafted dynamic hyperparameter policies [Biedenkapp et al., 2020]. Most algorithms, however, are not designed with DAC in mind and have to be adapted to be controlled online. This requires a great deal of familiarity with the target algorithm as […]
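
As a toy illustration of the idea (this is not DACBench's actual interface), a dynamic policy picks a hyperparameter value from the algorithm's current state at every step, whereas a static configuration fixes it once up front:

```python
# Toy sketch: dynamic vs. static hyperparameter policies for gradient
# descent on f(x) = x^2. All names and numbers here are illustrative.

def run_gradient_descent(policy, steps=50, x0=10.0):
    """Minimize f(x) = x^2, letting `policy` choose the learning rate each step."""
    x = x0
    for t in range(steps):
        grad = 2 * x                  # f'(x) for f(x) = x^2
        lr = policy(t, abs(grad))     # hyperparameter chosen online from state
        x -= lr * grad
    return abs(x)                     # distance to the optimum x* = 0

static = lambda t, g: 0.01                       # static configuration
dynamic = lambda t, g: 0.4 if g > 1 else 0.05    # simple state-dependent policy
```

In this toy setting the state-dependent policy takes large steps far from the optimum and small steps near it, and ends up much closer to the minimum than the fixed configuration.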

AutoRL: AutoML for RL

Reinforcement learning (RL) has shown impressive results in a variety of applications. Well-known examples include game and video game playing, robotics, and, recently, “Autonomous navigation of stratospheric balloons”. A lot of these successes came about by combining the expressiveness of deep learning with the power of RL. Already on their own, though, both frameworks […]

Auto-Sklearn – What happened in 2020

2020 is over. Time to look back at the major features we introduced to Auto-Sklearn.

AutoML adoption in software engineering for machine learning

By Koen van der Blom, Holger Hoos, Alex Serban, and Joost Visser

In our global survey among teams that build ML applications, we found ample room for increased adoption of AutoML techniques. While AutoML is adopted at least partially by more than 70% of teams in research labs and tech companies, for teams in non-tech and […]

Neural Ensemble Search for Uncertainty Estimation and Dataset Shift

In many real-world scenarios, deep learning models such as neural networks are deployed to make predictions on data coming from a shifted distribution (also known as covariate shift) or on out-of-distribution (OOD) data not represented in the training set at all. Examples include blurred or noisy images, unknown objects in images or videos, a new frequency band […]
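
The general ensemble idea behind this line of work can be sketched in a few lines (a generic illustration, not the Neural Ensemble Search method itself): average the members' class probabilities, and use the entropy of the averaged prediction as an uncertainty signal that rises when members disagree on shifted inputs:

```python
import math

def ensemble_predict(member_probs):
    """Average the class-probability outputs of the ensemble members."""
    n = len(member_probs)
    k = len(member_probs[0])
    return [sum(p[c] for p in member_probs) / n for c in range(k)]

def predictive_entropy(probs):
    """Entropy of the averaged prediction; higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Members agree on an in-distribution input ...
in_dist = [[0.90, 0.10], [0.85, 0.15], [0.92, 0.08]]
# ... but disagree on a shifted / out-of-distribution input.
shifted = [[0.90, 0.10], [0.20, 0.80], [0.50, 0.50]]
```

Here the disagreement on the shifted input pushes the averaged prediction toward uniform, so its predictive entropy is higher than on the in-distribution input.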

Auto-PyTorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL

Auto-PyTorch is a framework for automated deep learning (AutoDL) that uses BOHB as a backend to optimize the full deep learning pipeline, including data preprocessing, network training techniques, and regularization methods. Auto-PyTorch is the successor of AutoNet, which was one of the first frameworks to perform this joint optimization.

NAS-Bench-301 and the Case for Surrogate NAS Benchmarks

The Need for Realistic NAS Benchmarks

Neural Architecture Search (NAS) is a logical next step in representation learning as it removes human bias from architecture design, similar to deep learning removing human bias from feature engineering. As such, NAS has experienced rapid growth in recent years, leading to state-of-the-art performance on many tasks. However, empirical […]

Learning Step-Size Adaptation in CMA-ES

In a Nutshell

In CMA-ES, the step size controls how fast or slow a population traverses the search space. Large steps let you quickly skip over uninteresting areas (exploration), whereas small steps allow a more focused traversal of interesting areas (exploitation). Handcrafted heuristics usually trade off small and large steps given some measure […]
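
One such handcrafted heuristic is the classic 1/5th success rule. The sketch below uses a simple (1+1)-ES rather than CMA-ES itself, but it shows the same trade-off in action: grow the step size after successful mutations (explore), shrink it after failures (exploit):

```python
import random

def one_fifth_es(f, x0, sigma=1.0, iters=200, seed=0):
    """(1+1)-ES with a 1/5th-success-rule style step-size heuristic.
    A far simpler handcrafted scheme than CMA-ES's adaptation, used here
    only to illustrate the explore/exploit trade-off."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]   # Gaussian mutation
        fy = f(y)
        if fy <= fx:          # success: keep the offspring ...
            x, fx = y, fy
            sigma *= 1.5      # ... and take larger steps (exploration)
        else:
            sigma *= 0.9      # failure: focus the search (exploitation)
    return x, fx

# Sphere function, a standard toy objective for evolution strategies.
sphere = lambda v: sum(vi * vi for vi in v)
```

Starting from a point like `[5.0, 5.0]`, the rule lets the step size grow while improvements are easy and shrink as the search closes in on the optimum.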

Playing Games with Progressive Episode Lengths

A framework for evolution-strategies-based training with progressively increasing episode lengths.

Auto-Sklearn 2.0: The Next Generation

Since our initial release of auto-sklearn 0.0.1 in May 2016 and the publication of the NeurIPS paper “Efficient and Robust Automated Machine Learning” in 2015, we have spent a lot of time not only on maintaining, refactoring, and improving the code, but also on new research. Now, we’re finally ready to share the next version of our flagship […]
