AutoML.org

Freiburg-Hannover-Tübingen

Contextualize Me – The Case for Context in Reinforcement Learning

Authors: Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, and Marius Lindauer TL;DR: We can model and investigate generalization in RL with contextual RL and our benchmark library CARL. In theory, we cannot achieve optimal performance without taking context into account, and in our experiments we saw that using context […]
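
To make the core idea concrete, here is a minimal sketch of a context-conditioned environment (illustrative only, not CARL's actual API): the context changes the dynamics, and the agent may or may not get to observe it.

```python
# Minimal sketch of the contextual-RL idea (illustrative only, NOT CARL's
# actual API): the context changes the environment's dynamics, and the
# agent may or may not get to observe it.
import numpy as np
import gymnasium as gym


class ContextualCartPole(gym.Wrapper):
    """CartPole whose pole length is set by a context; optionally appends
    the context to the observation so the policy can adapt to it."""

    def __init__(self, context: dict, hide_context: bool = False):
        super().__init__(gym.make("CartPole-v1"))
        self.context = context
        self.hide_context = hide_context
        # Change the underlying dynamics according to the context.
        self.env.unwrapped.length = context["pole_length"]

    def _augment(self, obs):
        # A full implementation would also update self.observation_space.
        if self.hide_context:
            return obs
        return np.concatenate([obs, [self.context["pole_length"]]])

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        return self._augment(obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        return self._augment(obs), reward, terminated, truncated, info


# Train and evaluate across a distribution of contexts to probe generalization.
envs = [ContextualCartPole({"pole_length": l}) for l in (0.3, 0.5, 0.8)]
```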

Read More

Hyperparameter Tuning in Reinforcement Learning is Easy, Actually

Hyperparameter optimization tools perform well in reinforcement learning, outperforming grid search with less than 10% of the budget. If not reported correctly, however, hyperparameter tuning can heavily skew future comparisons.
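
As a hedged illustration of the budget argument, here is a minimal random-search sketch over two PPO hyperparameters (this assumes stable-baselines3 and gymnasium are installed; a real study would use a dedicated HPO tool such as SMAC or Optuna and multiple seeds per configuration):

```python
# Minimal random-search sketch for RL hyperparameters. Ten sampled
# configurations correspond to ~10% of a 10x10 grid's budget.
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

rng = np.random.default_rng(0)
best_score, best_config = -np.inf, None

for _ in range(10):
    config = {
        "learning_rate": 10 ** rng.uniform(-5, -3),  # log-uniform sampling
        "gamma": 1 - 10 ** rng.uniform(-3, -1),
    }
    model = PPO("MlpPolicy", "CartPole-v1", **config, verbose=0)
    model.learn(total_timesteps=50_000)
    mean_reward, _ = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
    if mean_reward > best_score:
        best_score, best_config = mean_reward, config

print(best_config, best_score)  # report search space, budget, and seeds!
```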

Read More

Learning Activation Functions for Sparse Neural Networks: Improving Accuracy in Sparse Models

Authors: Mohammad Loni, Aditya Mohan, Mehdi Asadi, and Marius Lindauer TL;DR: Optimizing the activation functions and hyperparameters of sparse neural networks helps us squeeze more performance out of them, which in turn makes it easier to deploy models in resource-constrained scenarios. We propose a two-stage optimization pipeline to achieve this. Motivation: Sparse Neural Networks (SNNs) – the greener and […]
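
To give a flavor of what a two-stage pipeline can look like, here is a toy PyTorch sketch (our own simplified illustration, not the paper's exact method): stage one picks an activation function for a magnitude-pruned MLP, stage two tunes a training hyperparameter for the winner.

```python
# Toy two-stage sketch (a simplified illustration, not the paper's exact
# pipeline): stage 1 picks an activation for a magnitude-pruned MLP,
# stage 2 tunes the learning rate for the winning activation.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def make_sparse_mlp(act: nn.Module, sparsity: float = 0.9) -> nn.Sequential:
    net = nn.Sequential(nn.Linear(20, 64), act, nn.Linear(64, 2))
    for m in net:
        if isinstance(m, nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=sparsity)
    return net


def proxy_score(net: nn.Module, lr: float = 1e-3, steps: int = 200) -> float:
    """Final training loss on dummy data, as a cheap stand-in for a proper
    validation metric."""
    X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(net(X), y)
        loss.backward()
        opt.step()
    return loss.item()


# Stage 1: search over candidate activation functions on the sparse model.
candidates = {"relu": nn.ReLU(), "gelu": nn.GELU(), "tanh": nn.Tanh()}
best_act = min(candidates, key=lambda k: proxy_score(make_sparse_mlp(candidates[k])))

# Stage 2: tune hyperparameters (here just the learning rate) for the winner.
best_lr = min((1e-4, 1e-3, 1e-2),
              key=lambda lr: proxy_score(make_sparse_mlp(candidates[best_act]), lr=lr))
print(best_act, best_lr)
```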

Read More

Understanding AutoRL Hyperparameter Landscapes

Authors: Aditya Mohan, Carolin Benjamins, Konrad Wienecke, Alexander Dockhorn, and Marius Lindauer TL;DR: We investigate hyperparameters in RL by building landscapes of algorithm performance for different hyperparameter values at different stages of training. Using these landscapes, we empirically demonstrate that adjusting hyperparameters during training can improve performance, which opens up new avenues to build better […]
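
To give a flavor of how such landscapes can be built, here is a simplified sketch (assuming stable-baselines3 and gymnasium; the actual study is far more thorough): train to a stage, checkpoint, then branch the run with several candidate values and record each branch's performance.

```python
# Simplified landscape sketch: at each training stage, branch the run with
# several candidate learning rates and record each branch's performance.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

model = PPO("MlpPolicy", "CartPole-v1", verbose=0)
landscape = {}

for stage in (1, 2):                         # two stages of training
    model.learn(total_timesteps=20_000, reset_num_timesteps=False)
    model.save("stage_ckpt")                 # checkpoint this stage
    for lr in (1e-4, 3e-4, 1e-3):            # branch with candidate values
        branch = PPO.load("stage_ckpt", env=gym.make("CartPole-v1"),
                          custom_objects={"learning_rate": lr})
        branch.learn(total_timesteps=10_000)
        score, _ = evaluate_policy(branch, branch.get_env(), n_eval_episodes=5)
        landscape[(stage, lr)] = score       # one point of the landscape

print(landscape)  # performance per (training stage, hyperparameter value)
```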

Read More

Call for Datasets: OpenML 2023 Benchmark Suites

Algorithm benchmarks are a beacon for machine learning research. They allow us, as a community, to track progress over time, identify challenging issues, raise the bar, and learn how to do better. The OpenML.org platform already serves thousands of datasets together with tasks (combinations of a dataset with a target attribute, a performance metric […]
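
For context, this is how contributed datasets are typically consumed downstream via the openml Python package (the dataset ID and suite name below are just examples):

```python
# Fetching datasets and benchmark tasks via the openml Python package
# (pip install openml); the IDs and suite name are examples.
import openml

# Fetch a single dataset and split it into features and target.
dataset = openml.datasets.get_dataset(31)  # 31 = "credit-g"
X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)

# Benchmark suites bundle curated tasks, i.e., dataset + target attribute
# + evaluation protocol, so results are comparable across papers.
suite = openml.study.get_suite("OpenML-CC18")
task = openml.tasks.get_task(suite.tasks[0])
train_idx, test_idx = task.get_train_test_split_indices()
```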

Read More

Can Fairness be Automated?

At the risk of sounding cliché, “with great power comes great responsibility.” While we don’t want to suggest that machine learning (ML) practitioners are superheroes, what was true for Spider-Man is also true for those building predictive models – and even more so for those building AutoML tools. Only last year, the Netherlands Institute for […]

Read More

Zero-Shot Selection of Pretrained Models

Deep learning (DL) has celebrated many successes, but finding the right model for a given dataset is still a challenge, especially on a limited budget. Adapting DL models to new problems can be computationally intensive and requires comprehensive training data. On tabular data, AutoML solutions like Auto-sklearn and AutoGluon work very well. However, […]

Read More

Wrapping Up AutoML-Conf 2022 and Introducing the 2023 Edition

The inaugural AutoML Conference 2022 was an exciting adventure for us! With 170 attendees at this very first iteration, we consider the conference a big success, and it confirmed our belief that it was the right time to transition from a workshop series to a full-fledged conference. In this blog post, we summarize […]

Read More

Review of the Year 2022 (Hannover)

by the AutoML Hannover Team The year 2022 was an exciting year for us. So much happened: at Leibniz University Hannover (LUH), we founded our new Institute of Artificial Intelligence, LUH|AI for short; Marius got tenure and was promoted to full professor; the group continues to grow with our new team members Alexander […]

Read More

Learning Synthetic Environments and Reward Networks for Reinforcement Learning

In supervised learning, multiple works have investigated training networks on artificial data. For instance, in dataset distillation, the information of a larger dataset is distilled into a smaller synthetic dataset in order to reduce training time. Synthetic environments (SEs) aim to apply a similar idea to reinforcement learning (RL). They are proxies for real environments […]
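
As a deliberately miniature illustration of the bi-level loop behind this idea (a toy numpy sketch, not the paper's actual method): an outer loop searches SE parameters, an inner loop trains an agent only inside the SE, and fitness is the agent's return in the real environment.

```python
# Toy bi-level sketch of synthetic environments: the outer loop searches SE
# parameters, the inner loop trains an agent purely inside the SE, and
# fitness is the agent's return on the REAL task.
import numpy as np

rng = np.random.default_rng(0)
TARGET = 0.7  # "real" task: one-step problem whose reward peaks at action 0.7


def real_return(action: float) -> float:
    return -(action - TARGET) ** 2


def train_agent_on_se(se_params: np.ndarray, steps: int = 50) -> float:
    """Inner loop: gradient ascent on the SE's synthetic reward
    r(a) = -(a - c)^2 with c = se_params[0]; returns the learned action."""
    action = 0.0
    for _ in range(steps):
        action += 0.1 * (-2.0 * (action - se_params[0]))  # dr/da
    return action


# Outer loop: (here) plain random search over SE parameters,
# scored by how well the SE-trained agent does on the real task.
best_se, best_fitness = None, -np.inf
for _ in range(100):
    se_params = rng.uniform(-1.0, 1.0, size=1)
    fitness = real_return(train_agent_on_se(se_params))
    if fitness > best_fitness:
        best_se, best_fitness = se_params, fitness

print(best_se, best_fitness)  # a good SE steers agents toward the real optimum
```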

Read More