AutoML.org

Freiburg-Hannover-Tübingen

Review of the Year 2022 (Hannover)

by the AutoML Hannover Team

The year 2022 was an exciting year for us. So much happened: at Leibniz University Hannover (LUH), we founded our new Institute of Artificial Intelligence, LUH|AI for short; Marius got tenure and was promoted to full professor; the group is growing further with our new team members Alexander […]


Learning Synthetic Environments and Reward Networks for Reinforcement Learning

In supervised learning, multiple works have investigated training networks on artificial data. For instance, in dataset distillation, the information of a larger dataset is distilled into a smaller synthetic dataset in order to reduce training time. Synthetic environments (SEs) aim to apply a similar idea to reinforcement learning (RL). They are proxies for real environments […]
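To make the bi-level structure concrete, here is a minimal, self-contained toy sketch (an illustration of the idea only, not the paper's implementation; all names such as `train_agent` are made up): an inner loop trains an agent purely inside a parameterized proxy environment, while an outer loop tunes the proxy's parameters so that the trained agent scores well on the real task.

```python
# Toy sketch of the synthetic-environment idea (not the paper's
# implementation; all names are illustrative). The "agent" is a single
# scalar action, the "SE" is a learned reward proxy, and an outer
# evolution-strategy-style loop tunes the SE so that agents trained
# inside it perform well on the real task.
import numpy as np

rng = np.random.default_rng(0)

def real_reward(action):
    # The real task: reward peaks at action = 2.0.
    return -(action - 2.0) ** 2

def synthetic_reward(action, se_params):
    # The proxy task: a quadratic whose learned optimum is se_params[0].
    return -(action - se_params[0]) ** 2

def train_agent(se_params, steps=50, step_size=0.2):
    # Inner loop: the agent is trained ONLY inside the synthetic
    # environment, here via simple hill climbing on the synthetic reward.
    action = 0.0
    for _ in range(steps):
        candidate = action + rng.normal(scale=step_size)
        if synthetic_reward(candidate, se_params) > synthetic_reward(action, se_params):
            action = candidate
    return action

# Outer loop: search over SE parameters, scored by how well the agent
# trained inside the SE performs on the REAL environment.
se_params = np.array([0.0])
for _ in range(30):
    candidates = se_params + rng.normal(scale=0.5, size=(8, 1))
    scores = [real_reward(train_agent(c)) for c in candidates]
    se_params = candidates[int(np.argmax(scores))]

print("learned SE optimum:", se_params[0])  # drifts toward the real optimum 2.0
print("real reward of SE-trained agent:", real_reward(train_agent(se_params)))
```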


Rethinking AutoML: Advancing from a Machine-Centered to Human-Centered Paradigm

In this blog post, we argue why the development of the first generation of AutoML tools ended up being less fruitful than expected and how we envision a new paradigm of automated machine learning (AutoML) that is focused on the needs and workflows of ML practitioners and data scientists. The Vision of AutoML: The last […]


TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second

A radically new approach to tabular classification: we introduce TabPFN, a new tabular data classification method that takes less than one second and yields state-of-the-art performance (competitive with the best AutoML pipelines given an hour). So far, it is limited in scale, though: it can only tackle problems with up to 1,000 training examples, 100 features and […]
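As a hedged usage sketch, this is roughly how TabPFN is called, assuming the scikit-learn-style interface from the TabPFN repository; exact class and argument names may differ between versions.

```python
# Hedged sketch: assumes the scikit-learn-style TabPFNClassifier interface
# from the TabPFN repository; argument names may differ across versions.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

# A small dataset (569 samples, 30 features) fits within TabPFN's limits.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier(device="cpu")
clf.fit(X_train, y_train)  # no gradient training; fitting essentially stores the data
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Note that fitting is nearly instant: the transformer was pre-trained once, offline, and classification happens in a single forward pass over the provided training set and the query points.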


DEHB

DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization

By Noor Awad

Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of hyperparameter optimization (HPO) more important than ever. We believe that a practical, general HPO method must fulfill many desiderata, including: (1) strong anytime performance, […]
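To convey the core mechanism, Hyperband-style successive halving in which the configurations promoted to the next budget are evolved with differential evolution instead of being resampled at random, here is a toy sketch; it illustrates the idea only and is not the released dehb package.

```python
# Toy sketch of the DEHB idea (illustration only, not the dehb package):
# successive halving over a budget ladder, where survivors at each rung
# are refined by a differential-evolution step rather than resampled.
import numpy as np

rng = np.random.default_rng(0)

def objective(x, budget):
    # Toy objective: a noisy sphere; higher budgets give less noisy estimates.
    return float(np.sum(x ** 2) + rng.normal(scale=1.0 / budget))

def de_step(pop, budget, F=0.5, CR=0.9):
    # One DE generation: rand/1 mutation, binomial crossover, and greedy
    # selection of the child against its parent.
    new_pop = []
    for i, parent in enumerate(pop):
        others = [j for j in range(len(pop)) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), -5, 5)
        mask = rng.random(parent.size) < CR
        child = np.where(mask, mutant, parent)
        better = objective(child, budget) < objective(parent, budget)
        new_pop.append(child if better else parent)
    return np.array(new_pop)

dim, eta = 2, 3
pop = rng.uniform(-5, 5, size=(9, dim))  # initial population at the lowest rung
budget = 1
while budget <= 9:                       # budgets 1, 3, 9: a Hyperband-style ladder
    scores = [objective(x, budget) for x in pop]
    keep = max(len(pop) // eta, 4)       # successive halving: promote the best
    pop = pop[np.argsort(scores)[:keep]]
    pop = de_step(pop, budget)           # evolve survivors instead of resampling
    budget *= eta

print("best found:", min(pop, key=lambda x: float(np.sum(x ** 2))))
```

The DE step is what gives the method its anytime behavior: information gathered at cheap budgets shapes the population that is evaluated at expensive ones.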


Deep Learning 2.0: Extending the Power of Deep Learning to the Meta-Level

Deep Learning (DL) has been able to revolutionize learning from raw data (images, text, speech, etc.) by replacing domain-specific hand-crafted features with features that are jointly learned for the particular task at hand. In this blog post, I propose to take deep learning to the next level by also jointly (meta-)learning other, currently hand-crafted, elements […]


Introducing Reproducibility Reviews

By Frank Hutter, Isabelle Guyon, Marius Lindauer and Mihaela van der Schaar (general and program chairs of AutoML-Conf 2022)

Have you ever tried to reproduce a paper from a top ML conference and failed to do so? You’re not alone! At AutoML-Conf (see automl.cc), we’re aiming for a higher standard: with the papers we publish […]


Announcing the Automated Machine Learning Conference 2022

Modern machine learning systems come with many design decisions (including hyperparameters, neural network architectures and the entire data processing pipeline), and the idea of automating these decisions gave rise to the research field of automated machine learning (AutoML). AutoML has been booming over the last decade, with hundreds of papers now published each year […]


CARL: A benchmark to study generalization in Reinforcement Learning

TL;DR: CARL is a benchmark for contextual RL (cRL). In cRL, we aim to generalize over different contexts. With CARL, we saw that varying the context makes learning more difficult and that making the context explicit can facilitate learning. CARL makes the context that defines the environment's behavior visible and configurable. This […]
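A toy wrapper can illustrate what "visible and configurable context" means in practice (this is a hypothetical sketch, not CARL's actual API): a context dictionary sets the environment's physics before each episode and is appended to the observation so the agent can condition on it.

```python
# Hypothetical sketch of the contextual-RL idea (NOT CARL's actual API):
# a context dict configures the physics of the underlying environment and
# is appended to the observation, making it both configurable and visible.
import gymnasium as gym
import numpy as np

class ContextualCartPole(gym.Wrapper):
    def __init__(self, context):
        super().__init__(gym.make("CartPole-v1"))
        self.context = context  # e.g. {"gravity": 9.8}

    def reset(self, **kwargs):
        # Set the underlying physics from the context before each episode.
        self.env.unwrapped.gravity = self.context["gravity"]
        obs, info = self.env.reset(**kwargs)
        return self._with_context(obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        return self._with_context(obs), reward, terminated, truncated, info

    def _with_context(self, obs):
        # Making the context explicit: append it to the observation.
        return np.append(obs, self.context["gravity"]).astype(np.float32)

# Studying generalization means training/evaluating across many contexts.
for gravity in (4.9, 9.8, 19.6):
    env = ContextualCartPole({"gravity": gravity})
    obs, _ = env.reset(seed=0)
    print(f"gravity={gravity}: observation has {obs.shape[0]} dims")  # 4 state + 1 context
```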


HPOBench: Compare Multi-fidelity Optimization Algorithms with Ease

For researching and developing new hyperparameter optimization (HPO) methods, a good collection of benchmark problems, ideally relevant, realistic and cheap to evaluate, is a very valuable resource. While such collections exist for synthetic problems (COCO) or simple HPO problems (Bayesmark), to the best of our knowledge there is no such collection for multi-fidelity benchmarks. With ever-growing machine […]
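For a flavor of how such benchmarks are queried, here is a hedged sketch assuming the interface described in the HPOBench repository; the module path, the task id, and the way fidelities are passed are assumptions that may vary across benchmarks and versions.

```python
# Hedged sketch of querying an HPOBench benchmark at two fidelities; the
# module path, task_id and return keys are assumptions based on the
# HPOBench repository and may differ across versions.
from hpobench.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark

benchmark = XGBoostBenchmark(task_id=167149, rng=1)  # an OpenML task id (assumption)

config = benchmark.get_configuration_space(seed=1).sample_configuration()
fidelity_space = benchmark.get_fidelity_space(seed=1)

# Evaluate the same configuration at a sampled (usually cheap) fidelity and
# at the default (usually full) fidelity; results report value and cost.
for fidelity in (fidelity_space.sample_configuration(),
                 fidelity_space.get_default_configuration()):
    result = benchmark.objective_function(configuration=config, fidelity=fidelity)
    print(result["function_value"], "cost:", result["cost"])
```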
