

Meta-Learning aims to improve learning across different tasks or datasets instead of specializing in a single one. This makes meta-learning useful in a variety of applications, for example warmstarting HPO and NAS, learning dynamic hyperparameter policies across different task instances, or directly learning how to learn.

Meta-Learning in HPO & NAS

The efficiency of hyperparameter optimization and neural architecture search can be significantly improved by using meta-learning to transfer knowledge between tasks, for example by learning promising areas of the search space. This can be achieved, for instance, by using meta-features of datasets or by building meta-models based on performance data from many datasets.
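As a minimal sketch of one such transfer mechanism (all dataset names, meta-features, and configurations below are made up for illustration), a new HPO run can be warmstarted with the incumbent configuration of the dataset whose meta-features are closest:

```python
import math

# Hypothetical meta-data: per-dataset meta-features (here: #samples,
# #features) and the best-known configuration found on that dataset.
meta_data = {
    "dataset_a": {"features": (100.0, 10.0), "best_config": {"lr": 0.1, "depth": 3}},
    "dataset_b": {"features": (10000.0, 50.0), "best_config": {"lr": 0.01, "depth": 8}},
    "dataset_c": {"features": (350.0, 12.0), "best_config": {"lr": 0.05, "depth": 4}},
}

def warmstart_config(new_features):
    """Return the best-known config of the meta-feature-nearest dataset."""
    nearest = min(meta_data.values(),
                  key=lambda d: math.dist(new_features, d["features"]))
    return nearest["best_config"]

# A new task with 300 samples and 11 features is closest to dataset_c,
# so dataset_c's incumbent seeds the new HPO run.
warmstart_config((300.0, 11.0))  # -> {"lr": 0.05, "depth": 4}
```

In practice the meta-features and the similarity measure would themselves be learned or carefully engineered; the nearest-neighbour lookup only illustrates the transfer idea.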

Examples of this can be found in:

Performance Prediction

As many AutoML methods are very expensive to evaluate, we develop approaches for approximating the performance of configured methods, increasing the efficiency and thus the accessibility of AutoML in general. This increase in efficiency can relate either to AutoML optimizers or to the benchmarking of AutoML approaches.
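The core idea can be sketched with a toy surrogate (all configurations and error values below are invented): instead of training a model for every candidate configuration, a cheap predictor estimates validation error from previously evaluated configurations.

```python
import math

# Previously evaluated configurations: (learning_rate, num_layers) -> error.
observed = [
    ((0.001, 2), 0.30),
    ((0.010, 4), 0.22),
    ((0.100, 4), 0.35),
    ((0.010, 8), 0.25),
]

def predict_error(config, k=2):
    """Average the errors of the k closest evaluated configurations."""
    neighbours = sorted(observed, key=lambda o: math.dist(config, o[0]))[:k]
    return sum(err for _, err in neighbours) / k

# Rank candidate configurations by predicted error without training anything.
candidates = [(0.005, 4), (0.200, 2)]
best = min(candidates, key=predict_error)
```

Real performance predictors use far richer models (e.g. learning-curve extrapolation or neural surrogates), but the workflow is the same: predict cheaply, evaluate expensively only where it looks promising.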

Some of our Performance Prediction research:

Algorithm Selection

In algorithm selection, we learn models that map the characteristics of a task to an algorithm in order to select the best-performing algorithm for each individual task. Meta-learning challenges in algorithm selection include finding relevant features of tasks and learning a selection function that allows reliable predictions, even with little and noisy performance data.
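A minimal per-instance selector might look as follows (solver names, instance features, and runtimes are all hypothetical): it maps the features of a new instance to the most similar previously seen instance and picks the algorithm that was fastest there.

```python
import math

# Hypothetical training data: instance features (#variables, clause ratio)
# -> measured runtimes in seconds of two solvers on that instance.
runs = [
    ((50, 0.2), {"solver_a": 1.0, "solver_b": 9.0}),
    ((500, 0.8), {"solver_a": 30.0, "solver_b": 4.0}),
    ((80, 0.3), {"solver_a": 2.0, "solver_b": 8.0}),
]

def select_algorithm(features):
    """Pick the algorithm that was fastest on the most similar instance."""
    _, runtimes = min(runs, key=lambda r: math.dist(features, r[0]))
    return min(runtimes, key=runtimes.get)

select_algorithm((60, 0.25))  # small instance, similar to the first run
```

Practical systems replace the nearest-neighbour lookup with learned performance models per algorithm, but the selection function has the same shape: features in, algorithm out.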

Examples of our Algorithm Selection research:

Algorithm Configuration

In algorithm configuration, we develop methods for finding a configuration of well-performing hyperparameters of a given algorithm. In contrast to HPO, these configurations have to perform well on a set of tasks instead of only on a single one. Examples of tasks in this setting include different cross-validation splits, but also problem instances in general, such as Boolean formula (SAT) or mixed-integer programming (MIP) instances.
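The defining difference to HPO, optimizing the *average* performance over an instance set, can be shown with a toy configurator (the cost function, hyperparameters, and instances below are made up; real configurators like SMAC use model-based search instead of random search):

```python
import random

# Toy cost of running one configuration on one instance; in practice this
# would be an actual algorithm run (e.g. a SAT solver with a timeout).
def cost(config, instance):
    return (config["noise"] - instance) ** 2 + 0.1 * config["restarts"]

instances = [0.2, 0.5, 0.8]  # e.g. different problem instances or CV splits

def configure(n_trials=200, seed=0):
    """Random search for the config with the best average cost over all instances."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_trials):
        config = {"noise": rng.uniform(0, 1), "restarts": rng.randint(0, 5)}
        avg = sum(cost(config, i) for i in instances) / len(instances)
        if avg < best_cost:
            best, best_cost = config, avg
    return best

incumbent = configure()
```

Because the objective averages over the whole instance set, the incumbent is forced toward a compromise ("noise" near the middle of the instances, no costly restarts) rather than overfitting any single instance.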

Our work on Algorithm Configuration includes:

Dynamic Algorithm Configuration (DAC)

DAC focuses on learning dynamic hyperparameter schedules instead of static values. As this significantly expands the search space compared to traditional HPO, DAC is generally more computationally expensive, but it also leverages the full potential of hyperparameter settings and thus leads to even better systems. In order to learn DAC models efficiently and apply them as broadly as possible, meta-learning is a key component of DAC.
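The difference to static HPO can be sketched in a few lines (the policy below is hand-written purely for illustration; in DAC it would be learned, e.g. with reinforcement learning across many task instances): a policy maps the current optimization state to a hyperparameter value at every step.

```python
# A DAC-style dynamic schedule: the hyperparameter (here a step size) is
# re-chosen at every step based on the observed optimization state.
def policy(state):
    """Halve the step size whenever the last step made the loss worse."""
    return state["lr"] * (0.5 if state["loss_increased"] else 1.0)

def optimize(x=10.0, lr=1.5, steps=30):
    """Gradient descent on f(x) = x^2 with a dynamically adapted step size."""
    loss = x * x
    for _ in range(steps):
        x -= lr * 2 * x            # gradient step, gradient of x^2 is 2x
        new_loss = x * x
        lr = policy({"lr": lr, "loss_increased": new_loss > loss})
        loss = new_loss
    return loss

optimize()  # a static lr=1.5 diverges here; the dynamic schedule converges
```

Even this trivial reactive schedule recovers from a step size that would make static gradient descent diverge, which is exactly the kind of behaviour a learned DAC policy exploits systematically.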

DAC and related meta-learning approaches:

Learning to Learn (L2L)

In learning to learn, the entire manually designed learning process is replaced by a data-driven approach that is learned across many different tasks, such as different datasets or different network architectures. For example, in contrast to a hand-designed SGD update, an L2L system would learn how to update the weights of a neural network.
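A deliberately tiny sketch of this idea (everything below is a toy: the "learned optimizer" is a single step-size parameter, and the tasks are one-dimensional quadratics of varying curvature; real L2L systems learn a full update rule, e.g. an RNN mapping gradients to weight updates):

```python
def train(task_scale, step_size, steps=20, w=5.0):
    """Inner loop: gradient descent on f(w) = task_scale * w^2."""
    for _ in range(steps):
        w -= step_size * 2 * task_scale * w
    return task_scale * w * w  # final loss on this task

meta_train_tasks = [0.5, 1.0, 2.0]          # tasks differ in curvature
candidates = [0.01, 0.05, 0.1, 0.3, 0.6]    # possible optimizer parameters

# Outer loop: pick the update rule that works best *across* all tasks.
learned_step = min(
    candidates,
    key=lambda s: sum(train(scale, s) for scale in meta_train_tasks),
)

train(0.8, learned_step)  # apply the learned optimizer to an unseen task
```

The two nested loops are the essence of L2L: an inner loop that learns a task with the current update rule, and an outer loop that improves the update rule itself using performance across many tasks.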

Our applications of L2L: