AutoML.org

Freiburg-Hannover-Tübingen

Incorporating Structure in Deep Reinforcement Learning

Authors: Aditya Mohan, Amy Zhang, Marius Lindauer

Deep Reinforcement Learning (RL) has driven significant advances in various fields, from playing complex games to controlling robotic systems. However, its application in real-world scenarios still faces numerous challenges, such as poor data efficiency, limited generalization, and a lack of safety guarantees. This work provides a comprehensive overview of […]

Read More

Position: A Call to Action for a Human-Centered AutoML Paradigm

Paper Authors: Marius Lindauer, Florian Karl, Anne Klier, Julia Moosbauer, Alexander Tornede, Andreas Mueller, Frank Hutter, Matthias Feurer, Bernd Bischl

Motivation

Automated Machine Learning (AutoML) has significantly transformed the machine learning landscape by automating the creation and optimization of ML models. This has opened up a path towards democratized ML, making it accessible to a […]

Read More

Review of the Year 2023 – AutoML Hannover

by the AutoML Hannover Team

The year 2023 was the most successful yet for us as a (still relatively young) AutoML group in Hannover. With the start of several big projects, including the ERC Starting Grant on interactive and explainable AutoML and a BMUV-funded project on Green AutoML, the group has grown and we were able […]

Read More

New Horizons in Parameter Regularization: A Constraint Approach

Authors: Jörg Franke, Michael Hefenbrock, Gregor Koehler, Frank Hutter

Introduction

In our recent preprint, we present a novel approach to parameter regularization for deep learning: Constrained Parameter Regularization (CPR), an alternative to traditional weight decay. Instead of applying a constant penalty uniformly to all parameters, we enforce an upper bound on a statistical […]
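As a rough illustration of the constrained view (the preprint has the full method), here is a minimal PyTorch sketch. The choice of statistic (mean squared weight), the bound value kappa, and the hard rescaling step are illustrative assumptions; CPR itself enforces the bound with a Lagrangian-style update rather than the projection used here.

    # Minimal sketch of bounding a per-matrix statistic instead of uniformly
    # decaying all weights. Assumption: the statistic is the mean squared
    # value of each weight matrix, enforced by rescaling after each step.
    import torch

    def enforce_upper_bound(model: torch.nn.Module, kappa: float = 0.01) -> None:
        """Rescale any weight matrix whose mean squared value exceeds kappa."""
        with torch.no_grad():
            for name, param in model.named_parameters():
                if param.dim() < 2:  # skip biases / norm layers, as weight decay often does
                    continue
                stat = param.pow(2).mean()
                if stat > kappa:
                    param.mul_((kappa / stat).sqrt())

    # Usage inside a training loop (in place of a constant weight-decay penalty):
    model = torch.nn.Linear(16, 4)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)  # note: no weight_decay
    x, y = torch.randn(8, 16), torch.randn(8, 4)
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    enforce_upper_bound(model, kappa=0.01)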

Read More

Rethinking Performance Measures of RNA Secondary Structure Problems

TL;DR

In our NeurIPS workshop paper, we analyze different performance measures for the evaluation of RNA secondary structure prediction algorithms, showing that commonly used measures are flawed in certain settings. We then propose the Weisfeiler-Lehman graph kernel as a suitable measure for performance assessment in the field.

RNA Secondary Structure Prediction

Ribonucleic acid (RNA) […]
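To make the proposal concrete, here is a self-contained sketch of scoring a predicted secondary structure against a reference with a Weisfeiler-Lehman subtree kernel. The graph encoding (backbone plus base-pair edges), the degree-based initial labels, and the cosine normalization are illustrative assumptions, not the exact setup from the paper.

    # Sketch: each structure becomes a graph whose nodes are bases and whose
    # edges are the backbone plus the base pairs; WL relabeling iterations
    # build label histograms, which are compared with a normalized dot product.
    from collections import Counter

    def structure_graph(n: int, pairs: list[tuple[int, int]]) -> dict[int, list[int]]:
        adj = {i: [] for i in range(n)}
        for i in range(n - 1):          # backbone edges
            adj[i].append(i + 1)
            adj[i + 1].append(i)
        for i, j in pairs:              # base-pair edges
            adj[i].append(j)
            adj[j].append(i)
        return adj

    def wl_histogram(adj: dict[int, list[int]], iterations: int = 3) -> Counter:
        labels = {v: str(len(adj[v])) for v in adj}   # initial label: degree
        hist = Counter(labels.values())
        for _ in range(iterations):
            labels = {v: labels[v] + "|" + ".".join(sorted(labels[u] for u in adj[v]))
                      for v in adj}
            hist.update(labels.values())
        return hist

    def wl_similarity(h1: Counter, h2: Counter) -> float:
        dot = sum(h1[k] * h2[k] for k in h1)
        norm = (sum(c * c for c in h1.values()) * sum(c * c for c in h2.values())) ** 0.5
        return dot / norm

    true_pairs = [(0, 9), (1, 8), (2, 7)]
    pred_pairs = [(0, 9), (1, 8), (3, 6)]
    g_true, g_pred = structure_graph(10, true_pairs), structure_graph(10, pred_pairs)
    print(wl_similarity(wl_histogram(g_true), wl_histogram(g_pred)))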

Read More

LC-PFN: Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks

Authors: Steven Adriaensen*, Herilalaina Rakotoarison*, Samuel Müller, and Frank Hutter

TL;DR

In our paper, we propose LC-PFN, a novel method for Bayesian learning curve extrapolation. LC-PFN is a prior-data-fitted network (PFN): a transformer trained on synthetic learning curve data that performs Bayesian learning curve extrapolation in a single forward pass. We show that our […]
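To illustrate what "extrapolation in a single forward pass" means at the interface level, here is a toy PyTorch sketch. The untrained encoder, the quantile head, and all shapes are assumptions for illustration only; the actual LC-PFN is a trained PFN with its own architecture and output distribution.

    # Toy sketch of the LC-PFN interface: a transformer ingests the observed
    # prefix of a learning curve and emits a predictive distribution for the
    # remaining steps in one forward pass. Untrained and simplified.
    import torch
    import torch.nn as nn

    class ToyCurveExtrapolator(nn.Module):
        def __init__(self, d_model: int = 64, quantiles: int = 3):
            super().__init__()
            self.embed = nn.Linear(3, d_model)   # (step, value, observed-flag) per point
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, quantiles)

        def forward(self, t, y, mask):
            # mask is 1 where the curve value is observed, 0 at query steps
            x = torch.stack([t, y * mask, mask], dim=-1)
            h = self.encoder(self.embed(x))
            return self.head(h)                  # per-step quantile predictions

    steps = torch.linspace(0, 1, 50).unsqueeze(0)    # normalized epochs
    curve = 1 - torch.exp(-4 * steps)                # synthetic accuracy curve
    mask = (steps <= 0.4).float()                    # first 40% observed
    model = ToyCurveExtrapolator()
    with torch.no_grad():
        q = model(steps, curve, mask)                # single forward pass
    print(q.shape)  # (1, 50, 3): low/median/high quantiles per step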

Read More

Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars

Authors: Simon Schrodi, Danny Stoll, Binxin Ru, Rhea Sanjay Sukthanker, Thomas Brox, and Frank Hutter

TL;DR

We take a functional view of neural architecture search that allows us to construct highly expressive search spaces based on context-free grammars, and show that we can efficiently find well-performing architectures.

NAS is great, but…

The neural architecture plays […]
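For a flavor of the construction, here is a toy context-free grammar over architecture strings together with a sampler. The grammar and its depth cutoff are made up for illustration; the paper's hierarchical search spaces are far more expressive.

    # Sketch: a context-free grammar whose derivations are architecture
    # descriptions, plus a sampler that expands the start symbol. Toy example.
    import random

    GRAMMAR = {
        "ARCH":  [["BLOCK"], ["BLOCK", "ARCH"]],                 # sequence of blocks
        "BLOCK": [["conv3x3", "ACT"], ["conv1x1", "ACT"], ["skip"]],
        "ACT":   [["relu"], ["gelu"]],
    }

    def sample(symbol: str, rng: random.Random, depth: int = 0) -> list[str]:
        if symbol not in GRAMMAR:                 # terminal symbol
            return [symbol]
        # bias toward the first (shortest) production once deep, so sampling terminates
        rules = GRAMMAR[symbol] if depth < 5 else GRAMMAR[symbol][:1]
        out = []
        for s in rng.choice(rules):
            out.extend(sample(s, rng, depth + 1))
        return out

    rng = random.Random(0)
    for _ in range(3):
        print(sample("ARCH", rng))   # e.g. ['conv3x3', 'relu', 'skip']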

Read More

Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

Deep learning is applied to a wide variety of socially consequential domains, e.g., credit scoring, fraud detection, hiring decisions, criminal recidivism, loan repayment, and face recognition, with many of these applications impacting people's lives more than ever, often in biased ways. Dozens of formal definitions of fairness have been proposed, and many algorithmic […]

Read More

Symbolic Explanations for Hyperparameter Optimization

Authors: Sarah Segel, Helena Graf, Alexander Tornede, Bernd Bischl, and Marius Lindauer

TL;DR

We propose to apply symbolic regression in a hyperparameter optimization setting to obtain explicit formulas providing simple and interpretable explanations of the effects of hyperparameters on model performance.

HPO is great, but…

In the field of machine learning, hyperparameter optimization (HPO) […]
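For a concrete feel of the idea, here is a sketch that fits an explicit formula to synthetic (hyperparameter, validation error) data. The use of gplearn as the symbolic regression backend and the toy response surface are assumptions for illustration, not the paper's experimental setup.

    # Sketch: symbolic regression recovers an explicit, interpretable formula
    # relating a hyperparameter to model performance. Synthetic data only.
    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    rng = np.random.default_rng(0)
    log_lr = rng.uniform(-5, 0, size=200).reshape(-1, 1)   # log10(learning rate)
    # synthetic validation error: quadratic in log-lr plus noise
    error = (log_lr[:, 0] + 2.5) ** 2 * 0.02 + 0.1 + rng.normal(0, 0.005, 200)

    sr = SymbolicRegressor(
        population_size=1000,
        generations=10,
        function_set=("add", "sub", "mul", "div"),
        parsimony_coefficient=0.01,   # prefer shorter, more interpretable formulas
        random_state=0,
    )
    sr.fit(log_lr, error)
    print(sr._program)   # an explicit expression in X0 = log10(learning rate)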

Read More

Self-Adjusting Bayesian Optimization with SAWEI

By Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, and Marius Lindauer

TL;DR

In Bayesian optimization, we self-adjust the exploration-exploitation trade-off of the acquisition function online, adapting to the problem landscape at hand.

Motivation

Bayesian optimization (BO) encompasses a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. However, BO itself has numerous design […]
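As background for the trade-off being adjusted, here is a sketch of Weighted Expected Improvement (WEI), where a weight alpha balances the exploitation and exploration terms of standard EI. The fixed alpha schedule in the usage example is a simplified stand-in for SAWEI's online self-adjustment rule.

    # Sketch: Weighted Expected Improvement for minimization. alpha weights
    # the improvement (exploitation) term against the uncertainty
    # (exploration) term; alpha = 0.5 recovers standard EI up to a factor.
    import numpy as np
    from scipy.stats import norm

    def weighted_ei(mu, sigma, f_best, alpha):
        """WEI(x) = alpha * exploitation term + (1 - alpha) * exploration term."""
        sigma = np.maximum(sigma, 1e-12)
        z = (f_best - mu) / sigma
        exploit = (f_best - mu) * norm.cdf(z)
        explore = sigma * norm.pdf(z)
        return alpha * exploit + (1 - alpha) * explore

    # Surrogate predictions at three candidate points:
    mu = np.array([0.2, 0.5, 0.4])
    sigma = np.array([0.05, 0.3, 0.1])
    for alpha in (0.2, 0.5, 0.8):   # stand-in for an online adjustment of alpha
        print(alpha, weighted_ei(mu, sigma, f_best=0.3, alpha=alpha))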

Read More