Automating machine learning supports users, developers, and researchers in developing new ML applications quickly. The output of AutoML tools, however, cannot always be easily explained by human intuition or expert knowledge, and experts therefore sometimes lack trust in these tools. We therefore develop methods that improve the transparency and explainability of AutoML systems, increasing trust in AutoML tools and generating valuable insights into otherwise opaque optimization processes. Ways of explaining AutoML include:
- Hyperparameter Importance: Which hyperparameters (or other design decisions) are globally important to improve the performance of ML systems? [Hutter et al. 2014]
- Automatic Ablation Studies: If an AutoML tool started from a given configuration (e.g., one defined by the user or by the original developer of the ML algorithm at hand), which of the changes leading to the configuration returned by the AutoML tool were important for the observed performance improvement? [Biedenkapp et al. 2017]
- Visualization of Hyperparameter Effects: How can we visualize the effect of changing hyperparameter settings, both locally and globally? [Hutter et al. 2014, Biedenkapp et al. 2018, Moosbauer et al. 2021]
- Visualization of the Sampling Process: In which areas of the configuration space has an AutoML tool sampled when and why? Which performance can we expect there? [Biedenkapp et al. 2018]
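To illustrate the first question, here is a minimal sketch of estimating global hyperparameter importance from an optimization history: fit a surrogate model on evaluated configurations and read off which hyperparameters drive predicted performance. A random forest with impurity-based importances is used here as a simplified stand-in for the functional-ANOVA analysis of Hutter et al. [2014]; the hyperparameter names and the toy benchmark are illustrative assumptions, not the API of any particular AutoML tool.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy "optimization history": 200 sampled configurations of three
# hypothetical hyperparameters (columns: learning_rate, depth, dropout).
X = rng.uniform(0.0, 1.0, size=(200, 3))

# Synthetic validation loss: only learning_rate and depth matter;
# dropout (column 2) has no effect by construction.
y = (X[:, 0] - 0.3) ** 2 + 0.5 * X[:, 1]

# Surrogate model fit on (configuration, performance) pairs.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importances as a cheap proxy for global importance.
for name, imp in zip(["learning_rate", "depth", "dropout"],
                     surrogate.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

On such data, the irrelevant hyperparameter receives near-zero importance, which is exactly the kind of insight that lets an expert focus tuning effort on the decisions that matter.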
Complementary to gaining insights from AutoML is interacting with AutoML, allowing users to provide further guidance on how to find good solutions. For example, some experts have developed an intuition for good regions of hyperparameter settings.
- Human prior knowledge on the optimum: How can Bayesian optimization make efficient use of users' prior knowledge about the location of the optimum to guide its search? [Souza et al. 2021]
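A minimal sketch of this idea, in the spirit of Souza et al. [2021]: multiply a standard expected-improvement acquisition function by the user's prior density over the optimum's location, so that candidates the user believes in are preferred. The 1-D grid, the toy surrogate predictions, and the Gaussian prior below are illustrative assumptions, not the interface of any specific Bayesian optimization library.

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(0.0, 1.0, 501)      # 1-D hyperparameter range

# Toy surrogate predictions (mean, std) at each grid point; the surrogate
# believes the minimum of the loss lies near 0.2.
mu = (grid - 0.2) ** 2
sigma = np.full_like(grid, 0.2)

# Standard expected improvement (minimization) over the incumbent value.
best = mu.min()
z = (best - mu) / sigma
ei = sigma * (z * norm.cdf(z) + norm.pdf(z))

# User prior: "the optimum is probably near 0.8".
prior = norm.pdf(grid, loc=0.8, scale=0.1)

# Prior-weighted acquisition: surrogate promise and user belief both count.
acq = ei * prior
next_candidate = grid[np.argmax(acq)]
print(f"next candidate: {next_candidate:.3f}")
```

Without the prior, the next candidate would sit at the surrogate's predicted optimum (0.2); with the prior, the search is pulled toward the region the user believes in, which is how prior knowledge can speed up early iterations.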