xAutoML: Explainable AutoML

Automating machine learning supports users, developers, and researchers in building new ML applications quickly. The output of AutoML tools, however, cannot always be explained by human intuition or expert knowledge, and experts therefore sometimes lack trust in AutoML tools. To address this, we develop methods that improve the transparency and explainability of AutoML systems, increasing trust in AutoML tools and generating valuable insights into otherwise opaque optimization processes. Ways of explaining AutoML include:

  • Hyperparameter Importance: Which hyperparameters (or other design decisions) are globally important to improve the performance of ML systems? [Hutter et al. 2014]
  • Automatic Ablation Studies: If an AutoML tool started from a given configuration (e.g., one defined by the user or by the original developer of the ML algorithm at hand), which of the changes on the way to the configuration returned by the AutoML tool were important for the observed performance improvement? [Biedenkapp et al. 2017] (A minimal sketch of such an ablation follows this list.)
  • Visualization of Hyperparameter Effects: How can we visualize the effect of changing hyperparameter settings, both locally and globally? [Hutter et al. 2014, Biedenkapp et al. 2018]
  • Visualization of the Sampling Process: Which areas of the configuration space did an AutoML tool sample, when, and why? What performance can we expect there? [Biedenkapp et al. 2018]
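
The ablation idea above can be illustrated with a short, self-contained sketch: starting from the default configuration, greedily flip one parameter at a time to its optimized value, always choosing the flip that most improves the (estimated) performance. The `evaluate` callback, the configurations, and the toy cost function below are hypothetical stand-ins; the actual method of Biedenkapp et al. 2017 evaluates flips with an empirical performance model trained on the data collected during the AutoML run.

```python
# Greedy ablation from a default to an optimized configuration (sketch).
# `evaluate` is a hypothetical stand-in for estimating the cost of a
# configuration, e.g. via a surrogate model; lower cost is assumed to be better.

def greedy_ablation(default, incumbent, evaluate):
    """Flip one parameter at a time from `default` towards `incumbent`,
    always choosing the flip that reduces the estimated cost the most."""
    current = dict(default)
    remaining = [p for p in incumbent if incumbent[p] != default.get(p)]
    path = [("default", evaluate(current))]
    while remaining:
        candidates = []
        for p in remaining:                      # try every remaining flip
            trial = dict(current)
            trial[p] = incumbent[p]
            candidates.append((evaluate(trial), p, trial))
        cost, best_p, current = min(candidates, key=lambda c: c[0])
        remaining.remove(best_p)
        path.append((best_p, cost))              # record the flip and resulting cost
    return path


# Hypothetical usage with a toy cost function (not a real AutoML run):
default = {"learning_rate": 0.1, "max_depth": 3, "n_estimators": 100}
incumbent = {"learning_rate": 0.01, "max_depth": 8, "n_estimators": 500}
toy_cost = lambda c: (c["learning_rate"] - 0.01) ** 2 + 1.0 / c["n_estimators"]
for step, cost in greedy_ablation(default, incumbent, toy_cost):
    print(f"{step:>15s} -> estimated cost {cost:.4f}")
```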

Our xAutoML Packages

CAVE automatically generates a report website that includes a variety of different xAutoML analyses by reading in the configurations observed during an AutoML run; see also here.
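
As a rough sketch of how this looks in practice, CAVE can be pointed at the output folder of, e.g., a SMAC run. The paths below are placeholders and the exact command-line flags may differ between CAVE versions, so consult `cave --help` for the installed version.

```python
# Hedged sketch: calling the CAVE command-line tool on the output of an AutoML
# run. "smac3_output/run_1" and "CAVE_report" are placeholder paths; flags may
# differ between CAVE versions, so consult `cave --help`.
import subprocess

subprocess.run(
    ["cave",
     "--folders", "smac3_output/run_1",  # observed configurations / run history
     "--output", "CAVE_report"],         # directory for the generated report
    check=True,
)
```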

BOAH combines CAVE with our AutoML tools in a Jupyter notebook, allowing efficient study of different AutoML approaches and visualizations; see also here.

PyImp is the backbone of CAVE, implementing different hyperparameter importance analysis techniques.

fANOVA implements a functional ANOVA to quantify how much of the performance variance is explained by single hyperparameters or by interaction effects; see also here.
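
A minimal, hedged sketch of the fANOVA Python interface, following the usage documented in the fanova repository: X holds one row per evaluated configuration and one column per hyperparameter, Y holds the corresponding performance values. The synthetic data below is purely illustrative, and depending on the installed version, passing a ConfigSpace object describing the hyperparameters may be recommended.

```python
# Quantifying explained variance with fANOVA on synthetic data (illustrative only).
import numpy as np
from fanova import fANOVA

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))        # 200 evaluated configurations, 3 hyperparameters
Y = X[:, 0] ** 2 + 0.1 * X[:, 1] + 0.01 * rng.normal(size=200)  # hyperparameter 0 dominates

f = fANOVA(X, Y)
print(f.quantify_importance((0,)))    # variance explained by hyperparameter 0 alone
print(f.quantify_importance((0, 1)))  # ... and by the pair (0, 1), incl. their interaction
```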