Deep learning (DL) has celebrated many successes, but it’s still a challenge to find the right model for a given dataset — especially with a limited budget. Adapting DL models to new problems can be computationally intensive and requires comprehensive training data. On tabular data, AutoML solutions like Auto-sklearn and AutoGluon work very well. However, there is no equivalent for vision or natural language processing that selects a pretrained deep model along with the right finetuning hyperparameters. We combine AutoML, meta-learning, and pretrained models to offer two solutions that automatically select the best DL pipeline based on dataset characteristics: ZAP and Quick-Tune.
Zero-shot AutoML with Pretrained Models (ZAP)
Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How
Related to ZAP, Quick-Tune also proposes an efficient method for selecting an appropriate model and its finetuning hyperparameters, but through a meta-learned multi-fidelity performance predictor. In contrast to ZAP, Quick-Tune also exploits the training learning curves to arrive at a more sophisticated performance predictor. We empirically show that the resulting approach can quickly select a high-performing pretrained model, together with its optimal hyperparameters, from a model hub of 24 deep learning models for a new dataset.
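To give an intuition for multi-fidelity selection from a model hub, here is a minimal, self-contained sketch. The function names, the crude slope-based curve extrapolation, and the simulated learning curves are all illustrative assumptions — Quick-Tune's actual predictor is meta-learned and far richer — but the control loop captures the idea: observe partial learning curves, predict each candidate's final performance, and spend the next unit of budget on the most promising (model, hyperparameter) configuration.

```python
# Illustrative sketch of multi-fidelity pipeline selection guided by a
# performance predictor. This is NOT Quick-Tune's actual algorithm or
# predictor; it only demonstrates the budget-allocation idea.

def predict_final_accuracy(curve, horizon=50):
    """Crudely extrapolate a partial learning curve to `horizon` epochs.

    Assumes the last per-epoch improvement decays geometrically (factor
    0.9) — a stand-in for a meta-learned multi-fidelity predictor.
    """
    if len(curve) < 2:
        return curve[-1]
    slope = curve[-1] - curve[-2]
    remaining = max(horizon - len(curve), 0)
    projected_gain = slope * 0.9 * (1 - 0.9 ** remaining) / 0.1
    return min(1.0, curve[-1] + projected_gain)

def select_pipeline(candidates, total_budget):
    """Greedy multi-fidelity search over a toy "model hub".

    `candidates` maps a pipeline name to its full learning curve, which
    we reveal one epoch at a time to simulate training; each revealed
    epoch costs one unit of budget.
    """
    # Pay one epoch per candidate to get an initial observation.
    observed = {name: [curve[0]] for name, curve in candidates.items()}
    budget = total_budget - len(candidates)
    while budget > 0:
        # Only candidates with unrevealed epochs can still be trained.
        active = [n for n in observed if len(observed[n]) < len(candidates[n])]
        if not active:
            break
        # Train the candidate with the highest predicted final accuracy.
        best = max(active, key=lambda n: predict_final_accuracy(observed[n]))
        observed[best].append(candidates[best][len(observed[best])])
        budget -= 1
    # Return the incumbent: the best accuracy actually observed.
    winner = max(observed, key=lambda n: max(observed[n]))
    return winner, max(observed[winner])

if __name__ == "__main__":
    hub = {  # toy learning curves, one per (model, hyperparameter) config
        "resnet_lr1e-3": [0.5, 0.7, 0.8],
        "vit_lr1e-4": [0.2, 0.3, 0.35],
    }
    name, acc = select_pipeline(hub, total_budget=6)
    print(name, acc)
```

In this toy run the predictor steers most of the budget toward the steeper curve, so the stronger configuration is fully trained while the weaker one is only partially explored — the same budget-saving behavior Quick-Tune achieves with its learned predictor.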
You can read all the details in our full arXiv paper (currently under review at NeurIPS 2023).