Neural Architecture Search


Neural Architecture Search (NAS) automates the design of neural network architectures. NAS approaches optimize the topology of a network, including how nodes are connected and which operators are chosen. User-defined optimization metrics, such as accuracy, model size, or inference time, can be used to arrive at an optimal architecture for a specific application. Because the search space is extremely large, traditional evolutionary and reinforcement-learning-based AutoML algorithms tend to be computationally expensive. Recent research has therefore focused on more efficient approaches to NAS; in particular, gradient-based and multi-fidelity methods have provided a promising path and boosted research in these directions.
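As a minimal illustration of how such a discrete search space can be explored, the sketch below runs random search over a toy space of operator choices. The operator names and the scoring function are hypothetical stand-ins for real training and evaluation, which would dominate the cost in practice.

```python
import random

# Toy search space: for each of three edges in a cell, pick one operator.
# The operator names are illustrative, not tied to any specific NAS paper.
OPERATORS = ["conv3x3", "conv1x1", "maxpool", "skip"]
NUM_EDGES = 3

def sample_architecture(rng):
    """Sample one architecture as a tuple of operator choices."""
    return tuple(rng.choice(OPERATORS) for _ in range(NUM_EDGES))

def proxy_score(arch):
    """Hypothetical stand-in for 'train and evaluate the network':
    a cheap deterministic score so the example runs in milliseconds."""
    return sum(len(op) for op in arch) / (len("conv3x3") * NUM_EDGES)

def random_search(num_samples=50, seed=0):
    """Keep the best of `num_samples` randomly drawn architectures."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_samples):
        arch = sample_architecture(rng)
        score = proxy_score(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Random search is a surprisingly strong NAS baseline precisely because each evaluation is independent; the efficient methods discussed above aim to beat it by sharing information across evaluations.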

Literature on NAS

NAS is one of the booming subfields of AutoML, and the number of papers is growing quickly. To give a comprehensive overview of recent trends, we provide the following sources:


Built on the well-known deep-learning framework PyTorch, Auto-PyTorch automatically optimizes both the neural architecture and the hyperparameter configuration. To this end, it combines ideas from efficient multi-fidelity optimization, meta-learning, and ensembling.
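The kind of multi-fidelity optimization Auto-PyTorch builds on can be sketched with successive halving: evaluate many configurations at a small budget (e.g. few epochs), then repeatedly promote the better half to a doubled budget. The candidate representation and evaluation function below are hypothetical; Auto-PyTorch's actual implementation is more involved.

```python
def successive_halving(candidates, evaluate, min_budget=1, max_budget=8):
    """Successive halving: start all candidates at min_budget, keep the
    best half at each rung, and double the budget until max_budget."""
    budget = min_budget
    pool = list(candidates)
    while budget <= max_budget and len(pool) > 1:
        scored = sorted(pool, key=lambda c: evaluate(c, budget), reverse=True)
        pool = scored[: max(1, len(scored) // 2)]
        budget *= 2
    return pool[0]

# Hypothetical evaluation: the score approaches a per-candidate "quality"
# value (standing in for validation accuracy) as the budget grows.
def evaluate(candidate, budget):
    quality = candidate / 10.0
    return quality * (1 - 0.5 ** budget)

best = successive_halving(range(1, 9), evaluate)  # best == 8
```

The appeal is that weak configurations are discarded after spending only the small initial budget on them, so most of the compute goes to the promising ones.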


NASLib is a Neural Architecture Search (NAS) library. Its purpose is to facilitate NAS research for the community by providing interfaces to several state-of-the-art NAS search spaces.
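To make the notion of a "search space" concrete, here is a minimal cell-based search space in the spirit of common NAS benchmarks: a DAG over a few nodes given by an upper-triangular adjacency structure, plus one operation label per intermediate node. This interface is purely illustrative and is not NASLib's actual API.

```python
class CellSearchSpace:
    """A toy cell search space: a DAG on `num_nodes` nodes (edges only go
    from lower to higher index, so it is acyclic by construction), with
    one operation chosen per intermediate node."""

    def __init__(self, num_nodes=4, ops=("conv", "pool", "identity")):
        self.num_nodes = num_nodes
        self.ops = ops

    def all_edges(self):
        # Every possible directed edge i -> j with i < j.
        return [(i, j)
                for i in range(self.num_nodes)
                for j in range(i + 1, self.num_nodes)]

    def size(self):
        # Each edge is present or absent; each of the num_nodes - 2
        # intermediate nodes (input and output nodes are fixed) picks
        # one operation.
        return (2 ** len(self.all_edges())) * (len(self.ops) ** (self.num_nodes - 2))

space = CellSearchSpace()
# 6 possible edges and 2 intermediate nodes: 2**6 * 3**2 = 576 cells.
```

Even this tiny space has 576 distinct cells; realistic search spaces are many orders of magnitude larger, which is what makes shared interfaces for defining and sampling them so useful.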

Benchmarks for NAS

Research on NAS is often very expensive because training and evaluating a single deep neural network can take anywhere from minutes to days. We therefore provide several benchmark packages for NAS, offering either tabular or surrogate benchmarks that allow efficient research on NAS.
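The reason tabular benchmarks make NAS research cheap is that they replace training with a table lookup of precomputed results. The sketch below uses a made-up toy table; real tabular benchmarks store the metrics of networks that were actually trained once, so every later query is essentially free.

```python
# Toy tabular benchmark: architecture encoding -> precomputed metrics.
# These numbers are invented for illustration only; a real benchmark
# would contain results for every architecture in its search space.
TABLE = {
    ("conv3x3", "conv3x3", "skip"): {"val_acc": 0.94, "train_secs": 310.0},
    ("conv3x3", "maxpool", "skip"): {"val_acc": 0.91, "train_secs": 250.0},
    ("conv1x1", "maxpool", "skip"): {"val_acc": 0.88, "train_secs": 190.0},
}

def query(arch, metric="val_acc"):
    """Return the precomputed metric for an architecture in O(1),
    instead of training the network for hours."""
    return TABLE[arch][metric]

# A NAS method can now 'evaluate' thousands of architectures instantly.
best = max(TABLE, key=query)
```

Surrogate benchmarks take this one step further: instead of a finite table, a regression model predicts the metrics, so arbitrary architectures in the space can be queried.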

Best practices for NAS Research

The rapid development of new NAS approaches makes them hard to compare against each other. To ensure reliable and reproducible results, we also provide best practices for scientific research on NAS and our checklist for new NAS papers.