Maintained by Difan Deng and Marius Lindauer.
The following list covers papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity in different domains. For a comprehensive list of papers focusing on neural architecture search for Transformer-based search spaces, the awesome-transformer-search repo is all you need.
2025
Randive, Pallavi; Bhagat, Madhuri S.; Bhorkar, Mangesh P.; Bhagat, Rajesh M.; Vinchurkar, Shilpa M.; Shelare, Sagar; Sharma, Shubham; Beemkumar, N.; Hemalatha, S.; Kumar, Parveen; Kedia, Ankit; Massoud, Ehab El Sayed; Gupta, Deepak; Lozanovic, Jasmina
Adaptive optimization of natural coagulants using hybrid machine learning approach for sustainable water treatment Journal Article
In: Scientific Reports, 2025.
@article{nokey,
title = {Adaptive optimization of natural coagulants using hybrid machine learning approach for sustainable water treatment},
author = {
Pallavi Randive and Madhuri S. Bhagat and Mangesh P. Bhorkar and Rajesh M. Bhagat and Shilpa M. Vinchurkar and Sagar Shelare and Shubham Sharma and N. Beemkumar and S. Hemalatha and Parveen Kumar and Ankit Kedia and Ehab El Sayed Massoud and Deepak Gupta and Jasmina Lozanovic
},
url = {https://www.nature.com/articles/s41598-025-96750-9},
year = {2025},
date = {2025-05-02},
urldate = {2025-05-02},
journal = {Scientific Reports},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Alagoz, Baris Baykant; Keles, Cemal; Ates, Abdullah; Özdemir, Edanur; Alkhulaifi, Nasser
Optimal deep neural network architecture design with improved generalization for data-driven cooling load estimation problem Journal Article
In: Neural Computing and Applications, 2025.
@article{nokey,
title = {Optimal deep neural network architecture design with improved generalization for data-driven cooling load estimation problem},
author = {
Baris Baykant Alagoz and Cemal Keles and Abdullah Ates and Edanur Özdemir and Nasser Alkhulaifi
},
url = {https://link.springer.com/article/10.1007/s00521-025-11212-7},
year = {2025},
date = {2025-05-02},
urldate = {2025-05-02},
journal = {Neural Computing and Applications},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Xukun; Lv, Haoze; Ma, Fenglong; Wang, Chi; Xu, Dongkuan (DK)
DyESP: Accelerating Hyperparameter-Architecture Search via Dynamic Exploration and Space Pruning Journal Article
In: Proceedings of the AAAI Symposium Series, vol. 5, no. 1, pp. 172-179, 2025.
@article{Liu_Lv_Ma_Wang_Xu_2025,
title = {DyESP: Accelerating Hyperparameter-Architecture Search via Dynamic Exploration and Space Pruning},
author = {Xukun Liu and Haoze Lv and Fenglong Ma and Chi Wang and Dongkuan (DK) Xu},
url = {https://ojs.aaai.org/index.php/AAAI-SS/article/view/35585},
doi = {10.1609/aaaiss.v5i1.35585},
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
journal = {Proceedings of the AAAI Symposium Series},
volume = {5},
number = {1},
pages = {172-179},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gong, Tao; Wang, Liang; Ma, Yongjie
Evolutionary neural architecture search based on a modified particle swarm optimization Journal Article
In: International Journal of Machine Learning and Cybernetics, 2025.
@article{nokey,
title = {Evolutionary neural architecture search based on a modified particle swarm optimization},
author = {
Tao Gong and Liang Wang and Yongjie Ma
},
url = {https://link.springer.com/article/10.1007/s13042-025-02686-x},
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
journal = { International Journal of Machine Learning and Cybernetics },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Su, Jun; Tang, Chaolong; Liu, Zhiquan
A Prediction Optimization Method with Federated Learning and Neural Architecture Search for Distributed Renewable Energy Sources Journal Article
In: Distributed Generation & Alternative Energy Journal, vol. 40, no. 02, pp. 213–238, 2025.
@article{Su_Tang_Liu_2025,
title = {A Prediction Optimization Method with Federated Learning and Neural Architecture Search for Distributed Renewable Energy Sources},
author = {Jun Su and Chaolong Tang and Zhiquan Liu},
url = {https://journals.riverpublishers.com/index.php/DGAEJ/article/view/27029},
doi = {10.13052/dgaej2156-3306.4021},
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
journal = {Distributed Generation & Alternative Energy Journal},
volume = {40},
number = {02},
pages = {213–238},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Meyer-Lee, Gabriel
Towards Practical Automation of Neural Architecture Design PhD Thesis
2025.
@phdthesis{nokey,
title = { Towards Practical Automation of Neural Architecture Design},
author = {Gabriel Meyer-Lee},
url = {https://scholarworks.uvm.edu/graddis/2067/},
year = {2025},
date = {2025-05-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Melo, Thadeu Pezzin; Andrade, Jefferson Oliveira; Komati, Karin Satie
A Pipeline for Multivariate Time Series Forecasting of Gas Consumption in Pelletization Process Journal Article
In: CLEIej, 2025.
@article{nokey,
title = { A Pipeline for Multivariate Time Series Forecasting of Gas Consumption in Pelletization Process },
author = { Thadeu Pezzin Melo and Jefferson Oliveira Andrade and Karin Satie Komati },
url = {https://doi.org/10.19153/cleiej.28.3.2 },
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
journal = {CLEIej},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ranjan, Vivek; Pal, Riya; Saini, Prashant; Nirala, Divyanshu Raj
Neural Architecture Search: Designing Automated AI Models Miscellaneous
2025.
@misc{vivek_ranjan_2025_15423210,
title = {Neural Architecture Search: Designing Automated AI Models},
author = {Vivek Ranjan and Riya Pal and Prashant Saini and Divyanshu Raj Nirala},
url = {https://doi.org/10.5281/zenodo.15423210},
doi = {10.5281/zenodo.15423210},
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
publisher = {Institute for Interdisciplinary and Venture Publication},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
M, V T Ram Pavan Kumar; Shieh, Chin-Shiuh; S, Siva Shankar; Chakrabarti, Prasun
PNASFH-Net: Parallel NAS forward harmonic Network for colon cancer detection using CT images Journal Article
In: Future Technology, vol. 4, no. 2, pp. 76–91, 2025.
@article{KumarM_Shieh_ShankarS_Chakrabarti_2025,
title = {PNASFH-Net: Parallel NAS forward harmonic Network for colon cancer detection using CT images},
author = {V T Ram Pavan Kumar M and Chin-Shiuh Shieh and Siva Shankar S and Prasun Chakrabarti},
url = {https://fupubco.com/futech/article/view/317},
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
journal = {Future Technology},
volume = {4},
number = {2},
pages = {76–91},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chang, Lei; Rani, Shalli; Akbar, Muhammad Azeem
ChampionNet: a transformer-enhanced neural architecture search framework for athletic performance prediction and training optimization Journal Article
In: Discover Computing, 2025.
@article{nokey,
title = {ChampionNet: a transformer-enhanced neural architecture search framework for athletic performance prediction and training optimization},
author = {
Lei Chang and Shalli Rani and Muhammad Azeem Akbar
},
url = {https://link.springer.com/article/10.1007/s10791-025-09560-y},
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
journal = {Discover Computing },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Alshahrani, Rami Ayied; Khanzada, Tariq Jamil Saifullah
Improved Crime Prediction using Hybrid Neural Architecture Search together with Hyper-parameter Tuning Technical Report
2025.
@techreport{nokey,
title = {Improved Crime Prediction using Hybrid Neural Architecture Search together with Hyper-parameter Tuning},
author = {Rami Ayied Alshahrani and Tariq Jamil Saifullah Khanzada},
url = {https://www.researchsquare.com/article/rs-6079106/v1},
year = {2025},
date = {2025-05-01},
urldate = {2025-05-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Lan, Guojun; Tang, Jian; Chen, Jie; Xing, Jingshu; Zhao, Lijun
An effective dual-predictor controller mechanism using neural architecture search for optimization of residential energy hub system Journal Article
In: Discover Computing, vol. 28, 2025.
@article{nokey,
title = {An effective dual-predictor controller mechanism using neural architecture search for optimization of residential energy hub system},
author = {Guojun Lan and Jian Tang and Jie Chen and Jingshu Xing and Lijun Zhao
},
url = {https://link.springer.com/article/10.1007/s10791-025-09533-1},
year = {2025},
date = {2025-04-09},
urldate = {2025-04-09},
journal = {Discover Computing },
volume = {28},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Castagnetti, Andrea; Pegatoquet, Alain; Miramond, Benoit; Montfort, Olivier; Huard, Vincent
Hardware-Aware Neural Architecture Search for Memory constrained Embedded Neural Networks Accelerators Proceedings Article
In: 7th IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2025), pp. 5, Bordeaux, France, 2025.
@inproceedings{castagnetti:hal-05052083,
title = {Hardware-Aware Neural Architecture Search for Memory constrained Embedded Neural Networks Accelerators},
author = {Andrea Castagnetti and Alain Pegatoquet and Benoit Miramond and Olivier Montfort and Vincent Huard},
url = {https://hal.science/hal-05052083},
year = {2025},
date = {2025-04-01},
urldate = {2025-04-01},
booktitle = {7th IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2025)},
pages = {5},
address = {Bordeaux, France},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Wu, Yue; Huang, Lin; Yang, Tiejun
Breast Ultrasound Image Segmentation Using Multi-branch Skip Connection Search Journal Article
In: Journal of Imaging Informatics in Medicine, 2025.
@article{wu-jiim25a,
title = {Breast Ultrasound Image Segmentation Using Multi-branch Skip Connection Search},
author = { Yue Wu and Lin Huang and Tiejun Yang
},
url = {https://link.springer.com/article/10.1007/s10278-025-01487-6},
year = {2025},
date = {2025-04-01},
urldate = {2025-04-01},
journal = {Journal of Imaging Informatics in Medicine },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Capello, Alessio
An End-to-End Edge-Computing Framework for IoT-enabled Monitoring PhD Thesis
2025.
@phdthesis{nokey,
title = {An End-to-End Edge-Computing Framework for IoT-enabled Monitoring},
author = {Alessio Capello},
url = {https://tesidottorato.depositolegale.it/handle/20.500.14242/199679?mode=simple},
year = {2025},
date = {2025-04-01},
urldate = {2025-04-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Toe, Seb Gregory Dal; Tiddeman, Bernard; Zarges, Christine
Evolutionary Neural Architecture Search using Random Weight Distributions Proceedings Article
In: Ochoa, Gabriela (Ed.): Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO 2025, Málaga, Spain, July 14-18, 2025, Association for Computing Machinery (ACM), United States of America, 2025, (GECCO 2025 : The Genetic and Evolutionary Computation Conference, GECCO ; Conference date: 14-07-2025 Through 18-07-2025).
@inproceedings{581d1ec7c5934c3db7cf4e0f5c2b67f1,
title = {Evolutionary Neural Architecture Search using Random Weight Distributions},
author = {Seb Gregory Dal Toe and Bernard Tiddeman and Christine Zarges},
editor = {Gabriela Ochoa},
url = {https://gecco-2025.sigevo.org/HomePage},
doi = {10.1145/3712255.3726664},
year = {2025},
date = {2025-03-19},
urldate = {2025-03-19},
booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO 2025, Málaga, Spain, July 14-18, 2025},
publisher = {Association for Computing Machinery (ACM)},
address = {United States of America},
abstract = {We consider the problem of efficiently searching for high-performing neural architectures whilst simultaneously favouring networks of reduced complexity. It is theorised that a complementary set of proxies can be employed for multi-objective optimisation to balance model performance with the size of the network. We demonstrate that a low-cost proxy for the test accuracy of a candidate architecture can be derived from a series of inferences alone. The proxy is paired with a complexity metric based on the number of parameters in the model and the two properties are used in a multi-objective setting. A Pareto Archived Evolutionary Strategy is used to optimise the two objectives simultaneously and deliver a diverse collection of solutions as output. This method is shown to successfully discover low-complexity architectures with minor loss of accuracy as compared to the global optima and does so with statistical reliability. This work offers a proof-of-concept Neural Architecture Search algorithm that removes training from the process entirely. The proposed approach is examined in terms of search behaviour and the complexity reduction that can be achieved by comparing discovered solutions to the top-performing architectures in the search space.},
note = {GECCO 2025 : The Genetic and Evolutionary Computation Conference, GECCO ; Conference date: 14-07-2025 Through 18-07-2025},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Tran, Thanh Hai; Nguyen, Dac Tam; Ngo, Minh Duc; Doan, Long; Luong, Ngoc Hoang; Binh, Huynh Thi Thanh
Kernelshap-nas: a shapley additive explanatory approach for characterizing operation influences Journal Article
In: Neural Computing and Applications, 2025.
@article{nokey,
title = {Kernelshap-nas: a shapley additive explanatory approach for characterizing operation influences},
author = {Thanh Hai Tran and Dac Tam Nguyen and Minh Duc Ngo and Long Doan and Ngoc Hoang Luong and Huynh Thi Thanh Binh
},
url = {https://link.springer.com/article/10.1007/s00521-025-11071-2},
year = {2025},
date = {2025-03-05},
urldate = {2025-03-05},
journal = {Neural Computing and Applications },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ju, Moran; Niu, Buniu; Li, Mulin; Mao, Tengkai; Jin, Si-nian
Toward Better Accuracy-Efficiency Trade-Offs for Oriented SAR Ship Object Detection Journal Article
In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2025.
@article{ju-jstaeors,
title = {Toward Better Accuracy-Efficiency Trade-Offs for Oriented SAR Ship Object Detection},
author = {Moran Ju and Buniu Niu and Mulin Li and Tengkai Mao and Si-nian Jin},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10944503},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Guoqing; Qian, Yuhua; Liang, Xinyan; Fu, Pinhan
Core structure-guided multi-modal classification via Monte Carlo Tree Search Journal Article
In: International Journal of Machine Learning and Cybernetics, 2025.
@article{liu-ijmlc25a,
title = {Core structure-guided multi-modal classification via Monte Carlo Tree Search},
author = { Guoqing Liu and Yuhua Qian and Xinyan Liang and Pinhan Fu
},
url = {https://link.springer.com/article/10.1007/s13042-025-02606-z},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {International Journal of Machine Learning and Cybernetics },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hernandez, Esau Alain Hervert; Cao, Yan; Kehtarnavaz, Nasser
Computationally Efficient Neural Architecture Search for Image Denoising Journal Article
In: IEEE Access, 2025.
@article{nokey,
title = {Computationally Efficient Neural Architecture Search for Image Denoising},
author = {Esau Alain Hervert Hernandez and Yan Cao and Nasser Kehtarnavaz},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10948435},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {IEEE Access},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Naayini, Prudhvi; Kamatala, Srikanth; Myakala, Praveen Kumar
Transforming Performance Engineering with Generative AI Journal Article
In: Journal of Computer and Communications, vol. 13, no. 3, 2025.
@article{Naayini-jcc25a,
title = { Transforming Performance Engineering with Generative AI },
author = { Prudhvi Naayini and Srikanth Kamatala and Praveen Kumar Myakala},
url = {https://www.scirp.org/journal/paperinformation?paperid=141454},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {Journal of Computer and Communications },
volume = {13},
number = {3},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xu, Dikai; Cao, Bin
Adaptive Multiobjective Evolutionary Generative Adversarial Network for Metaverse Network Intrusion Detection Journal Article
In: Science Partner Journals, 2025.
@article{nokey,
title = {Adaptive Multiobjective Evolutionary Generative Adversarial Network for Metaverse Network Intrusion Detection},
author = {Dikai Xu and Bin Cao},
url = {https://spj.science.org/doi/pdf/10.34133/research.0665},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {Science Partner Journals},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Yuangang; Ma, Rui; Zhang, Qian; Wang, Zeyu; Zong, Linlin; Liu, Xinyue
Neural architecture search using attention enhanced precise path evaluation and efficient forward evolution Journal Article
In: Scientific Reports, 2025.
@article{nokey,
title = {Neural architecture search using attention enhanced precise path evaluation and efficient forward evolution},
author = {Yuangang Li and Rui Ma and Qian Zhang and Zeyu Wang and Linlin Zong and Xinyue Liu
},
url = {https://www.nature.com/articles/s41598-025-94187-8},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {Scientific Reports},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lin, Hung-I; Kuo, Lin-Jing; Wang, Sheng-De
HPE-DARTS: Hybrid Pruning and Proxy Evaluation in Differentiable Architecture Search Collection
2025.
@collection{lin-,
title = {HPE-DARTS: Hybrid Pruning and Proxy Evaluation in Differentiable Architecture Search},
author = {Hung-I Lin and Lin-Jing Kuo and Sheng-De Wang},
url = {https://www.scitepress.org/Papers/2025/131487/131487.pdf},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
booktitle = {Proceedings of the 17th International Conference on Agents and Artificial Intelligence (ICAART 2025)},
journal = {Proceedings of the 17th International Conference on Agents and Artificial Intelligence (ICAART 2025)},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Friderikos, O.; Mendoza, A.; Baranger, Emmanuel; Sagris, D.; David, C.
Optimized LSTM Neural Networks via Neural Architecture Search for Predicting Damage Evolution in Composite Laminates Collection
2025.
@collection{Friderikos-dte25a,
title = {OPTIMIZED LSTM NEURAL NETWORKS VIA NEURAL ARCHITECTURE SEARCH FOR PREDICTING DAMAGE EVOLUTION IN COMPOSITE LAMINATES},
author = {O. Friderikos and A. Mendoza and Emmanuel Baranger and D. Sagris and C. David},
url = {https://congressarchive.cimne.com/dte_aicomas_2025/abstracts/b8d1d10a96b711efba01000c29ddfc0c.pdf},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
booktitle = {Digital Twins in Engineering & Artificial Intelligence and Computational Methods in Applied Science, DTE - AICOMAS 2025},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Fang, Xuwei; Xie, Weisheng; Li, Hui; Zhou, Wenbin; Hang, Chen; Gao, Xiangxiang
DARTS-EAST: an edge-adaptive selection with topology first differentiable architecture selection method Journal Article
In: Applied Intelligence, 2025.
@article{fang-ai25a,
title = {DARTS-EAST: an edge-adaptive selection with topology first differentiable architecture selection method},
author = {Xuwei Fang and Weisheng Xie and Hui Li and Wenbin Zhou and Chen Hang and Xiangxiang Gao
},
url = {https://link.springer.com/article/10.1007/s10489-025-06353-0},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {Applied Intelligence },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nguyen, Tien Dung; Mokhtari, Nassim; Nédélec, Alexis
Neural Architecture Search: Tradeoff Between Performance and Efficiency Collection
2025.
@collection{nokey,
title = {Neural Architecture Search: Tradeoff Between Performance and Efficiency},
author = {Tien Dung Nguyen and Nassim Mokhtari and Alexis Nédélec},
url = {https://www.scitepress.org/Papers/2025/132969/132969.pdf},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
booktitle = {Proceedings of the 17th International Conference on Agents and Artificial Intelligence (ICAART 2025)},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Gao, Tianxiao; Guo, Li; Wang, Shihao; Zhu, Shiai; Zhou, Dajiang
PQNAS: Mixed-precision Quantization-aware Neural Architecture Search with Pseudo Quantizer Collection
2025.
@collection{gao-icassp25a,
title = {PQNAS: Mixed-precision Quantization-aware Neural Architecture Search with Pseudo Quantizer},
author = {Tianxiao Gao and Li Guo and Shihao Wang and Shiai Zhu and Dajiang Zhou},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10888233},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
booktitle = {2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
He, Zhimin; Chen, Hongxiang; Zhou, Yan; Situ, Haozhen; Li, Yongyao; Li, Lvzhou
Self-supervised representation learning for Bayesian quantum architecture search Journal Article
In: Phys. Rev. A, vol. 111, iss. 3, pp. 032403, 2025.
@article{PhysRevA.111.032403,
title = {Self-supervised representation learning for Bayesian quantum architecture search},
author = {Zhimin He and Hongxiang Chen and Yan Zhou and Haozhen Situ and Yongyao Li and Lvzhou Li},
url = {https://link.aps.org/doi/10.1103/PhysRevA.111.032403},
doi = {10.1103/PhysRevA.111.032403},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {Phys. Rev. A},
volume = {111},
issue = {3},
pages = {032403},
publisher = {American Physical Society},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Feng, Shiyang; Li, Zhaowei; Zhang, Bo; Chen, Tao
DSF2-NAS: Dual-Stage Feature Fusion via Network Architecture Search for Classification of Multimodal Remote Sensing Images Journal Article
In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2025.
@article{feng-ieeejstoaeors25a,
title = {DSF2-NAS: Dual-Stage Feature Fusion via Network Architecture Search for Classification of Multimodal Remote Sensing Images},
author = {Shiyang Feng and Zhaowei Li and Bo Zhang and Tao Chen},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10904332},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = { IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chaudhary, Priyanka Rushikesh; Agrawal, Anand; Maiti, Rajib Ranjan
TinyDevID: TinyML-Driven IoT Devices IDentification Using Network Flow Data Collection
2025.
@collection{Rushikesh-csp25a,
title = {TinyDevID: TinyML-Driven IoT Devices IDentification Using Network Flow Data},
author = {Priyanka Rushikesh Chaudhary and Anand Agrawal and Rajib Ranjan Maiti},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10885715},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
booktitle = {COMSNETS 2025 - Cybersecurity & Privacy Workshop (CSP)},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Yu, Sixing
Scalable and Resource-Efficient Federated Learning: Techniques for Resource-Constrained Heterogeneous Systems PhD Thesis
2025.
@phdthesis{yu-phd25a,
title = {Scalable and resource-efficient federated learning: Techniques for resource-constrained heterogeneous systems},
author = {Sixing Yu},
url = {https://www.proquest.com/docview/3165602177?pq-origsite=gscholar&fromopenview=true&sourcetype=Dissertations%20&%20Theses},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Fu, Jintao; Cong, Peng; Xu, Shuo; Chang, Jiahao; Liu, Ximing; Sun, Yuewen
Neural architecture search with Deep Radon Prior for sparse-view CT image reconstruction Journal Article
In: Medical Physics, 2025.
@article{Fu-medphs25a,
title = { Neural architecture search with Deep Radon Prior for sparse-view CT image reconstruction },
author = {Jintao Fu and Peng Cong and Shuo Xu and Jiahao Chang and Ximing Liu and Yuewen Sun
},
url = {https://pubmed.ncbi.nlm.nih.gov/39930320/},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
journal = { Med Phys },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhao, Yi-Heng; Pang, Shen-Wen; Huang, Heng-Zhi; Wu, Shao-Wen; Sun, Shao-Hua; Liu, Zhen-Bing; Pan, Zhi-Chao
Automatic clustering of single-molecule break junction data through task-oriented representation learning Journal Article
In: Rare Metals, 2025.
@article{zhao-rarem25a,
title = {Automatic clustering of single-molecule break junction data through task-oriented representation learning},
author = {
Yi-Heng Zhao and Shen-Wen Pang and Heng-Zhi Huang and Shao-Wen Wu and Shao-Hua Sun and Zhen-Bing Liu and Zhi-Chao Pan
},
url = {https://link.springer.com/article/10.1007/s12598-024-03089-7},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
journal = { Rare Metals },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Huang, Tao
Efficient Deep Neural Architecture Design and Training PhD Thesis
2025.
@phdthesis{nokey,
title = {Efficient Deep Neural Architecture Design and Training},
author = { Huang, Tao },
url = {https://ses.library.usyd.edu.au/handle/2123/33598},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Herterich, Nils; Liu, Kai; Stein, Anthony
Multi-objective neural architecture search for real-time weed detection on embedded system Miscellaneous
2025.
@misc{Herterich,
title = {Multi-objective neural architecture search for real-time weed detection on embedded system},
author = {Nils Herterich and Kai Liu and Anthony Stein},
url = {https://dl.gi.de/server/api/core/bitstreams/29a49f8d-304e-4073-8a92-4bef6483c087/content},
year = {2025},
date = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Tabak, Gabriel Couto; Molenaar, Dylan; Curi, Mariana
An evolutionary neural architecture search for item response theory autoencoders Journal Article
In: Behaviormetrika, 2025.
@article{nokey,
title = {An evolutionary neural architecture search for item response theory autoencoders},
author = {Gabriel Couto Tabak and Dylan Molenaar and Mariana Curi
},
url = {https://link.springer.com/article/10.1007/s41237-024-00250-5},
year = {2025},
date = {2025-01-27},
urldate = {2025-01-27},
journal = {Behaviormetrika },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hao, Debei; Pei, Songwei
MIG-DARTS: towards effective differentiable architecture search by gradually mitigating the initial-channel gap between search and evaluation Journal Article
In: Neural Computing and Applications, 2025.
@article{nokey,
title = {MIG-DARTS: towards effective differentiable architecture search by gradually mitigating the initial-channel gap between search and evaluation},
author = {
Debei Hao and Songwei Pei
},
url = {https://link.springer.com/article/10.1007/s00521-024-10681-6},
year = {2025},
date = {2025-01-09},
urldate = {2025-01-09},
journal = {Neural Computing and Applications},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhao, Yiwei; Chen, Jinhui; Zhang, Sai Qian; Sarwar, Syed Shakib; Stangherlin, Kleber Hugo; Gomez, Jorge Tomas; Seo, Jae-Sun; De Salvo, Barbara; Liu, Chiao; Gibbons, Phillip B.; Li, Ziyun
H4H: Hybrid Convolution-Transformer Architecture Search for NPU-CIM Heterogeneous Systems for AR/VR Applications Collection
2025.
@collection{nokey,
title = {H4H: Hybrid Convolution-Transformer Architecture Search for NPU-CIM Heterogeneous Systems for AR/VR Applications},
author = {Yiwei Zhao and Jinhui Chen and Sai Qian Zhang and Syed Shakib Sarwar and Kleber Hugo Stangherlin and Jorge Tomas Gomez and Jae-Sun Seo and Barbara De Salvo and Chiao Liu and Phillip B. Gibbons and Ziyun Li},
url = {https://www.pdl.cmu.edu/PDL-FTP/associated/ASP-DAC2025-1073-12.pdf},
year = {2025},
date = {2025-01-02},
urldate = {2025-01-02},
booktitle = {ASPDAC ’25},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Yoo, Eunjoung; Sim, Jaehyeong
ViT-Slim: A Genetic Algorithm-based NAS Framework for Efficient Vision Transformer Design Proceedings Article
In: 2025 IEEE Conference on Artificial Intelligence (CAI), pp. 796-802, 2025.
@inproceedings{11050569,
title = {ViT-Slim: A Genetic Algorithm-based NAS Framework for Efficient Vision Transformer Design},
author = {Eunjoung Yoo and Jaehyeong Sim},
url = {https://ieeexplore.ieee.org/abstract/document/11050569},
doi = {10.1109/CAI64502.2025.00142},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Conference on Artificial Intelligence (CAI)},
pages = {796-802},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Li, Yaochong; Zhang, Jing; Zhou, Rigui; Qu, Yi; Xu, Ruiqing
AQEA-QAS: An Adaptive Quantum Evolutionary Algorithm for Quantum Architecture Search Journal Article
In: Entropy, vol. 27, no. 7, 2025, ISSN: 1099-4300.
@article{e27070733,
title = {AQEA-QAS: An Adaptive Quantum Evolutionary Algorithm for Quantum Architecture Search},
author = {Yaochong Li and Jing Zhang and Rigui Zhou and Yi Qu and Ruiqing Xu},
url = {https://www.mdpi.com/1099-4300/27/7/733},
doi = {10.3390/e27070733},
issn = {1099-4300},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Entropy},
volume = {27},
number = {7},
abstract = {Quantum neural networks (QNNs) represent an emerging technology that uses a quantum computer for neural network computations. The QNNs have demonstrated potential advantages over classical neural networks in certain tasks. As a core component of a QNN, the parameterized quantum circuit (PQC) plays a crucial role in determining the QNN’s overall performance. However, quantum circuit architectures designed manually based on experience or using specific hardware structures can suffer from inefficiency due to the introduction of redundant quantum gates, which amplifies the impact of noise on system performance. Recent studies have suggested that the advantages of quantum evolutionary algorithms (QEAs) in terms of precision and convergence speed can provide an effective solution to quantum circuit architecture-related problems. Currently, most QEAs adopt a fixed rotation mode in the evolution process, and a lack of an adaptive updating mode can cause the QEAs to fall into a local optimum and make it difficult for them to converge. To address these problems, this study proposes an adaptive quantum evolution algorithm (AQEA). First, an adaptive mechanism is introduced to the evolution process, and the strategy of combining two dynamic rotation angles is adopted. Second, to prevent the fluctuations of the population’s offspring, the elite retention of the parents is used to ensure the inheritance of good genes. Finally, when the population falls into a local optimum, a quantum catastrophe mechanism is employed to break the current population state. The experimental results show that compared with the QNN structure based on manual design and QEA search, the proposed AQEA can reduce the number of network parameters by up to 20% and increase the accuracy by 7.21%. Moreover, in noisy environments, the AQEA-optimized circuit outperforms traditional circuits in maintaining high fidelity, and its excellent noise resistance provides strong support for the reliability of quantum computing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yu, Yang; Qian, Hong; Hu, Yi-Qi
Calculation Operation Optimization: Competition Neural Architecture Search Book Chapter
In: Derivative-Free Optimization: Theoretical Foundations, Algorithms, and Applications, pp. 177–193, Springer Nature Singapore, Singapore, 2025, ISBN: 978-981-96-5929-6.
@inbook{Yu2025,
title = {Calculation Operation Optimization: Competition Neural Architecture Search},
author = {Yang Yu and Hong Qian and Yi-Qi Hu},
url = {https://doi.org/10.1007/978-981-96-5929-6_14},
doi = {10.1007/978-981-96-5929-6_14},
isbn = {978-981-96-5929-6},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Derivative-Free Optimization : Theoretical Foundations, Algorithms, and Applications},
pages = {177–193},
publisher = {Springer Nature Singapore},
address = {Singapore},
abstract = {This chapter introduces Competition Neural Architecture Search (CNAS), a method for automatically designing neural network architectures. CNAS separates the search process into two parts: topological structure enumeration and calculation operation optimization. The topological structures are enumerated under depth and width constraints, while the calculation operations are optimized using derivative-free optimization (DFO) methods. A competition mechanism is employed to iteratively eliminate poorly performing structures, ensuring that the best architecture is selected. To improve efficiency, CNAS uses block-based search and experience reuse, leveraging historical data to warm-start the optimization process and simulate competitions. The chapter presents empirical results on image classification and denoising tasks, demonstrating that CNAS achieves competitive performance compared to manual designs and state-of-the-art NAS methods. The experiments highlight CNAS's ability to efficiently explore the architecture space and produce high-quality network designs.},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
Chen, Chih-Ling; Wu, Kai-Chiang; Huang, Ning-Chi
Integrating Neural Architecture Search and Rematerialization for Efficient On-Device Learning Proceedings Article
In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1226–1234, Association for Computing Machinery, NH Malaga Hotel, Malaga, Spain, 2025, ISBN: 9798400714658.
@inproceedings{10.1145/3712256.3726482,
title = {Integrating Neural Architecture Search and Rematerialization for Efficient On-Device Learning},
author = {Chih-Ling Chen and Kai-Chiang Wu and Ning-Chi Huang},
url = {https://doi.org/10.1145/3712256.3726482},
doi = {10.1145/3712256.3726482},
isbn = {9798400714658},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
pages = {1226–1234},
publisher = {Association for Computing Machinery},
address = {NH Malaga Hotel, Malaga, Spain},
series = {GECCO '25},
abstract = {Deep neural networks (DNNs) have notable performance in many fields, such as computer vision. Training a neural network on an edge device, commonly called on-device learning, has grown crucial for applications demanding real-time processing and enhanced privacy. However, existing on-device learning methods often face limitations, such as decreasing application accuracy, causing complexity in design and implementation, and increasing computational overhead, all of which hinder their effectiveness in reducing memory usage. In this paper, we address the issue by inspecting the memory usage of training a DNN, analyzing the effects of different on-device learning strategies, and introducing a framework that integrates neural architecture search (NAS) and rematerialization. The supernet of NAS can provide a population of compressed subnets/architectures to be trained without additional computational overhead, while rematerialization can mitigate memory consumption without accuracy loss. By leveraging the memory-saving effect of both supernet-based model compression and rematerialization, our proposed method can obtain suitable models that fit within the memory constraint while achieving a better trade-off between training time and model performance. In the experiments, we utilized complex datasets (CIFAR-100 and CUB-200) to fine-tune models on Raspberry Pi. The experimental results represent the effectiveness of our method in real-world on-device learning scenarios.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Huang, Junhao; Xue, Bing; Sun, Yanan; Zhang, Mengjie
Evolving Comprehensive Proxies for Zero-Shot Neural Architecture Search Book Chapter
In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1246–1254, Association for Computing Machinery, New York, NY, USA, 2025, ISBN: 9798400714658.
@inbook{10.1145/3712256.3726315,
title = {Evolving Comprehensive Proxies for Zero-Shot Neural Architecture Search},
author = {Junhao Huang and Bing Xue and Yanan Sun and Mengjie Zhang},
url = {https://doi.org/10.1145/3712256.3726315},
isbn = {9798400714658},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
pages = {1246–1254},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
abstract = {Neural architecture search (NAS) has emerged as a promising technology for automatically designing deep neural network (DNN) architectures. However, its development is significantly constrained by the prohibitively high computational cost of architecture evaluations. Recently, zero-shot NAS has addressed this challenge by employing zero-cost proxies to evaluate candidate architectures without expensive gradient training, effectively mitigating the timeintensive nature of NAS. However, a major limitation is that most existing zero-cost proxies focus narrowly on a single aspect of DNNs, resulting in biased evaluations with generally weak correlations to the network's performance. In this work, we address this issue by assembling four distinct zero-cost proxies in a nonlinear fashion to provide a comprehensive evaluation of DNNs across multiple dimensions, including expressivity, convergence, generalization, and parameter saliency. Furthermore, we develop an adaptive particle swarm optimization-based approach to effectively evolve the coefficients of each base proxy in the ensemble for task-specific optimization. Extensive experimental results on various NAS benchmarks and open-domain search spaces demonstrate the effectiveness of the proposed method. Our findings show that the complementarity of zero-cost proxies greatly improves the reliability of performance evaluation, thereby enabling zero-shot NAS to identify more promising network architectures.},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
Lu, Qingya; Wang, Zijie; Zhang, Xiaoman; Zhong, Yue
Mitigating Exploding Gradients in Large Language Models with Neural Architecture Search Proceedings Article
In: 2025 IEEE Conference on Artificial Intelligence (CAI), pp. 1139-1141, 2025.
@inproceedings{11050664,
title = {Mitigating Exploding Gradients in Large Language Models with Neural Architecture Search},
author = {Qingya Lu and Zijie Wang and Xiaoman Zhang and Yue Zhong},
url = {https://ieeexplore.ieee.org/document/11050664},
doi = {10.1109/CAI64502.2025.00197},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Conference on Artificial Intelligence (CAI)},
pages = {1139-1141},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Wang, Maolin; Wei, Tianshuo; Zhang, Sheng; Guo, Ruocheng; Wang, Wanyu; Ye, Shanshan; Zou, Lixin; Wei, Xuetao; Zhao, Xiangyu
DANCE: Resource-Efficient Neural Architecture Search with Data-Aware and Continuous Adaptation Technical Report
2025.
@techreport{wang2025danceresourceefficientneuralarchitecture,
title = {DANCE: Resource-Efficient Neural Architecture Search with Data-Aware and Continuous Adaptation},
author = {Maolin Wang and Tianshuo Wei and Sheng Zhang and Ruocheng Guo and Wanyu Wang and Shanshan Ye and Lixin Zou and Xuetao Wei and Xiangyu Zhao},
url = {https://arxiv.org/abs/2507.04671},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Hamdi, Ahmed; Noura, Hassan; Azar, Joseph; Pujolle, Guy
Frugal Object Detection Models: Solutions, Challenges and Future Directions Proceedings Article
In: 2025 International Wireless Communications and Mobile Computing (IWCMC), pp. 1694-1701, 2025.
@inproceedings{11059526,
title = {Frugal Object Detection Models: Solutions, Challenges and Future Directions},
author = {Ahmed Hamdi and Hassan Noura and Joseph Azar and Guy Pujolle},
doi = {10.1109/IWCMC65282.2025.11059526},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 International Wireless Communications and Mobile Computing (IWCMC)},
pages = {1694-1701},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Princess, P. Joyce Beryl; Silas, Salaja; Rajsingh, Elijah Blessing; Gao, Xiao-Zhi
Human posture recognition using random search neural architecture for accident injury severity prediction and victim identification Journal Article
In: Image and Vision Computing, vol. 161, pp. 105654, 2025, ISSN: 0262-8856.
@article{PRINCESS2025105654,
title = {Human posture recognition using random search neural architecture for accident injury severity prediction and victim identification},
author = {P. Joyce Beryl Princess and Salaja Silas and Elijah Blessing Rajsingh and Xiao-Zhi Gao},
url = {https://www.sciencedirect.com/science/article/pii/S0262885625002422},
doi = {https://doi.org/10.1016/j.imavis.2025.105654},
issn = {0262-8856},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Image and Vision Computing},
volume = {161},
pages = {105654},
abstract = {Numerous lives are lost due to the ignorance of the victim's conditions and the injury severity in road accidents. In developing countries like India, the challenges of emergency responders are identifying and prioritizing victims and their injuries amid chaotic environments. In this paper, a systematic approach for accident injury severity and victim identification using human posture recognition and instance segmentation is proposed. To overcome the challenge of fixed architectures limiting the adaptability to diverse accident scenarios, Random Search Neural Architecture Search (RNAS) is adapted to automatically find an optimal Convolutional Neural Network (CNN). To enhance the efficiency and accuracy of identifying victims in accident scenes, Mask RCNN, an instance segmentation technique trained over accident images, has been used. By leveraging computer vision techniques, an automated accident injury severity and victim identification system facilitating more timely decision-making for the emergency response systems is designed. The model has been experimented with and evaluated, and the introduction of random search neural architecture has determined a computationally less expensive CNN model. The model can produce an accuracy of 95% in recognizing the human posture. Further, Mask RCNN is trained, experimented with, and validated on accident images to produce 0.99 mAP in identifying victims.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sasidevi, J.; Sathish, A.; Vatchala, S.; Nallusamy, M.
Nasnet with African vulture optimization for detecting diabetic retinopathy stages in retinal fundus images Journal Article
In: Expert Systems with Applications, pp. 128910, 2025, ISSN: 0957-4174.
@article{SASIDEVI2025128910,
title = {Nasnet with African vulture optimization for detecting diabetic retinopathy stages in retinal fundus images},
author = {J. Sasidevi and A. Sathish and S. Vatchala and M. Nallusamy},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425025278},
doi = {https://doi.org/10.1016/j.eswa.2025.128910},
issn = {0957-4174},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Expert Systems with Applications},
pages = {128910},
abstract = {Diabetes retinopathy is a disorder that damages the retinal blood vessels and leads to severe vision problems. Even for experts, it can be difficult to recognize a disease in its initial stage and its diagnosis seems to be time-consuming. It is difficult to increase accuracy and decrease time complexity using conventional detection techniques like Inception v3 and Xception. Neural Architecture Search Network was proposed as a solution to these issues in order to identify the exact stage of diabetic retinopathy. Initially, input retinal images are gathered, and then preprocessed the input images by utilizing anisotropic diffusion, CLAHE (Contrast Limited Adaptive Histogram Equalization) as well as high boost filtering. Noise from the input image is removed using an anisotropic diffusion technique. Utilizing CLAHE, the image’s brightness is increased, and using high boost filtering, the image’s edges are sharpened. Following that, the preprocessed image is segmented by using a No New U-Net (NNU-Net). NNU-Net was used to predict surface mesh distances, enhancing segmentation performance through extensive data augmentation techniques. Finally, the Neural Architecture Search Network (NASNet) classifier is utilized on the segmented image to predict the stages of diabetic retinopathy. African vulture optimization is used to optimally select the hyperparameter such as batch size that is used to enhance the accuracy of the classifier. An analysis of the proposed method’s simulation results indicates that it achieves 97.4% accuracy, 2.56 error, 90.4% precision, as well as 98.4% specificity. Consequently, Compared to other existing techniques, the proposed methodology performs better. This prediction approach enhances the quality of life for patients by predicting diabetic retinopathy disease at an early stage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Fu, Zhiyang; Xu, Yongpeng; Liang, Hanqing; Yan, Ye; Gao, Penglu; Jiang, Xiuchen
HKT-FNAS: Search Robust Neural Architecture via Heterogeneous Knowledge Transfer for Defect Detection of PV Energy Storage Modules Journal Article
In: Tsinghua Science and Technology, 2025.
@article{Fu2025,
title = {HKT-FNAS: Search Robust Neural Architecture via Heterogeneous Knowledge Transfer for Defect Detection of PV Energy Storage Modules},
author = {Zhiyang Fu and Yongpeng Xu and Hanqing Liang and Ye Yan and Penglu Gao and Xiuchen Jiang},
url = {https://www.sciopen.com/article/10.26599/TST.2025.9010113},
doi = {10.26599/TST.2025.9010113},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Tsinghua Science and Technology},
abstract = {Defect detection has become a key task in intelligent manufacturing of photovoltaic (PV) energy storage modules. Nevertheless, noise and unpredictable uncertainties, which adversely affects the effectiveness of Convolutional Neural Networks (CNNs), making it difficult to determine the optimal network architecture and thereby hindering industrial applicability. To overcome the aforementioned challenge, we propose a progressive fuzzy neural architecture search framework via heterogeneous knowledge transfer (HKT-FNAS), aiming to search for efficient fuzzy CNN models with fuzzy processing ability. First, we propose a fuzzy CNN search space and an architectural representation strategy, which integrate a series of fuzzy operations (e.g., fuzzy convolution, and fuzzy pooling) into CNN. Then, we develop a progressive evolutionary search framework, which utilizes the knowledge transfer strategy to assist the multi-stage search. Especially, the architectural insights learned from a smaller search space are utilized to guide the exploration of the search over a larger space. Next, we devise a predictor-based evolutionary search strategy, which learns an online predictor to directly evaluate the candidate architectures for efficient evolution. We carry out a series of comparative experiments on widely used fuzzy benchmark datasets. Results validate the effectiveness and efficiency of the HKT-FNAS framework, achieving the state-of-the-art performance compared to counterparts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}