Maintained by Difan Deng and Marius Lindauer.
The following list collects papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for the empirical evaluation of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity across domains. For a comprehensive list of papers on Neural Architecture Search for Transformer-based search spaces, the awesome-transformer-search repo is all you need.
2022
Wen, Long; Gao, Liang; Li, Xinyu; Li, Hui
A new genetic algorithm based evolutionary neural architecture search for image classification Journal Article
In: Swarm and Evolutionary Computation, vol. 75, pp. 101191, 2022, ISSN: 2210-6502.
@article{WEN2022101191,
title = {A new genetic algorithm based evolutionary neural architecture search for image classification},
author = {Long Wen and Liang Gao and Xinyu Li and Hui Li},
url = {https://www.sciencedirect.com/science/article/pii/S2210650222001547},
doi = {10.1016/j.swevo.2022.101191},
issn = {2210-6502},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Swarm and Evolutionary Computation},
volume = {75},
pages = {101191},
abstract = {Deep Learning (DL) has achieved the great breakthrough in image classification. As DL structure is problem-dependent and it has the crucial impact on its performance, it is still necessary to re-design the structures of DL according to the actual needs, even there exists various benchmark DL structures. Neural Architecture Search (NAS) which can design the DL network automatically has been widely investigated. However, many NAS methods suffer from the huge computation time. To overcome this drawback, this research proposed a new Evolutionary Neural Architecture Search with RepVGG nodes (EvoNAS-Rep). Firstly, a new encoding strategy is developed, which can map the fixed-length encoding individual to DL structure with variable depth using RepVGG nodes. Secondly, Genetic Algorithm (GA) is adopted for searching the optimal individual and its corresponding DL model. Thirdly, the iterative training process is designed to train the DL model and to evolve the GA simultaneously. The proposed EvoNAS-Rep is validated on the famous CIFAR 10 and CIFAR 100. The results show that EvoNAS-Rep has obtained 96.35% and 79.82% with only near 0.2 GPU days, which is both effectiveness and efficiency. EvoNAS-Rep is also validated on two real-world applications, including the NEU-CLS and the Chest XRay 2017 datasets, and the results show that EvoNAS-Rep has achieved the competitive results.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Yifan; Liu, Jing; Teng, Yingzhi
A decomposition-based memetic neural architecture search algorithm for univariate time series forecasting Journal Article
In: Applied Soft Computing, vol. 130, pp. 109714, 2022, ISSN: 1568-4946.
@article{LI2022109714,
title = {A decomposition-based memetic neural architecture search algorithm for univariate time series forecasting},
author = {Yifan Li and Jing Liu and Yingzhi Teng},
url = {https://www.sciencedirect.com/science/article/pii/S1568494622007633},
doi = {10.1016/j.asoc.2022.109714},
issn = {1568-4946},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Applied Soft Computing},
volume = {130},
pages = {109714},
abstract = {Although deep learning has made remarkable progress in time series forecasting, enormous hyperparameters consume a lot of effort to tune. Moreover, to further build the forecasting models with better performance, time series decomposition is usually adopted to mine implicit patterns of the data. Inspired by the time series decomposition, automatically searching for a network architecture after decomposing the time series is proposed. The searching process is non-trivial and has two key challenges: 1) impairment of time series information after decomposing and 2) enlarged search space caused by the huge parameters to be optimized. In this paper, a decomposition-based memetic neural architecture search algorithm is proposed for univariate time series forecasting to address these two challenges. For the first challenge, a general univariate time series forecasting paradigm is designed as the building pipeline of the individual in the proposed algorithm, which considers both the decomposed components and the original series as the compensation information to improve the network representation ability. For the second challenge, with the intrinsic property of representation of individuals in mind, we design a decomposition-based memetic algorithm with a discriminative local search operator to automatically optimize the network configurations. The experimental results on nine benchmarks with four horizons and one application of remaining useful forecasting demonstrate that the discovered architectures by the proposed algorithm achieve competitive performance compared with six methods under aligned settings. Codes and models will be released in https://github.com/EavanLi/dMA-NAS-UTSF.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yang, Fan; Li, Xin; Shen, Jianbing
Nested Architecture Search for Point Cloud Semantic Segmentation Journal Article
In: IEEE Transactions on Image Processing, pp. 1-1, 2022.
@article{9919408,
title = {Nested Architecture Search for Point Cloud Semantic Segmentation},
author = {Fan Yang and Xin Li and Jianbing Shen},
url = {https://ieeexplore.ieee.org/abstract/document/9919408},
doi = {10.1109/TIP.2022.3147983},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {IEEE Transactions on Image Processing},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Shu, Yao; Chen, Yizhou; Dai, Zhongxiang; Low, Bryan Kian Hsiang
Neural ensemble search via Bayesian sampling Proceedings Article
In: Cussens, James; Zhang, Kun (Ed.): Uncertainty in Artificial Intelligence, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI 2022, 1-5 August 2022, Eindhoven, The Netherlands, pp. 1803–1812, PMLR, 2022.
@inproceedings{DBLP:conf/uai/ShuCDL22,
title = {Neural ensemble search via Bayesian sampling},
author = {Yao Shu and Yizhou Chen and Zhongxiang Dai and Bryan Kian Hsiang Low},
editor = {James Cussens and Kun Zhang},
url = {https://proceedings.mlr.press/v180/shu22a.html},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Uncertainty in Artificial Intelligence, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI 2022, 1-5 August 2022, Eindhoven, The Netherlands},
volume = {180},
pages = {1803--1812},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Jawahar, Ganesh; Mukherjee, Subhabrata; Liu, Xiaodong; Kim, Young Jin; Abdul-Mageed, Muhammad; Lakshmanan, Laks V. S.; Awadallah, Ahmed Hassan; Bubeck, Sébastien; Gao, Jianfeng
AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers Journal Article
In: CoRR, vol. abs/2210.07535, 2022.
@article{DBLP:journals/corr/abs-2210-07535,
title = {AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers},
author = {Ganesh Jawahar and Subhabrata Mukherjee and Xiaodong Liu and Young Jin Kim and Muhammad Abdul-Mageed and Laks V. S. Lakshmanan and Ahmed Hassan Awadallah and Sébastien Bubeck and Jianfeng Gao},
url = {https://doi.org/10.48550/arXiv.2210.07535},
doi = {10.48550/arXiv.2210.07535},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.07535},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chen, Hongjiang; Wang, Yang; Liu, Leibo; Wei, Shaojun; Yin, Shouyi
HQNAS: Auto CNN deployment framework for joint quantization and architecture search Journal Article
In: CoRR, vol. abs/2210.08485, 2022.
@article{DBLP:journals/corr/abs-2210-08485,
title = {HQNAS: Auto CNN deployment framework for joint quantization and architecture search},
author = {Hongjiang Chen and Yang Wang and Leibo Liu and Shaojun Wei and Shouyi Yin},
url = {https://doi.org/10.48550/arXiv.2210.08485},
doi = {10.48550/arXiv.2210.08485},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.08485},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Guo, Yong; Chen, Yaofo; Zheng, Yin; Chen, Qi; Zhao, Peilin; Chen, Jian; Huang, Junzhou; Tan, Mingkui
Pareto-aware Neural Architecture Generation for Diverse Computational Budgets Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-07634,
title = {Pareto-aware Neural Architecture Generation for Diverse Computational Budgets},
author = {Yong Guo and Yaofo Chen and Yin Zheng and Qi Chen and Peilin Zhao and Jian Chen and Junzhou Huang and Mingkui Tan},
url = {https://doi.org/10.48550/arXiv.2210.07634},
doi = {10.48550/arXiv.2210.07634},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.07634},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Chen, Hongjiang; Wang, Yang; Liu, Leibo; Wei, Shaojun; Yin, Shouyi
FAQS: Communication-efficient Federate DNN Architecture and Quantization Co-Search for personalized Hardware-aware Preferences Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-08450,
title = {FAQS: Communication-efficient Federate DNN Architecture and Quantization Co-Search for personalized Hardware-aware Preferences},
author = {Hongjiang Chen and Yang Wang and Leibo Liu and Shaojun Wei and Shouyi Yin},
url = {https://doi.org/10.48550/arXiv.2210.08450},
doi = {10.48550/arXiv.2210.08450},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.08450},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Fuxian; Yan, Huan; Jin, Guangyin; Liu, Yue; Li, Yong; Jin, Depeng
Automated Spatio-Temporal Synchronous Modeling with Multiple Graphs for Traffic Prediction Proceedings Article
In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 1084–1093, ACM, Atlanta, GA, USA, 2022.
@inproceedings{DBLP:conf/cikm/LiYJLLJ22,
title = {Automated Spatio-Temporal Synchronous Modeling with Multiple Graphs for Traffic Prediction},
author = {Fuxian Li and Huan Yan and Guangyin Jin and Yue Liu and Yong Li and Depeng Jin},
editor = {Mohammad Al Hasan and Li Xiong},
url = {https://doi.org/10.1145/3511808.3557243},
doi = {10.1145/3511808.3557243},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, October 17-21, 2022},
pages = {1084--1093},
publisher = {ACM},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Mao, Wei; Dai, Liuyao; Li, Kai; Cheng, Quan; Wang, Yuhang; Du, Laimin; Luo, Shaobo; Huang, Mingqiang; Yu, Hao
An Energy-Efficient Mixed-Bitwidth Systolic Accelerator for NAS-Optimized Deep Neural Networks Journal Article
In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, pp. 1-13, 2022.
@article{9920733,
title = {An Energy-Efficient Mixed-Bitwidth Systolic Accelerator for NAS-Optimized Deep Neural Networks},
author = {Wei Mao and Liuyao Dai and Kai Li and Quan Cheng and Yuhang Wang and Laimin Du and Shaobo Luo and Mingqiang Huang and Hao Yu},
url = {https://ieeexplore.ieee.org/abstract/document/9920733},
doi = {10.1109/TVLSI.2022.3210069},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {IEEE Transactions on Very Large Scale Integration (VLSI) Systems},
pages = {1-13},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Movahedi, Sajad; Adabinejad, Melika; Imani, Ayyoob; Keshavarz, Arezou; Dehghani, Mostafa; Shakery, Azadeh; Araabi, Babak Nadjar
Λ-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-07998,
title = {Λ-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells},
author = {Sajad Movahedi and Melika Adabinejad and Ayyoob Imani and Arezou Keshavarz and Mostafa Dehghani and Azadeh Shakery and Babak Nadjar Araabi},
url = {https://doi.org/10.48550/arXiv.2210.07998},
doi = {10.48550/arXiv.2210.07998},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.07998},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Hoang, Duc; Wang, Haotao; Zhao, Handong; Rossi, Ryan; Kim, Sungchul; Mahadik, Kanak; Wang, Zhangyang
AutoMARS: Searching to Compress Multi-Modality Recommendation Systems Proceedings Article
In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 727–736, Association for Computing Machinery, Atlanta, GA, USA, 2022, ISBN: 9781450392365.
@inproceedings{10.1145/3511808.3557242,
title = {AutoMARS: Searching to Compress Multi-Modality Recommendation Systems},
author = {Duc Hoang and Haotao Wang and Handong Zhao and Ryan Rossi and Sungchul Kim and Kanak Mahadik and Zhangyang Wang},
url = {https://doi.org/10.1145/3511808.3557242},
doi = {10.1145/3511808.3557242},
isbn = {9781450392365},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Proceedings of the 31st ACM International Conference on Information & Knowledge Management},
pages = {727–736},
publisher = {Association for Computing Machinery},
address = {Atlanta, GA, USA},
series = {CIKM '22},
abstract = {Web applications utilize Recommendation Systems (RS) to address the problem of consumer over-choices. Recent works have taken advantage of multi-modality or multi-view, input information (such as user interaction, images, texts, rating scores) to boost recommendation system performance compared with using single-modality information. However, the use of multi-modality input demands much higher computational cost and storage capacity. On the other hand, the real-world RS services usually have strict budgets on both time and space for a good customer experience. As a result, the model efficiency of multi-modality recommendation systems has gained increasing importance. While unfortunately, to the best of our knowledge, there is no existing study of a generic compression framework for multi-modality RS. In this paper, we investigate, for the first time, how to compress a multi-modality recommendation system with a fixed budget. Assuming that input information from different modalities are of unequal importance, a good compression algorithm should learn to automatically allocate different resource budgets to each input, based on their importance in maximally preserving recommendation efficacy. To this end, we leverage the tools of neural architecture search (NAS) and distillation and propose Auto Multi-modAlity Recommendation System (AutoMARS), a unified modality-aware model compression framework dedicated to multi-modality recommendation systems. We demonstrate the effectiveness and generality of AutoMARS by testing it on three different Amazon datasets of various sparsity. AutoMARS demonstrates superior multi-modality compression performance than previous state-of-the-art compression methods. For example on the Amazon Beauty dataset, we achieve on average a 20% higher accuracy over previous state-of-the-art methods, while enjoying 65% reduction over baselines. Codes are available at: https://github.com/VITA-Group/AutoMARS.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Wang, Zhaozhi; Su, Kefan; Zhang, Jian; Jia, Huizhu; Ye, Qixiang; Xie, Xiaodong; Lu, Zongqing
Multi-Agent Automated Machine Learning Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-09084,
title = {Multi-Agent Automated Machine Learning},
author = {Zhaozhi Wang and Kefan Su and Jian Zhang and Huizhu Jia and Qixiang Ye and Xiaodong Xie and Zongqing Lu},
url = {https://doi.org/10.48550/arXiv.2210.09084},
doi = {10.48550/arXiv.2210.09084},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.09084},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Sun, Jia; Li, Yanfeng; Chen, Houjin; Peng, Yahui
A Person Re-Identification Baseline Based on Attention Block Neural Architecture Search Proceedings Article
In: 2022 IEEE International Conference on Image Processing (ICIP), pp. 841-845, 2022.
@inproceedings{9897906,
title = {A Person Re-Identification Baseline Based on Attention Block Neural Architecture Search},
author = {Jia Sun and Yanfeng Li and Houjin Chen and Yahui Peng},
url = {https://ieeexplore.ieee.org/abstract/document/9897906},
doi = {10.1109/ICIP46576.2022.9897906},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 IEEE International Conference on Image Processing (ICIP)},
pages = {841-845},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Chau, Thomas Chun-Pong; Dudziak, Lukasz; Wen, Hongkai; Lane, Nicholas Donald; Abdelfattah, Mohamed S.
BLOX: Macro Neural Architecture Search Benchmark and Algorithms Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-07271,
title = {BLOX: Macro Neural Architecture Search Benchmark and Algorithms},
author = {Thomas Chun-Pong Chau and Lukasz Dudziak and Hongkai Wen and Nicholas Donald Lane and Mohamed S. Abdelfattah},
url = {https://doi.org/10.48550/arXiv.2210.07271},
doi = {10.48550/arXiv.2210.07271},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.07271},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Cai, He; Zhang, Zhaokai; Feng, Tianpeng; Guo, Yandong
DARTS-PD: Differentiable Architecture Search with Path-Wise Weight Sharing Derivation Proceedings Article
In: 2022 IEEE International Conference on Image Processing (ICIP), pp. 1256-1260, 2022.
@inproceedings{9897275,
title = {DARTS-PD: Differentiable Architecture Search with Path-Wise Weight Sharing Derivation},
author = {He Cai and Zhaokai Zhang and Tianpeng Feng and Yandong Guo},
url = {https://ieeexplore.ieee.org/abstract/document/9897275},
doi = {10.1109/ICIP46576.2022.9897275},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 IEEE International Conference on Image Processing (ICIP)},
pages = {1256-1260},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Sukthanker, Rhea; Dooley, Samuel; Dickerson, John P.; White, Colin; Hutter, Frank; Goldblum, Micah
On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-09943,
title = {On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition},
author = {Rhea Sukthanker and Samuel Dooley and John P. Dickerson and Colin White and Frank Hutter and Micah Goldblum},
url = {https://doi.org/10.48550/arXiv.2210.09943},
doi = {10.48550/arXiv.2210.09943},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.09943},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Yuhong; Li, Jiajie; Han, Cong; Li, Pan; Xiong, Jinjun; Chen, Deming
Extensible Proxy for Efficient NAS Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-09459,
title = {Extensible Proxy for Efficient NAS},
author = {Yuhong Li and Jiajie Li and Cong Han and Pan Li and Jinjun Xiong and Deming Chen},
url = {https://doi.org/10.48550/arXiv.2210.09459},
doi = {10.48550/arXiv.2210.09459},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.09459},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Shi, Huihong; You, Haoran; Zhao, Yang; Wang, Zhongfeng; Lin, Yingyan
NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-13361,
title = {NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks},
author = {Huihong Shi and Haoran You and Yang Zhao and Zhongfeng Wang and Yingyan Lin},
url = {https://doi.org/10.48550/arXiv.2210.13361},
doi = {10.48550/arXiv.2210.13361},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.13361},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Bober-Irizar, Mikel; Shumailov, Ilia; Zhao, Yiren; Mullins, Robert; Papernot, Nicolas
Architectural Backdoors in Neural Networks Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2206-07840,
title = {Architectural Backdoors in Neural Networks},
author = {Mikel Bober-Irizar and Ilia Shumailov and Yiren Zhao and Robert Mullins and Nicolas Papernot},
url = {https://doi.org/10.48550/arXiv.2206.07840},
doi = {10.48550/arXiv.2206.07840},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2206.07840},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Koh, Edwin J. Y.; Amini, Eiman; Gaur, Shruti; Maquieira, Miguel Becerra; Heck, Christian Jara; McLachlan, Geoffrey J.; Beaton, Nick
An Automated Machine learning (AutoML) approach to regression models in minerals processing with case studies of developing industrial comminution and flotation models Journal Article
In: Minerals Engineering, vol. 189, pp. 107886, 2022, ISSN: 0892-6875.
@article{KOH2022107886,
title = {An Automated Machine learning (AutoML) approach to regression models in minerals processing with case studies of developing industrial comminution and flotation models},
author = {Edwin J. Y. Koh and Eiman Amini and Shruti Gaur and Miguel Becerra Maquieira and Christian Jara Heck and Geoffrey J. McLachlan and Nick Beaton},
url = {https://www.sciencedirect.com/science/article/pii/S0892687522004964},
doi = {10.1016/j.mineng.2022.107886},
issn = {0892-6875},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Minerals Engineering},
volume = {189},
pages = {107886},
abstract = {Deep learning (DL), a subset of machine learning (ML) has been a popular research interest after obtaining remarkable achievements on various tasks like image classification, object detection, language processing, and artificial intelligence. However, the successes of these algorithms were highly dependent on human expertise for hyperparameter optimisation and data preparation. As a result, widespread application of DL systems in minerals processing is still absent despite the increasing ability to collect data from process information (PI) and assay data. Automated Machine Learning (AutoML) is an emerging area of research which aims to automate the development of ready-to-use end-to-end ML models with little to no user ML knowledge. However, existing commercially available AutoML algorithms are not well designed for minerals processing data. In this study, we develop an AutoML algorithm to develop steady-state minerals processing models suitable for mine scheduling and process optimisation. The algorithm consists of data filtering, temporal resolution alignment, feature selection, neural network architecture search, and development. The AutoML algorithm was tested on three case studies of different processes and ore types. These case studies cover the range of difficulties of possible datasets encountered in the mining and processing industry from clean simulated data to noisy data with poor correlation. The algorithm successfully developed neural network models within hours from hourly raw PI and/or daily assay data with no human intervention. These models derived from process data have minimal errors as low as < 3 % for major valuables like Ni and Cu, 6–7 % for by-products like Au, 8–10 % for deleterious minerals like MgO, and 5–8 % for gangue. The algorithm was also designed so that expert minerals processing knowledge can influence the pipeline to improve the quality of models. As a result, the AutoML algorithm becomes a powerful tool for mining and mineral processing experts to apply their domain knowledge of the process to develop models of equipment or processing circuits.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bitzer, Matthias; Meister, Mona; Zimmer, Christoph
Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-11836,
title = {Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport},
author = {Matthias Bitzer and Mona Meister and Christoph Zimmer},
url = {https://doi.org/10.48550/arXiv.2210.11836},
doi = {10.48550/arXiv.2210.11836},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.11836},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Pujari, Keerthi Nagasree; Miriyala, Srinivas Soumitri; Mittal, Prateek; Mitra, Kishalay
Better Wind forecasting using Evolutionary Neural Architecture Search driven Green Deep Learning Journal Article
In: Expert Systems with Applications, pp. 119063, 2022, ISSN: 0957-4174.
@article{NAGASREEPUJARI2022119063,
title = {Better Wind forecasting using Evolutionary Neural Architecture Search driven Green Deep Learning},
author = {Keerthi Nagasree Pujari and Srinivas Soumitri Miriyala and Prateek Mittal and Kishalay Mitra},
url = {https://www.sciencedirect.com/science/article/pii/S0957417422020814},
doi = {10.1016/j.eswa.2022.119063},
issn = {0957-4174},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Expert Systems with Applications},
pages = {119063},
abstract = {Climate Change heavily impacts global cities, the downsides of which can be minimized by adopting renewables like wind energy. However, despite its advantages, the nonlinear nature of wind renders the forecasting approaches to design and control wind farms ineffective. To expand the research horizon, the current study a) analyses and performs statistical decomposition of real-world wind time-series data, b) presents the application of Long Short-Term Memory (LSTM) networks, Nonlinear Auto-Regressive (NAR) models, and Wavelet Neural Networks (WNN) as efficient models for accurate wind forecasting with a comprehensive comparison among them to justify their application and c) proposes an evolutionary multi-objective strategy for Neural Architecture Search (NAS) to minimize the computational cost associated with training and inferring the networks which form the central theme of Green Deep Learning. Balancing the trade-off between parsimony and prediction accuracy, the proposed NAS strategy could optimally design NAR, WNN, and LSTM models with a mean test accuracy of 99%. The robust methodologies discussed in this work not only accurately model the wind behavior but also provide a green & generic approach for designing Deep Neural Networks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Sheng; Guo, Lixiang; Fan, Jing; Zhang, Xin; Zhang, Weiming
Exploring neural architecture search for text classification Proceedings Article
In: Zhang, Tao (Ed.): 7th International Symposium on Advances in Electrical, Electronics, and Computer Engineering, pp. 122945T, International Society for Optics and Photonics SPIE, 2022.
@inproceedings{10.1117/12.2639851,
title = {Exploring neural architecture search for text classification},
author = {Sheng Zhang and Lixiang Guo and Jing Fan and Xin Zhang and Weiming Zhang},
editor = {Tao Zhang},
url = {https://doi.org/10.1117/12.2639851},
doi = {10.1117/12.2639851},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {7th International Symposium on Advances in Electrical, Electronics, and Computer Engineering},
volume = {12294},
pages = {122945T},
publisher = {SPIE},
organization = {International Society for Optics and Photonics},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Ge, J.; Guo, D.; Ye, X.; Song, Y.; Hua, X.; Lu, L.; Lin, C. Y.; Jin, D.; Ho, T. Y.
Dosimetry Validation Study for Automated Head and Neck Cancer Organs at Risk Segmentation Using Stratified Learning and Neural Architecture Search Journal Article
In: International Journal of Radiation Oncology*Biology*Physics, vol. 114, no. 3, Supplement, pp. e583, 2022, ISSN: 0360-3016, (ASTRO Annual 2022 Meeting).
@article{GE2022e583,
title = {Dosimetry Validation Study for Automated Head and Neck Cancer Organs at Risk Segmentation Using Stratified Learning and Neural Architecture Search},
author = {J. Ge and D. Guo and X. Ye and Y. Song and X. Hua and L. Lu and C. Y. Lin and D. Jin and T. Y. Ho},
url = {https://www.sciencedirect.com/science/article/pii/S0360301622030115},
doi = {10.1016/j.ijrobp.2022.07.2255},
issn = {0360-3016},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {International Journal of Radiation Oncology*Biology*Physics},
volume = {114},
number = {3, Supplement},
pages = {e583},
abstract = {Purpose/Objective(s)
Organs at risk (OARs) segmentation is an essential process in head and neck (H&N) cancer radiotherapy. We have reported high automated segmentation geometric accuracy of Stratified Learning and Neural Architecture Search method in terms of Dice Score (DSC) from our previous study. In this study, we would evaluate the dosimetric influence of our automated approach before integrating this method into the clinical workflow.
Materials/Methods
To measure the dosimetric effects brought by the OARs’ variance, the intensity-modulated radiotherapy (IMRT) dose plans of 10 head and neck cancer patients were replanned using the original tumor target volumes and three substitute OAR contours permutations (deep learning generated stratified organs at risk segmentation (SOARS), SOARS revised by physician (SOARS-revised), and OAR delineated from scratch by physician (human reader)). We further examined the clinical dosimetric accuracy and the clinical reference OAR contours were overlaid on top of each replanned dose grid to evaluate the dosimetric differences.
Results
After replanning, SOARS and SOARS-revised contours have slightly smaller Diff (max dose) as compared to human reader contours (3.4%, 3.5% vs. 4.1%). For the Diff (mean dose), human reader, SOARS, SOARS-revised achieves similar results, i.e., 5.3%, 5.0%, and 5.0%, respectively. However, more OARs from the human reader have dose variations larger than 10% or 20% as compared to SOARS and SOARS-revised. Overall, our results indicate that using OAR contours from human reader, SOARS, and SOARS-revised lead to generally comparable dose accuracy in clinical practice. SOAR-related OAR contours have fewer OARs with dose error larger than 10% or 20%.
Conclusion
This study further validates the clinical applicability of a deep learning based automated H&N OAR segmentation method by comparing dosimetry of plans using OAR contours generated automatically and by a human reader to the gold standard contours. The dose variations calculated after planning on automated segmentation contours are less than 5%. Our proposed automated H&N OAR segmentation method not only achieves high geometric accuracy but also helps deliver treatment beams with little variances.},
note = {ASTRO Annual 2022 Meeting},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Huang, Junhao; Xue, Bing; Sun, Yanan; Zhang, Mengjie; Yen, Gary G.
Particle Swarm Optimization for Compact Neural Architecture Search for Image Classification Journal Article
In: IEEE Transactions on Evolutionary Computation, pp. 1-1, 2022.
@article{9930866,
title = {Particle Swarm Optimization for Compact Neural Architecture Search for Image Classification},
author = {Junhao Huang and Bing Xue and Yanan Sun and Mengjie Zhang and Gary G. Yen},
url = {https://ieeexplore.ieee.org/abstract/document/9930866},
doi = {10.1109/TEVC.2022.3217290},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {IEEE Transactions on Evolutionary Computation},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Guangyuan; Li, Yangyang; Chen, Yanqiao; Shang, Ronghua; Jiao, Licheng
Pol-NAS: A Neural Architecture Search Method With Feature Selection for PolSAR Image Classification Journal Article
In: IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens., vol. 15, pp. 9339–9354, 2022.
@article{DBLP:journals/staeors/LiuLCSJ22,
title = {Pol-NAS: A Neural Architecture Search Method With Feature Selection for PolSAR Image Classification},
author = {Guangyuan Liu and Yangyang Li and Yanqiao Chen and Ronghua Shang and Licheng Jiao},
url = {https://doi.org/10.1109/JSTARS.2022.3217047},
doi = {10.1109/JSTARS.2022.3217047},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens.},
volume = {15},
pages = {9339--9354},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Shu, Xin; Niu, Mengxuan; Zhang, Yi; Zhou, Renjie
NAS-PRNet: Neural Architecture Search generated Phase Retrieval Net for Off-axis Quantitative Phase Imaging Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-14231,
title = {NAS-PRNet: Neural Architecture Search generated Phase Retrieval Net for Off-axis Quantitative Phase Imaging},
author = {Xin Shu and Mengxuan Niu and Yi Zhang and Renjie Zhou},
url = {https://doi.org/10.48550/arXiv.2210.14231},
doi = {10.48550/arXiv.2210.14231},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.14231},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Qiu, Xin; Miikkulainen, Risto
Shortest Edit Path Crossover: A Theory-driven Solution to the Permutation Problem in Evolutionary Neural Architecture Search Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-14016,
title = {Shortest Edit Path Crossover: A Theory-driven Solution to the Permutation Problem in Evolutionary Neural Architecture Search},
author = {Xin Qiu and Risto Miikkulainen},
url = {https://doi.org/10.48550/arXiv.2210.14016},
doi = {10.48550/arXiv.2210.14016},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.14016},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Du, Mingyang; Zhong, Ping; Cai, Xiaohao; Bi, Daping; Li, Zhifei
Balanced neural architecture search and optimization for specific emitter identification Proceedings Article
In: 2022 IEEE 12th International Conference on RFID Technology and Applications (RFID-TA), pp. 220-223, 2022.
@inproceedings{9924146,
title = {Balanced neural architecture search and optimization for specific emitter identification},
author = {Mingyang Du and Ping Zhong and Xiaohao Cai and Daping Bi and Zhifei Li},
url = {https://ieeexplore.ieee.org/abstract/document/9924146},
doi = {10.1109/RFID-TA54958.2022.9924146},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 IEEE 12th International Conference on RFID Technology and Applications (RFID-TA)},
pages = {220-223},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Abdelgawad, M.; Mozafari, S. H.; Clark, J. J.; Meyer, B. H.; Gross, W. J.
BERTPerf: Inference Latency Predictor for BERT on ARM big.LITTLE Multi-Core Processors Proceedings Article
In: 2022 IEEE Workshop on Signal Processing Systems (SiPS), pp. 1-6, 2022.
@inproceedings{9919203,
title = {BERTPerf: Inference Latency Predictor for BERT on ARM big.LITTLE Multi-Core Processors},
author = {M. Abdelgawad and S. H. Mozafari and J. J. Clark and B. H. Meyer and W. J. Gross},
url = {https://ieeexplore.ieee.org/abstract/document/9919203},
doi = {10.1109/SiPS55645.2022.9919203},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 IEEE Workshop on Signal Processing Systems (SiPS)},
pages = {1-6},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Jin, Xiu; Ba, Wenjing; Wang, Lianglong; Zhang, Tong; Zhang, Xiaodan; Li, Shaowen; Rao, Yuan; Liu, Li
A Novel Tran_NAS Method for the Identification of Fe- and Mg-Deficient Pear Leaves from N- and P-Deficient Pear Leaf Data Journal Article
In: ACS Omega, vol. 7, no. 44, pp. 39727-39741, 2022.
@article{doi:10.1021/acsomega.2c03596,
title = {A Novel Tran_NAS Method for the Identification of Fe- and Mg-Deficient Pear Leaves from N- and P-Deficient Pear Leaf Data},
author = {Xiu Jin and Wenjing Ba and Lianglong Wang and Tong Zhang and Xiaodan Zhang and Shaowen Li and Yuan Rao and Li Liu},
url = {https://doi.org/10.1021/acsomega.2c03596},
doi = {10.1021/acsomega.2c03596},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {ACS Omega},
volume = {7},
number = {44},
pages = {39727-39741},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Yanxi; Dong, Minjing; Wang, Yunhe; Xu, Chang
Neural Architecture Search via Proxy Validation Journal Article
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-16, 2022.
@article{9931480,
title = {Neural Architecture Search via Proxy Validation},
author = {Yanxi Li and Minjing Dong and Yunhe Wang and Chang Xu},
doi = {10.1109/TPAMI.2022.3217648},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
pages = {1-16},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yuan, Liuchun; Huang, Zehao; Wang, Naiyan
PredNAS: A Universal and Sample Efficient Neural Architecture Search Framework Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-14460,
title = {PredNAS: A Universal and Sample Efficient Neural Architecture Search Framework},
author = {Liuchun Yuan and Zehao Huang and Naiyan Wang},
url = {https://doi.org/10.48550/arXiv.2210.14460},
doi = {10.48550/arXiv.2210.14460},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.14460},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Spiridonov, Anton; Akin, Berkin; Xu, Hao; White, Marie Charisse; Zhou, Ping; Gupta, Suyog; Zhou, Yanqi; Long, Yun; Wang, Zhuo
Searching for Efficient Neural Architectures for On-Device ML on Edge TPUs Proceedings Article
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022.
@inproceedings{51797,
title = {Searching for Efficient Neural Architectures for On-Device ML on Edge TPUs},
author = {Anton Spiridonov and Berkin Akin and Hao Xu and Marie Charisse White and Ping Zhou and Suyog Gupta and Yanqi Zhou and Yun Long and Zhuo Wang},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Lourens, Matt; Sinayskiy, Ilya; Park, Daniel K.; Blank, Carsten; Petruccione, Francesco
Architecture representations for quantum convolutional neural networks Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-15073,
title = {Architecture representations for quantum convolutional neural networks},
author = {Matt Lourens and Ilya Sinayskiy and Daniel K. Park and Carsten Blank and Francesco Petruccione},
url = {https://doi.org/10.48550/arXiv.2210.15073},
doi = {10.48550/arXiv.2210.15073},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.15073},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yang, Zhi; Li, Zheyang
Efficient Channel Pruning via Architecture-Guided Search Space Shrinking Proceedings Article
In: Yu, Shiqi; Zhang, Zhaoxiang; Yuen, Pong C.; Han, Junwei; Tan, Tieniu; Guo, Yike; Lai, Jianhuang; Zhang, Jianguo (Ed.): Pattern Recognition and Computer Vision, pp. 540–551, Springer International Publishing, Cham, 2022, ISBN: 978-3-031-18907-4.
@inproceedings{10.1007/978-3-031-18907-4_42,
title = {Efficient Channel Pruning via Architecture-Guided Search Space Shrinking},
author = {Zhi Yang and Zheyang Li},
editor = {Shiqi Yu and Zhaoxiang Zhang and Pong C. Yuen and Junwei Han and Tieniu Tan and Yike Guo and Jianhuang Lai and Jianguo Zhang},
url = {https://link.springer.com/chapter/10.1007/978-3-031-18907-4_42},
isbn = {978-3-031-18907-4},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Pattern Recognition and Computer Vision},
pages = {540--551},
publisher = {Springer International Publishing},
address = {Cham},
abstract = {Recently, channel pruning methods search for the optimal channel numbers by training a weight-sharing network to evaluate architectures of subnetworks. However, the weight shared between subnetworks incurs severe evaluation bias and an accuracy drop. In this paper, we provide a comprehensive understanding of the search space's impact on the evaluation by dissecting the training process of the weight-sharing network analytically. Specifically, it is proved that the sharing weights induce biased noise on gradients, whose magnitude is proportional to the search range of channel numbers and bias is relative to the average channel numbers of the search space. Motivated by the theoretical result, we design a channel pruning method by training a weight-sharing network with search space shrinking. The search space is iteratively shrunk guided by the optimal architecture searched in the weight-sharing network. The reduced search space boosts the accuracy of the evaluation and significantly cuts down the post-processing computation of finetuning. In the end, we demonstrate the superiority of our channel pruning method over state-of-the-art methods with experiments on ImageNet and COCO.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Bublea, Adrian; Caleanu, Catalin-Daniel
AutoML and Neural Architecture Search for Gaze Estimation Proceedings Article
In: 16th IEEE International Symposium on Applied Computational Intelligence and Informatics, SACI 2022, Timisoara, Romania, May 25-28, 2022, pp. 143–148, IEEE, 2022.
@inproceedings{DBLP:conf/saci/BubleaC22,
title = {AutoML and Neural Architecture Search for Gaze Estimation},
author = {Adrian Bublea and Catalin-Daniel Caleanu},
url = {https://doi.org/10.1109/SACI55618.2022.9919471},
doi = {10.1109/SACI55618.2022.9919471},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {16th IEEE International Symposium on Applied Computational Intelligence and Informatics, SACI 2022, Timisoara, Romania, May 25-28, 2022},
pages = {143--148},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Wang, Zhen; Du, Haotong; Yao, Quanming; Li, Xuelong
Search to Pass Messages for Temporal Knowledge Graph Completion Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2210-16740,
title = {Search to Pass Messages for Temporal Knowledge Graph Completion},
author = {Zhen Wang and Haotong Du and Quanming Yao and Xuelong Li},
url = {https://doi.org/10.48550/arXiv.2210.16740},
doi = {10.48550/arXiv.2210.16740},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2210.16740},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Amein, Marihan; Xiong, Zhuoran; Therrien, Olivier; Meyer, Brett H.; Gross, Warren J.
Work-in-Progress: SuperNAS: Fast Multi-Objective SuperNet Architecture Search for Semantic Segmentation Proceedings Article
In: 2022 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), pp. 35-36, 2022.
@inproceedings{9933156,
title = {Work-in-Progress: SuperNAS: Fast Multi-Objective SuperNet Architecture Search for Semantic Segmentation},
author = {Marihan Amein and Zhuoran Xiong and Olivier Therrien and Brett H. Meyer and Warren J. Gross},
url = {https://ieeexplore.ieee.org/abstract/document/9933156},
doi = {10.1109/CASES55004.2022.00024},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES)},
pages = {35-36},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhang, Xuchong; Dai, He; Chen, Jianing; Sun, Hongbin
Efficient Backbone Architecture Search for Stereo Depth Estimation in Autonomous Driving Proceedings Article
In: 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), pp. 355-362, 2022.
@inproceedings{9922562,
title = {Efficient Backbone Architecture Search for Stereo Depth Estimation in Autonomous Driving},
author = {Xuchong Zhang and He Dai and Jianing Chen and Hongbin Sun},
url = {https://ieeexplore.ieee.org/abstract/document/9922562},
doi = {10.1109/ITSC55140.2022.9922562},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
pages = {355-362},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Qian, Yaguan; Huang, Shenghui; Wang, Bin; Ling, Xiang; Guan, Xiaohui; Gu, Zhaoquan; Zeng, Shaoning; Zhou, Wujie; Wang, Haijiang
Robust Network Architecture Search via Feature Distortion Restraining Proceedings Article
In: Avidan, Shai; Brostow, Gabriel; Cissé, Moustapha; Farinella, Giovanni Maria; Hassner, Tal (Ed.): Computer Vision -- ECCV 2022, pp. 122–138, Springer Nature Switzerland, Cham, 2022, ISBN: 978-3-031-20065-6.
@inproceedings{10.1007/978-3-031-20065-6_8,
title = {Robust Network Architecture Search via Feature Distortion Restraining},
author = {Yaguan Qian and Shenghui Huang and Bin Wang and Xiang Ling and Xiaohui Guan and Zhaoquan Gu and Shaoning Zeng and Wujie Zhou and Haijiang Wang},
editor = {Shai Avidan and Gabriel Brostow and Moustapha Cissé and Giovanni Maria Farinella and Tal Hassner},
url = {https://link.springer.com/chapter/10.1007/978-3-031-20065-6_8},
isbn = {978-3-031-20065-6},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Computer Vision -- ECCV 2022},
pages = {122--138},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {The vulnerability of Deep Neural Networks, i.e., susceptibility to adversarial attacks, severely limits the application of DNNs in security-sensitive domains. Most of existing methods improve model robustness from weight optimization, such as adversarial training. However, the architecture of DNNs is also a key factor to robustness, which is often neglected or underestimated. We propose Robust Network Architecture Search (RNAS) to obtain a robust network against adversarial attacks. We observe that an adversarial perturbation distorting the non-robust features in latent feature space can further aggravate misclassification. Based on this observation, we search the robust architecture through restricting feature distortion in the search process. Specifically, we define a network vulnerability metric based on feature distortion as a constraint in the search process. This process is modeled as a multi-objective bilevel optimization problem and a novel algorithm is proposed to solve this optimization. Extensive experiments conducted on CIFAR-10/100 and SVHN show that RNAS achieves the best robustness under various adversarial attacks compared with extensive baselines and SOTA methods.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Schrodi, Simon; Stoll, Danny; Ru, Binxin; Sukthanker, Rhea; Brox, Thomas; Hutter, Frank
Towards Discovering Neural Architectures from Scratch Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2211-01842,
title = {Towards Discovering Neural Architectures from Scratch},
author = {Simon Schrodi and Danny Stoll and Binxin Ru and Rhea Sukthanker and Thomas Brox and Frank Hutter},
url = {https://doi.org/10.48550/arXiv.2211.01842},
doi = {10.48550/arXiv.2211.01842},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2211.01842},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
C., Vishak Prasad; White, Colin; Jain, Paarth; Nayak, Sibasis; Ramakrishnan, Ganesh
Speeding up NAS with Adaptive Subset Selection Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2211-01454,
title = {Speeding up NAS with Adaptive Subset Selection},
author = {Vishak Prasad C. and Colin White and Paarth Jain and Sibasis Nayak and Ganesh Ramakrishnan},
url = {https://doi.org/10.48550/arXiv.2211.01454},
doi = {10.48550/arXiv.2211.01454},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2211.01454},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Ang, Koon Meng; El-kenawy, El-Sayed M.; Abdelhamid, Abdelaziz A.; Ibrahim, Abdelhameed; Alharbi, Amal H.; Khafaga, Doaa Sami; Tiang, Sew Sun; Lim, Wei Hong
Optimal Design of Convolutional Neural Network Architectures Using Teaching–Learning-Based Optimization for Image Classification Journal Article
In: Symmetry, vol. 14, no. 11, 2022, ISSN: 2073-8994.
@article{sym14112323,
title = {Optimal Design of Convolutional Neural Network Architectures Using Teaching-Learning-Based Optimization for Image Classification},
author = {Koon Meng Ang and El-Sayed M. El-kenawy and Abdelaziz A. Abdelhamid and Abdelhameed Ibrahim and Amal H. Alharbi and Doaa Sami Khafaga and Sew Sun Tiang and Wei Hong Lim},
url = {https://www.mdpi.com/2073-8994/14/11/2323},
doi = {10.3390/sym14112323},
issn = {2073-8994},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Symmetry},
volume = {14},
number = {11},
abstract = {Convolutional neural networks (CNNs) have exhibited significant performance gains over conventional machine learning techniques in solving various real-life problems in computational intelligence fields, such as image classification. However, most existing CNN architectures were handcrafted from scratch and required significant amounts of problem domain knowledge from designers. A novel deep learning method abbreviated as TLBOCNN is proposed in this paper by leveraging the excellent global search ability of teaching-learning-based optimization (TLBO) to obtain an optimal design of network architecture for a CNN based on the given dataset with symmetrical distribution of each class of data samples. A variable-length encoding scheme is first introduced in TLBOCNN to represent each learner as a potential CNN architecture with different layer parameters. During the teacher phase, a new mainstream architecture computation scheme is designed to compute the mean parameter values of CNN architectures by considering the information encoded into the existing population members with variable lengths. The new mechanisms of determining the differences between two learners with variable lengths and updating their positions are also devised in both the teacher and learner phases to obtain new learners. Extensive simulation studies report that the proposed TLBOCNN achieves symmetrical performance in classifying the majority of MNIST-variant datasets, displays the highest accuracy, and produces CNN models with the lowest complexity levels compared to other state-of-the-art methods due to its promising search ability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Dichao; Yamasaki, Toshihiko; Wang, Yu; Mase, Kenji; Kato, Jien
Toward Extremely Lightweight Distracted Driver Recognition With Distillation-Based Neural Architecture Search and Knowledge Transfer Journal Article
In: IEEE Transactions on Intelligent Transportation Systems, pp. 1-14, 2022.
@article{9940550,
title = {Toward Extremely Lightweight Distracted Driver Recognition With Distillation-Based Neural Architecture Search and Knowledge Transfer},
author = {Dichao Liu and Toshihiko Yamasaki and Yu Wang and Kenji Mase and Jien Kato},
url = {https://ieeexplore.ieee.org/document/9940550},
doi = {10.1109/TITS.2022.3217342},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {IEEE Transactions on Intelligent Transportation Systems},
pages = {1-14},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Haichao; Li, Jiashi; Xia, Xin; Hao, Kuangrong; Xiao, Xuefeng
Multi-Objective Evolutionary for Object Detection Mobile Architectures Search Technical Report
2022.
@techreport{DBLP:journals/corr/abs-2211-02791,
title = {Multi-Objective Evolutionary for Object Detection Mobile Architectures Search},
author = {Haichao Zhang and Jiashi Li and Xin Xia and Kuangrong Hao and Xuefeng Xiao},
url = {https://doi.org/10.48550/arXiv.2211.02791},
doi = {10.48550/arXiv.2211.02791},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {CoRR},
volume = {abs/2211.02791},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Jialin; Chen, Renxiang; Huang, Xianzhen; Qu, Yongzhi
Development of Deep Residual Neural Networks for Gear Pitting Fault Diagnosis Using Bayesian Optimization Journal Article
In: IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-15, 2022.
@article{9938963,
title = {Development of Deep Residual Neural Networks for Gear Pitting Fault Diagnosis Using Bayesian Optimization},
author = {Jialin Li and Renxiang Chen and Xianzhen Huang and Yongzhi Qu},
doi = {10.1109/TIM.2022.3219476},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {IEEE Transactions on Instrumentation and Measurement},
volume = {71},
pages = {1-15},
keywords = {},
pubstate = {published},
tppubtype = {article}
}