Subnet-Aware Dynamic Supernet Training for Neural Architecture Search

CVPR 2025

¹Yonsei University   ²Articron Inc.   ³Samsung Research   ⁴Samsung Advanced Institute of Technology   ⁵Chung-Ang University
Illustrations of the challenges of N-shot NAS methods. (a) We visualize validation losses at training time for subnets of different complexities. Existing methods do not account for the distinct optimization speeds of subnets with respect to their complexities. This causes an unfairness problem: the high-complexity subnet is trained insufficiently, and its predicted performance falls behind that of the low-complexity one, even though it could provide better performance. (b) We illustrate the gradients \( g^t \) of subnets and the momentum \( \mu^t \) at the \( t \)-th iteration. The gradients vary across subnets, resulting in a noisy momentum and preventing a stable training process.

Abstract

N-shot neural architecture search (NAS) exploits a supernet containing all candidate subnets for a given search space. The subnets are typically trained with a static training strategy (e.g., using the same learning rate (LR) scheduler and optimizer for all subnets). This, however, does not consider that individual subnets have distinct characteristics, leading to two problems: (1) the supernet training is biased towards low-complexity subnets (unfairness); (2) the momentum update in the supernet is noisy (noisy momentum). We present a dynamic supernet training technique that addresses these problems by adjusting the training strategy adaptively to each subnet. Specifically, we introduce a complexity-aware LR scheduler (CaLR) that controls the LR decay ratio according to the complexity of each subnet, alleviating the unfairness problem. We also present a momentum separation technique (MS) that groups subnets with similar structural characteristics and uses a separate momentum for each group, avoiding the noisy momentum problem. Our approach is applicable to various N-shot NAS methods at marginal cost, while drastically improving the search performance. We validate the effectiveness of our approach on various search spaces (e.g., NAS-Bench-201, MobileNet spaces) and datasets (e.g., CIFAR-10/100, ImageNet).
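For intuition, the sketch below illustrates the two ingredients described above in simplified form. It is not the paper's implementation: it assumes a cosine base schedule, a subnet complexity normalized to [0, 1] (e.g., relative FLOPs), and a simple complexity-based bucketing for momentum separation; the exponent mapping, the knob `min_ratio`, and the class `GroupedMomentumSGD` are illustrative names only.

```python
import math
import torch


def calr_lr(base_lr, step, total_steps, complexity, min_ratio=0.5):
    """Complexity-aware LR sketch: slow the LR decay for high-complexity
    subnets so they are not under-trained relative to low-complexity ones.

    complexity: normalized subnet complexity in [0, 1] (e.g., relative FLOPs).
    min_ratio:  hypothetical knob; the decay exponent shrinks from 1.0
                (simplest subnet) to min_ratio (most complex subnet).
    """
    progress = min(step / max(1, total_steps), 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))  # standard cosine decay in [0, 1]
    exponent = 1.0 - (1.0 - min_ratio) * complexity      # smaller exponent -> slower decay
    return base_lr * (cosine ** exponent)


class GroupedMomentumSGD:
    """Momentum-separation sketch: bucket subnets by a structural statistic
    (here, their normalized complexity) and keep one momentum buffer per
    bucket, so gradients from dissimilar subnets do not share a single,
    noisy momentum estimate."""

    def __init__(self, params, lr=0.05, momentum=0.9, num_groups=3):
        self.params = list(params)
        self.lr, self.momentum, self.num_groups = lr, momentum, num_groups
        # One momentum buffer per parameter, per group.
        self.buffers = [[torch.zeros_like(p) for p in self.params]
                        for _ in range(num_groups)]

    def step(self, complexity, lr=None):
        """SGD-with-momentum update using the buffer of the group that the
        sampled subnet falls into."""
        group = min(int(complexity * self.num_groups), self.num_groups - 1)
        lr = self.lr if lr is None else lr
        with torch.no_grad():
            for p, buf in zip(self.params, self.buffers[group]):
                if p.grad is None:
                    continue
                buf.mul_(self.momentum).add_(p.grad)   # group-specific momentum
                p.add_(buf, alpha=-lr)                 # update with that momentum
```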

Results

Quantitative comparison of different supernet training methods on the CIFAR-10, CIFAR-100, and ImageNet16-120 datasets in NAS-Bench-201. We report Kendall's Tau along with the top-1 accuracy (Top-1 Acc.) for each method. We also report the peak memory usage and the GPU hours for training supernets on CIFAR-10, measured with a single RTX 2080Ti. Results are reported as the average and standard deviation over 3 runs.
The table shows the search performance in the NAS-Bench-201 space in terms of ranking consistency and top-1 accuracy. The three baselines (SPOS, FairNAS, FSNAS) coupled with our dynamic supernet training method consistently provide better search performance with negligible additional search cost. Note that each baseline exploits a distinct supernet training strategy, e.g., sampling a single subnet (SPOS) or multiple subnets (FairNAS) at each training iteration, or using multiple sub-supernets (FSNAS). This suggests that our method can be applied in a plug-and-play manner across diverse supernet training algorithms, as illustrated in the sketch below.
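As a rough illustration of the plug-and-play integration, the following hypothetical loop drops the `calr_lr` scheduler and `GroupedMomentumSGD` sketch from above into an SPOS-style single-path training step. The `supernet`, `loader`, and `criterion` arguments, as well as the `sample_subnet`, `flops`, and `max_flops` interfaces, are placeholders and not APIs from any specific NAS library.

```python
def train_one_epoch(supernet, loader, criterion, optimizer,
                    base_lr, total_steps, start_step=0):
    """Hypothetical SPOS-style loop with subnet-aware LR and momentum."""
    for step, (images, labels) in enumerate(loader, start=start_step):
        subnet = supernet.sample_subnet()             # single-path sampling (SPOS-style)
        c = subnet.flops() / supernet.max_flops()     # normalized complexity in [0, 1]
        lr = calr_lr(base_lr, step, total_steps, complexity=c)

        for p in optimizer.params:                    # reset gradients
            p.grad = None
        loss = criterion(subnet(images), labels)
        loss.backward()
        optimizer.step(complexity=c, lr=lr)           # group-specific momentum update
```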

Paper

J. Jeon, Y. Oh, J. Lee, D. Baek, D. Kim, C. Eom, and B. Ham
Subnet-Aware Dynamic Supernet Training for Neural Architecture Search
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025
[arXiv][Code]

Acknowledgements

This work was partly supported by IITP grants funded by the Korea government (MSIT) (No. RS-2022-00143524, Development of Fundamental Technology and Integrated Solution for Next-Generation Automatic Artificial Intelligence System; No. 2022-0-00124, RS-2022-II220124, Development of Artificial Intelligence Technology for Self-Improving Competency-Aware Learning Capabilities) and the Yonsei Signature Research Cluster Program of 2024 (2024-22-0161).