EPC-DARTS: Efficient partial channel connection for differentiable architecture search.

Abstract

With weight-sharing and continuous relaxation strategies, differentiable architecture search (DARTS) offers a fast and effective way to perform neural architecture search across a variety of deep learning tasks. However, two unresolved issues still trouble researchers and practitioners: memory utilization is inefficient, and the searched architectures are unstable because channels are selected at random, which can even cause performance collapse. In this paper, a novel efficient channel attention mechanism based on partial channel connection for differentiable neural architecture search, termed EPC-DARTS, is proposed to address these two issues. Specifically, we design an efficient channel attention module that captures cross-channel interactions and assigns weights according to channel importance, dramatically improving search efficiency and reducing memory occupation. Moreover, only the partial channels with the highest attention weights enter the mixed-operation computation, so the unstable architectures produced by random channel selection are avoided. Experimental results show that EPC-DARTS achieves remarkably competitive performance (test accuracy of 97.60% on CIFAR-10 and 84.02% on CIFAR-100) compared with other state-of-the-art NAS methods, using only 0.2 GPU-days.
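The abstract describes two moving parts: an efficient channel attention module that scores channels by importance, and a partial channel connection that routes only the top-weighted channels through the mixed operation, replacing the random channel sampling used in PC-DARTS. Below is a minimal PyTorch sketch of that idea under stated assumptions; the paper's implementation is not reproduced here, so the names (`EfficientChannelAttention`, `k_size`, `channel_fraction`) and the ECA-style 1D-convolution design are illustrative, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): an ECA-style
# module scores channels, and only the top C/channel_fraction channels
# are kept for the mixed operation; the rest bypass it unchanged.
import torch
import torch.nn as nn


class EfficientChannelAttention(nn.Module):
    """Capture cross-channel interactions with a 1D conv over
    globally pooled channel descriptors (ECA-style, assumed)."""

    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> per-channel weights in (0, 1), shape (N, C)
        y = self.pool(x).squeeze(-1).transpose(1, 2)   # (N, 1, C)
        y = self.conv(y).transpose(1, 2).squeeze(-1)   # (N, C)
        return torch.sigmoid(y)


def select_partial_channels(x: torch.Tensor,
                            attn: EfficientChannelAttention,
                            channel_fraction: int = 4):
    """Keep the C // channel_fraction channels with the highest
    attention weights instead of a random subset."""
    n, c, h, w = x.shape
    k = c // channel_fraction
    weights = attn(x).mean(dim=0)           # (C,) averaged over the batch
    top_idx = weights.topk(k).indices       # indices of important channels
    mask = torch.zeros(c, dtype=torch.bool, device=x.device)
    mask[top_idx] = True
    return x[:, mask], x[:, ~mask]          # (selected, bypassed)
```

In a PC-DARTS-style cell, the selected tensor would then pass through the architecture-weighted sum of candidate operations and be concatenated back with the bypassed channels; the attention-based top-k selection is what the abstract credits with avoiding the instability of random sampling.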
