Few-shot classification models trained on clean samples classify real-world samples, which carry noise at various scales, poorly. To make models more robust to noisy samples, researchers typically rely on data augmentation or train on noisy samples generated by adversarial training. However, existing methods still have problems: (i) data augmentation improves model robustness only to a limited extent; (ii) the noise generated by adversarial training often causes overfitting and reduces the model's generalization ability, which is especially damaging in few-shot classification; and (iii) most existing methods cannot adaptively generate appropriate noise. To address these three issues, this paper proposes a noise-robust few-shot classification algorithm, VADA (Variational Adversarial Data Augmentation). Unlike existing methods, VADA uses a variational noise generator, trained adversarially, to produce an adaptive noise distribution for each sample, and optimizes the generator by minimizing the expectation of the empirical risk. Training with VADA makes few-shot classifiers more robust to noisy data while retaining generalization ability. Using FEAT and ProtoNet as baseline models, we verify accuracy on several common few-shot classification datasets, including MiniImageNet, TieredImageNet, and CUB. After training with VADA, the classification accuracy of the models improves on samples with various scales of noise.
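The min-max idea the abstract describes, a per-sample noise distribution whose parameters are updated adversarially while the classifier minimizes the expected loss on the perturbed samples, can be sketched as below. This is an illustrative toy under stated assumptions, not the paper's implementation: the variational noise generator network is replaced by directly optimized per-sample parameters (`mu`, `log_sigma`, reparameterized as `delta = mu + exp(log_sigma) * eps`), and the classifier is a linear softmax model with analytic gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy data: 8 samples, 4 features, 2 classes (one-hot labels)
X = rng.normal(size=(8, 4))
y = np.eye(2)[rng.integers(0, 2, size=8)]

W = rng.normal(scale=0.1, size=(4, 2))   # linear classifier weights
mu = np.zeros_like(X)                    # per-sample noise mean
log_sigma = np.full_like(X, -2.0)        # per-sample noise log-std

for step in range(100):
    # Reparameterized noise sample: delta = mu + sigma * eps
    eps = rng.normal(size=X.shape)
    sigma = np.exp(log_sigma)
    Xn = X + mu + sigma * eps

    # Cross-entropy gradients for the linear softmax classifier
    p = softmax(Xn @ W)
    dlogits = (p - y) / len(X)           # dLoss/dlogits
    dXn = dlogits @ W.T                  # dLoss/d(noisy input)

    # Adversarial step: gradient *ascent* on the noise parameters
    mu += 0.5 * dXn
    log_sigma += 0.5 * dXn * eps * sigma

    # Model step: gradient descent on the loss over noisy samples
    W -= 0.5 * Xn.T @ dlogits

clean_loss = -np.mean(np.sum(y * np.log(softmax(X @ W) + 1e-9), axis=1))
```

The two opposed updates inside the loop are the essential structure: the noise parameters chase perturbations that hurt the classifier, and the classifier is then trained on those perturbed samples, approximating a minimization of the expected empirical risk under the learned noise distribution.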
Computational Visual Media 2025, 11(1): 227-239
Published: 28 February 2025