Open Access
FreMix: Frequency-Based Mixup for Data Augmentation
Author(s) - Yang Xiu, Xinyi Zheng, Linlin Sun, Zhuohao Fang
Publication year - 2022
Publication title - Wireless Communications and Mobile Computing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.42
H-Index - 64
eISSN - 1530-8677
pISSN - 1530-8669
DOI - 10.1155/2022/5323327
Subject(s) - computer science , artificial intelligence , frequency domain , weighting , benchmark (surveying) , generalization , hyperparameter , fast fourier transform , rotation (mathematics) , machine learning , image (mathematics) , pattern recognition (psychology) , data mining , computer vision , algorithm
Deep learning models have attracted tremendous attention in computer vision in recent years, yet most of them rely heavily on massive training data. As one solution to the sparse-data problem, data augmentation techniques such as image translation and rotation can substantially improve a model’s generalization ability and performance. However, on one hand, these approaches operate primarily in the pixel domain, which limits their ability to fully mine and fuse image information from the frequency perspective. On the other hand, the fusion weighting factors are usually tuned manually, which raises application costs in practice. To this end, we propose in this paper a novel method, frequency-based Mixup (FreMix), which fuses images in the frequency domain and improves the efficiency of data augmentation by adaptively adjusting the weighting coefficients. In FreMix, a fast Fourier transform (FFT) is first applied to the input image, so that frequency information rather than raw pixel information is extracted for further augmentation. In addition, an exploration-exploitation training paradigm is adopted, so that FreMix can be trained periodically to facilitate learning and avoid manual hyperparameter settings. We conduct comparative experiments on three benchmark datasets, including CIFAR, ImageNet, and ILSVRC2015, and the experimental results validate the effectiveness of the proposed method.
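The abstract's core idea, fusing two images in the frequency domain via an FFT rather than blending raw pixels, can be sketched as follows. This is a minimal illustrative variant, not the paper's exact formulation: it mixes the amplitude spectra of two images with a fixed weight `lam` (a hypothetical stand-in for the paper's adaptively learned coefficient) while keeping the phase of the first image.

```python
import numpy as np

def fremix(img_a, img_b, lam=0.5):
    """Frequency-domain image fusion (illustrative sketch only).

    Transforms both inputs with a 2-D FFT, blends their amplitude
    spectra with weight `lam`, reattaches the phase of `img_a`, and
    inverts the transform. The paper's adaptive, periodically trained
    weighting is not reproduced here; `lam` is a fixed assumption.
    """
    # Transform along the spatial axes; a trailing channel axis, if
    # present, is handled independently.
    fa = np.fft.fft2(img_a, axes=(0, 1))
    fb = np.fft.fft2(img_b, axes=(0, 1))
    # Blend amplitude spectra; keep the phase of the first image.
    amp = lam * np.abs(fa) + (1.0 - lam) * np.abs(fb)
    mixed = amp * np.exp(1j * np.angle(fa))
    # The inverse FFT of a mixed spectrum is complex in general;
    # the real part serves as the augmented image.
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```

With `lam = 1.0` the blended spectrum reduces to that of `img_a`, so the original image is recovered; intermediate values yield augmented samples whose textures borrow frequency content from `img_b` while preserving the spatial structure encoded in `img_a`'s phase.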
