Open Access

DFF-ResNet: An Insect Pest Recognition Model Based on Residual Networks

School of Information Science and Technology and School of Transportation and Civil Engineering, Nantong University, Nantong 226019, China, and also with the Faculty of Engineering, Tokushima University, Tokushima 770-8506, Japan
School of Information Science and Technology, Nantong University, Nantong 226019, China
Faculty of Engineering, Tokushima University, Tokushima 770-8506, Japan

Abstract

Insect pest control is a significant factor in the yield of commercial crops, so a reliable method for insect pest recognition is needed to avoid economic losses. In this paper, we propose a feature fusion residual block for the insect pest recognition task. Building on the original residual block, we fuse features from a previous layer between the two 1×1 convolution layers of the residual signal branch to increase the capacity of the block. Furthermore, we explore the contribution of each residual group to model performance and find that adding residual blocks to earlier residual groups improves performance significantly and strengthens the model's generalization ability. By stacking the feature fusion residual block, we construct the Deep Feature Fusion Residual Network (DFF-ResNet). To demonstrate the validity and adaptability of our approach, we instantiate it with two common residual networks, the pre-activation ResNet (Pre-ResNet) and the Wide Residual Network (WRN), and validate these models on the Canadian Institute for Advanced Research (CIFAR) and Street View House Number (SVHN) benchmark datasets. The experimental results indicate that our models achieve lower test errors than the baseline models. We then apply our models to insect pest recognition and validate them on the IP102 benchmark dataset, where they outperform the original ResNet and other state-of-the-art methods.
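
The abstract's description of the feature fusion residual block lends itself to a short illustration. The PyTorch sketch below assumes a pre-activation bottleneck layout (1×1, 3×3, 1×1) and uses channel concatenation as the fusion operation; the class name `FeatureFusionBlock`, the concatenation operator, and the channel widths are assumptions made for illustration, not the paper's verified configuration.

```python
# Minimal sketch of a feature-fusion residual block, assuming a
# pre-activation bottleneck (1x1 -> 3x3 -> 1x1) where features from a
# previous layer are fused, by concatenation, between the two 1x1
# convolutions. Widths and the fusion operator are illustrative guesses.
import torch
import torch.nn as nn


class FeatureFusionBlock(nn.Module):
    def __init__(self, in_channels: int, bottleneck_channels: int):
        super().__init__()
        # First 1x1 convolution reduces the channel dimension.
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, bottleneck_channels, 1, bias=False)
        # 3x3 convolution operates at the reduced width.
        self.bn2 = nn.BatchNorm2d(bottleneck_channels)
        self.conv2 = nn.Conv2d(bottleneck_channels, bottleneck_channels, 3,
                               padding=1, bias=False)
        # Second 1x1 convolution sees the bottleneck features concatenated
        # with the previous-layer features, so its input width is
        # bottleneck_channels + in_channels.
        self.bn3 = nn.BatchNorm2d(bottleneck_channels + in_channels)
        self.conv3 = nn.Conv2d(bottleneck_channels + in_channels,
                               in_channels, 1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, prev: torch.Tensor) -> torch.Tensor:
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        # Fuse features from a previous layer between the two 1x1 convs.
        out = torch.cat([out, prev], dim=1)
        out = self.conv3(self.relu(self.bn3(out)))
        return x + out  # identity shortcut of the residual block


# Usage: here the block's own input stands in for the previous layer.
x = torch.randn(2, 64, 32, 32)
block = FeatureFusionBlock(in_channels=64, bottleneck_channels=16)
y = block(x, prev=x)
print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Note that concatenating the previous layer's features widens only the input of the second 1×1 convolution; the identity shortcut and the block's output width are unchanged, which is what allows such blocks to be stacked into a deeper network.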

References

[1]
X. P. Wu, C. Zhan, Y. K. Lai, M. M. Cheng, and J. F. Yang, IP102: A large-scale benchmark dataset for insect pest recognition, in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 8779-8788.
[2]
L. M. Deng, Y. J. Wang, Z. Z. Han, and R. S. Yu, Research on insect pest image detection and recognition based on bio-inspired methods, Biosystems Engineering, vol. 169, pp. 139-148, 2018.
[3]
K. Dimililer and S. Zarrouk, ICSPI: Intelligent classification system of pest insects based on image processing and neural arbitration, Applied Engineering in Agriculture, vol. 33, no. 4, pp. 453-460, 2017.
[4]
F. J. Ren, W. J. Liu, and G. Q. Wu, Feature reuse residual networks for insect pest recognition, IEEE Access, vol. 7, pp. 122758-122768, 2019.
[5]
A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.
[6]
K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770-778.
[7]
G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, Densely connected convolutional networks, in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 2261-2269.
[8]
Z. Q. Shen, Z. Liu, J. G. Li, Y. G. Jiang, Y. R. Chen, and X. Y. Xue, DSOD: Learning deeply supervised object detectors from scratch, in Proc. 2017 IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 1937-1945.
[9]
S. Zagoruyko and N. Komodakis, Wide residual networks, arXiv preprint arXiv:1605.07146, 2016.
[10]
D. Feng and F. J. Ren, Dynamic facial expression recognition based on two-stream-CNN with LBP-TOP, in Proc. 2018 5th IEEE Int. Conf. Cloud Computing and Intelligence Systems (CCIS), Nanjing, China, 2018, pp. 355-359.
[11]
F. J. Ren and J. W. Deng, Background knowledge based multi-stream neural network for text classification, Applied Sciences, vol. 8, no. 12, p. 2472, 2018.
[12]
Y. Kim, Convolutional neural networks for sentence classification, arXiv preprint arXiv:1408.5882, 2014.
[13]
R. Y. Zhang, F. R. Meng, Y. Zhou, and B. Liu, Relation classification via recurrent neural network with attention and tensor layers, Big Data Mining and Analytics, vol. 1, no. 3, pp. 234-244, 2018.
[14]
F. J. Ren, Y. D. Dong, and W. Wang, Emotion recognition based on physiological signals using brain asymmetry index and echo state network, Neural Computing and Applications, vol. 31, no. 9, pp. 4491-4501, 2019.
[15]
X. Kang, F. J. Ren, and Y. N. Wu, Exploring latent semantic information for textual emotion recognition in blog articles, IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 1, pp. 204-216, 2018.
[16]
M. Bouazizi and T. Ohtsuki, Multi-class sentiment analysis on Twitter: Classification performance and challenges, Big Data Mining and Analytics, vol. 2, no. 3, pp. 181-194, 2019.
[17]
S. Bell, C. L. Zitnick, K. Bala, and R. Girshick, Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 2874-2883.
[18]
S. Milz, G. Arbeiter, C. Witt, B. Abdallah, and S. Yogamani, Visual SLAM for automated driving: Exploring the applications of deep learning, in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 2018, pp. 360-370.
[19]
X. H. Cao, T. H. Li, H. L. Li, S. R. Xia, F. J. Ren, Y. Sun, and X. Y. Xu, A robust parameter-free thresholding method for image segmentation, IEEE Access, vol. 7, pp. 3448-3458, 2018.
[20]
N. N. Ma, X. Y. Zhang, H. T. Zheng, and J. Sun, ShuffleNet v2: Practical guidelines for efficient CNN architecture design, in Proc. European Conf. Computer Vision (ECCV), Munich, Germany, 2018, pp. 122-138.
[21]
K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, Identity mappings in deep residual networks, in Proc. European Conf. Computer Vision, Amsterdam, The Netherlands, 2016, pp. 630-645.
[22]
P. C. Ng and S. Henikoff, SIFT: Predicting amino acid changes that affect protein function, Nucleic Acids Research, vol. 31, no. 13, pp. 3812-3814, 2003.
[23]
N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, in Proc. 2005 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 2005, pp. 886-893.
[24]
R. K. Samanta and I. Ghosh, Tea insect pests classification based on artificial neural networks, International Journal of Computer Engineering Science (IJCES), vol. 2, no. 6, pp. 1-13, 2012.
[25]
M. Manoja and J. Rajalakshmi, Early detection of pest on leaves using support vector machine, International Journal of Electrical and Electronics Research, vol. 2, no. 4, pp. 187-194, 2014.
[26]
C. Szegedy, W. Liu, Y. Q. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, Going deeper with convolutions, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Boston, MA, USA, 2015, pp. 1-9.
[27]
R. Li, R. J. Wang, J. Zhang, C. J. Xie, L. Liu, F. Y. Wang, H. B. Chen, T. J. Chen, H. Y. Hu, X. F. Jia, et al., An effective data augmentation strategy for CNN-based pest localization and recognition in the field, IEEE Access, vol. 7, pp. 160274-160283, 2019.
[28]
K. Dimililer and S. Zarrouk, ICSPI: Intelligent classification system of pest insects based on image processing and neural arbitration, Applied Engineering in Agriculture, vol. 33, no. 4, pp. 453-460, 2017.
[29]
F. L. Shen, R. Gan, and G. Zeng, Weighted residuals for very deep networks, in Proc. 2016 3rd Int. Conf. Systems and Informatics (ICSAI), Shanghai, China, 2016, pp. 936-941.
[30]
G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger, Deep networks with stochastic depth, in Proc. European Conf. Computer Vision, Amsterdam, The Netherlands, 2016, pp. 646-661.
[31]
D. Han, J. Kim, and J. Kim, Deep pyramidal residual networks, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 6307-6315.
[32]
K. Zhang, M. Sun, T. X. Han, X. F. Yuan, L. R. Guo, and T. Liu, Residual networks of residual networks: Multilevel residual networks, IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 6, pp. 1303-1314, 2018.
[33]
G. Huang, S. C. Liu, L. van der Maaten, and K. Q. Weinberger, CondenseNet: An efficient DenseNet using learned group convolutions, in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 2752-2761.
[34]
K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, in Proc. 2015 IEEE Int. Conf. Computer Vision, Santiago, Chile, 2015, pp. 1026-1034.
[35]
H. Y. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, Mixup: Beyond empirical risk minimization, arXiv preprint arXiv:1710.09412, 2017.
[36]
A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.
[37]
K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556, 2014.
Cite this article:
Liu W, Wu G, Ren F, et al. DFF-ResNet: An Insect Pest Recognition Model Based on Residual Networks. Big Data Mining and Analytics, 2020, 3(4): 300-310. https://doi.org/10.26599/BDMA.2020.9020021

Received: 20 July 2020
Accepted: 22 September 2020
Published: 16 November 2020
© The authors 2020

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
