Regular Paper

Weakly- and Semi-Supervised Fast Region-Based CNN for Object Detection

School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China

Abstract

Learning an effective object detector with little supervision is an essential but challenging problem in computer vision applications. In this paper, we consider the problem of learning a deep convolutional neural network (CNN) based object detector using weakly-supervised and semi-supervised information in the framework of the fast region-based CNN (Fast R-CNN). The goal is to obtain an object detector as accurate as the fully-supervised Fast R-CNN, but with less image annotation effort. To this end, we use weakly-supervised training images (i.e., only image-level annotations are given) together with a small proportion of fully-supervised training images (i.e., bounding-box annotations are given), which constitutes a weakly- and semi-supervised (WASS) object detection setting. The proposed solution, termed WASS R-CNN, has two main components: a weakly-supervised R-CNN is trained first, and the semi-supervised data are then used to fine-tune the weakly-supervised detector. We perform object detection experiments on the PASCAL VOC 2007 dataset. The proposed WASS R-CNN achieves more than 85% of the fully-supervised Fast R-CNN's performance (measured in mean average precision) with only 10% of the fully-supervised annotations plus weak supervision for all training images. The results show that the proposed learning framework can significantly reduce the labeling effort required to obtain reliable object detectors.
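The abstract describes a two-stage training schedule. The following is a minimal, hypothetical sketch of that schedule in PyTorch; it is not the paper's implementation. A generic MIL-style max pooling over proposal scores stands in for the weakly-supervised stage, and a plain per-proposal classification loss (box regression omitted) stands in for the Fast R-CNN fine-tuning stage on the small fully-annotated subset. The model, features, and labels are random stand-ins, so only the training order and the type of supervision at each stage are illustrated.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 20  # the 20 PASCAL VOC 2007 object categories

class ToyProposalScorer(nn.Module):
    # Stand-in for a Fast R-CNN classification head that scores each region
    # proposal; a real detector would also have a bounding-box regression branch.
    def __init__(self, feat_dim=128, num_classes=NUM_CLASSES):
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, proposal_feats):            # (num_proposals, feat_dim)
        return self.cls(proposal_feats)           # (num_proposals, num_classes)

def weak_image_loss(proposal_scores, image_labels):
    # MIL-style aggregation: pool per-proposal scores into one image-level
    # prediction and compare it with the image-level (weak) labels.
    image_logits = proposal_scores.max(dim=0).values
    return F.binary_cross_entropy_with_logits(image_logits, image_labels)

model = ToyProposalScorer()
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Stage 1: weakly-supervised training, using image-level labels for all images.
for _ in range(100):
    feats = torch.randn(300, 128)                      # fake proposal features
    labels = torch.zeros(NUM_CLASSES)
    labels[torch.randint(0, NUM_CLASSES, (1,))] = 1.0  # one random weak label
    loss = weak_image_loss(model(feats), labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune on the small fully-annotated subset (about 10% of the
# images) with a standard per-proposal classification loss.
for _ in range(20):
    feats = torch.randn(64, 128)                       # fake sampled RoI features
    roi_labels = torch.randint(0, NUM_CLASSES, (64,))
    loss = F.cross_entropy(model(feats), roi_labels)
    opt.zero_grad(); loss.backward(); opt.step()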

Electronic Supplementary Material

jcst-34-6-1269-Highlights.pdf (539.4 KB)

Journal of Computer Science and Technology
Pages 1269-1278
Cite this article:
Wang X-G, Wang J-S, Tang P, et al. Weakly- and Semi-Supervised Fast Region-Based CNN for Object Detection. Journal of Computer Science and Technology, 2019, 34(6): 1269-1278. https://doi.org/10.1007/s11390-019-1975-z


Received: 23 March 2019
Revised: 26 July 2019
Published: 22 November 2019
©2019 Springer Science + Business Media, LLC & Science Press, China