[1] X. Chu, Z. Tian, Y. Wang, B. Zhang, H. Ren, X. Wei, H. Xia, and C. Shen, Twins: Revisiting the design of spatial attention in vision transformers, arXiv preprint arXiv: 2104.13840, 2021.
[5] Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, et al., Swin transformer V2: Scaling up capacity and resolution, in Proc. 2022 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 11999–12009.
[13] J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv: 1810.04805, 2018.
[16] Y. Yang, X. Yang, M. Heidari, M. A. Khan, G. Srivastava, M. Khosravi, and L. Qi, ASTREAM: Data-stream-driven scalable anomaly detection with accuracy guarantee in IIoT environment, IEEE Trans. Netw. Sci. Eng., doi: 10.1109/TNSE.2022.3157730.
[19] Z. Cai and Z. He, Trading private range counting over big IoT data, in Proc. 2019 IEEE 39th Int. Conf. Distributed Computing Systems (ICDCS), Dallas, TX, USA, 2019, pp. 144–153.
[20] L. Kong, G. Li, W. Rafique, S. Shen, Q. He, M. R. Khosravi, R. Wang, and L. Qi, Time-aware missing healthcare data prediction based on ARIMA model, IEEE/ACM Trans. Comput. Biol. Bioinform., doi: 10.1109/TCBB.2022.3205064.
[26] K. Pei, Y. Cao, J. Yang, and S. Jana, DeepXplore: Automated whitebox testing of deep learning systems, in Proc. 26th Symp. on Operating Systems Principles, Shanghai, China, 2017, pp. 1–18.
[27] L. Ma, F. Juefei-Xu, F. Zhang, J. Sun, M. Xue, B. Li, C. Chen, T. Su, L. Li, Y. Liu, et al., DeepGauge: Multi-granularity testing criteria for deep learning systems, in Proc. 2018 33rd IEEE/ACM Int. Conf. Automated Software Engineering (ASE), Montpellier, France, 2018, pp. 120–131.
[31] H. Zhang, H. Chen, C. Xiao, S. Gowal, R. Stanforth, B. Li, D. Boning, and C. J. Hsieh, Towards stable and efficient training of verifiably robust neural networks, arXiv preprint arXiv: 1906.06316, 2019.
[32] R. S. Pressman, Software Engineering: A Practitioner’s Approach. London, UK: Palgrave Macmillan, 2005.
[34] Y. Tian, K. Pei, S. Jana, and B. Ray, DeepTest: Automated testing of deep-neural-network-driven autonomous cars, in Proc. 2018 IEEE/ACM 40th Int. Conf. Software Engineering (ICSE), Gothenburg, Sweden, 2018, pp. 303–314.
[35] A. Odena and I. Goodfellow, TensorFuzz: Debugging neural networks with coverage-guided fuzzing, arXiv preprint arXiv: 1807.10875, 2018.
[36] L. Ma, F. Juefei-Xu, M. Xue, B. Li, L. Li, Y. Liu, and J. Zhao, DeepCT: Tomographic combinatorial testing for deep learning systems, in Proc. 2019 IEEE 26th Int. Conf. Software Analysis, Evolution and Reengineering (SANER), Hangzhou, China, 2019, pp. 614–618.
[37] Y. Tian, Z. Zhong, V. Ordonez, and B. Ray, Testing deep neural network based image classifiers, arXiv preprint arXiv: 1905.07831, 2019.
[38] D. Wang, Z. Wang, C. Fang, Y. Chen, and Z. Chen, DeepPath: Path-driven testing criteria for deep neural networks, in Proc. 2019 IEEE Int. Conf. Artificial Intelligence Testing (AITest), Newark, CA, USA, 2019, pp. 119–120.
[39] Z. Zhou, W. Dou, J. Liu, C. Zhang, J. Wei, and D. Ye, DeepCon: Contribution coverage testing for deep learning systems, in Proc. 2021 IEEE Int. Conf. Software Analysis, Evolution and Reengineering (SANER), Honolulu, HI, USA, 2021, pp. 189–200.
[40] X. Xie, L. Ma, F. Juefei-Xu, M. Xue, H. Chen, Y. Liu, J. Zhao, B. Li, J. Yin, and S. See, DeepHunter: A coverage-guided fuzz testing framework for deep neural networks, in Proc. 28th ACM SIGSOFT Int. Symp. on Software Testing and Analysis, Beijing, China, 2019, pp. 146–157.
[41] G. Katz, C. Barrett, D. Dill, K. Julian, and M. Kochenderfer, Reluplex: An efficient SMT solver for verifying deep neural networks, arXiv preprint arXiv: 1702.01135, 2017.
[42] X. Huang, M. Kwiatkowska, S. Wang, and M. Wu, Safety verification of deep neural networks, in Computer Aided Verification, R. Majumdar and V. Kunčak, eds. Cham, Switzerland: Springer, 2017, pp. 3–29.
[43] R. Ehlers, Formal verification of piece-wise linear feed-forward neural networks, in Automated Technology for Verification and Analysis, D. D’Souza and K. N. Kumar, eds. Cham, Switzerland: Springer, 2017, pp. 269–286.
[44] G. Singh, T. Gehr, M. Mirman, M. Püschel, and M. Vechev, Fast and effective robustness certification, in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2018, pp. 10825–10836.
[45] G. Singh, T. Gehr, M. Püschel, and M. Vechev, Boosting robustness certification of neural networks, https://openreview.net/forum?id=HJgeEh09KQ, 2018.
[46] R. Bunel, I. Turkaslan, P. H. S. Torr, P. Kohli, and M. P. Kumar, A unified view of piecewise linear neural network verification, in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2018, pp. 4795–4804.
[47] C. Müller, F. Serre, G. Singh, M. Püschel, and M. Vechev, Scaling polyhedral neural network verification on GPUs, arXiv preprint arXiv: 2007.10868, 2020.
[49] H. Zhang, T. W. Weng, P. Y. Chen, C. J. Hsieh, and L. Daniel, Efficient neural network robustness certification with general activation functions, in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2018, pp. 4944–4953.
[51] J. Gu, Y. Yang, and V. Tresp, Understanding individual decisions of CNNs via contrastive backpropagation, in Computer Vision – ACCV 2018, C. V. Jawahar, H. Li, G. Mori, and K. Schindler, eds. Cham, Switzerland: Springer, 2018, pp. 119–134.
[52] B. K. Iwana, R. Kuroki, and S. Uchida, Explaining convolutional neural networks using softmax gradient layer-wise relevance propagation, in Proc. 2019 IEEE/CVF Int. Conf. Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 2019, pp. 4176–4185.
[53] K. Xu, H. Zhang, S. Wang, Y. Wang, S. Jana, X. Lin, and C. J. Hsieh, Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers, arXiv preprint arXiv: 2011.13824, 2020.
[54] S. Wang, H. Zhang, K. Xu, X. Lin, S. Jana, C. J. Hsieh, and J. Z. Kolter, Beta-CROWN: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network robustness verification, arXiv preprint arXiv: 2103.06624, 2021.
[55] S. Gowal, K. Dvijotham, R. Stanforth, R. Bunel, C. Qin, J. Uesato, R. Arandjelovic, T. Mann, and P. Kohli, On the effectiveness of interval bound propagation for training verifiably robust models, arXiv preprint arXiv: 1810.12715, 2018.
[56] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 770–778.
[57] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, Densely connected convolutional networks, in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 2261–2269.
[58] Y. LeCun and C. Cortes, The MNIST database of handwritten digits, http://yann.lecun.com/exdb/mnist/, 1998.
[59] A. Krizhevsky, V. Nair, and G. Hinton, The CIFAR-10 dataset, http://www.cs.toronto.edu/kriz/cifar.html, 2014.
[60] J. Ren, M. Li, Z. Liu, and Q. Zhang, Interpreting and disentangling feature components of various complexity from DNNs, arXiv preprint arXiv: 2006.15920, 2020.