Open Access

Human Action Recognition Using Difference of Gaussian and Difference of Wavelet

Department of Computer Science and Engineering (AI&ML), CMR Technical Campus, Hyderabad 501401, India.
Department of Information Technology, Kakatiya Institute of Technology and Science, Warangal 506015, India.
Department of Electronics and Communication Engineering, Malla Reddy Engineering College for Women (Autonomous), Hyderabad 500100, India.
Department of Computer Science and Engineering, S. A. Engineering College (Autonomous), Thiruverkadu 600077, India.
Department of Electronics and Communication Engineering, Saveetha Institute of Medical and Technical Sciences (SIMATS), Chennai 602105, India.
Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India.
IDMS Team, STI Laboratory, Faculty of Sciences and Techniques, Moulay Ismail University of Meknès, Errachidia 25003, Morocco.

Abstract

Human Action Recognition (HAR) aims to recognize human actions from images and videos. The major challenge in HAR is the design of an action descriptor that makes the HAR system robust across different environments. This study proposes a novel action descriptor based on two independent spatial and spectral filters. The proposed descriptor uses a Difference of Gaussian (DoG) filter to extract scale-invariant features and a Difference of Wavelet (DoW) filter to extract spectral information. For each test action image, the DoG and DoW features are combined into a composite feature vector. Linear Discriminant Analysis (LDA), a widely used dimensionality reduction technique, is then applied to eliminate redundant information. Finally, a nearest neighbor method is used for classification. Extensive simulations of the proposed approach were run on the Weizmann and UCF 11 datasets. For five-fold cross validation on the Weizmann dataset, the average accuracy of DoG + DoW is 83.6635%, while the average accuracies of DoG and DoW alone are 80.2312% and 77.4215%, respectively. For five-fold cross validation on the UCF 11 action dataset, the average accuracy of DoG + DoW is 62.5231%, while the average accuracies of DoG and DoW alone are 60.3214% and 58.1247%, respectively. These results show that the accuracy on Weizmann is higher than on UCF 11, and that the combined DoG + DoW descriptor improves recognition accuracy over either filter alone.
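To make the pipeline described above concrete, the following is a minimal Python sketch of the DoG + DoW feature extraction followed by LDA and nearest-neighbor classification with five-fold cross validation, assuming NumPy, SciPy, PyWavelets, and scikit-learn. The Gaussian scales, the db2 wavelet, the per-frame treatment of the data, and the energy-difference reading of "Difference of Wavelet" are illustrative assumptions, not the authors' exact settings.

import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def dog_features(frame, sigma1=1.0, sigma2=2.0):
    # Difference of Gaussian: subtract two Gaussian-blurred copies of the frame.
    dog = gaussian_filter(frame, sigma1) - gaussian_filter(frame, sigma2)
    return dog.ravel()

def dow_features(frame, wavelet="db2"):
    # Difference of Wavelet: difference of detail-subband statistics at two
    # decomposition levels (one plausible reading of DoW; the paper defines
    # the exact operator).
    coeffs = pywt.wavedec2(frame, wavelet, level=2)
    fine = np.concatenate([c.ravel() for c in coeffs[2]])    # level-1 details
    coarse = np.concatenate([c.ravel() for c in coeffs[1]])  # level-2 details
    return np.array([fine.var() - coarse.var(),
                     np.abs(fine).mean() - np.abs(coarse).mean()])

def composite_vector(frame):
    # Concatenate the spatial (DoG) and spectral (DoW) descriptors.
    return np.concatenate([dog_features(frame), dow_features(frame)])

# Placeholder data: 60 random 64x64 "action frames" in six hypothetical classes.
frames = np.random.rand(60, 64, 64)
y = np.repeat(np.arange(6), 10)
X = np.stack([composite_vector(f) for f in frames])

# LDA reduces the composite vectors, then a 1-nearest-neighbor classifier is
# evaluated with five-fold cross validation, mirroring the evaluation protocol.
model = make_pipeline(LinearDiscriminantAnalysis(), KNeighborsClassifier(n_neighbors=1))
scores = cross_val_score(model, X, y, cv=5)
print("mean accuracy:", scores.mean())

With real Weizmann or UCF 11 frames in place of the random placeholders, the same pipeline structure applies; only the frame loading and any video-level pooling of per-frame vectors would need to follow the paper's preprocessing.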

Big Data Mining and Analytics
Pages 336-346
Cite this article:
Reddy GV, Deepika K, Malliga L, et al. Human Action Recognition Using Difference of Gaussian and Difference of Wavelet. Big Data Mining and Analytics, 2023, 6(3): 336-346. https://doi.org/10.26599/BDMA.2022.9020040

Received: 22 September 2022
Revised: 12 October 2022
Accepted: 17 October 2022
Published: 07 April 2023
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
