Open Access

Evaluation Method of Motor Coordination Ability in Children Based on Machine Vision

Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Department of Automation, Tsinghua University, Beijing 100084, China, and also with the Pharmacovigilance Research Center for information technology and Data Science, Cross-strait Tsinghua Research Institute, Xiamen 361000, China
Institute of Systems Engineering, Dalian University of Technology, Dalian 116024, China
Physical Education Department, Hebei Sports University, Shijiazhuang 050049, China
Department of Children’s Health Care Center, Beijing Children’s Hospital, Capital Medical University, National Center for Children’s Health, Beijing 100045, China

Abstract

Motor coordination is crucial for preschoolers’ development and a key indicator in assessing childhood development. Current diagnostic methods rely largely on subjective manual assessment. This paper presents a machine-vision-based approach that improves the objectivity and adaptability of such assessments. The proposed method extracts human skeleton keypoints with a lightweight pose estimation network, transforming video assessment into the evaluation of keypoint sequences. Static and dynamic actions are handled differently: regularization addresses spatial alignment, and Dynamic Time Warping (DTW) resolves temporal discrepancies. Actions are then scored with a penalty-adjusted single-frame pose similarity measure. The lightweight pose estimation model reduces parameters by 85%, requires only 6.6% of the original computational load, and keeps the average missed-detection rate below 1%. The average error for static actions is 0.071 with a correlation coefficient of 0.766; for dynamic actions, the average error is 0.145 with a correlation coefficient of 0.653. These results confirm the effectiveness of the proposed method, which also incorporates customized visual components, such as motion waveform graphs, to improve the accuracy of pediatric healthcare diagnoses.
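The pipeline described above — spatial regularization of keypoints, DTW for temporal alignment, and per-frame pose similarity — can be sketched roughly as follows. The normalization scheme (centering on the mean joint and scaling to unit spread) and the cosine-based similarity are illustrative assumptions, not the paper's exact penalty-adjusted formulation.

```python
import numpy as np

def normalize_pose(pose):
    """Spatially regularize one frame of keypoints, shape (K, 2):
    center on the mean joint position and scale to unit spread.
    (Illustrative choice; the paper's exact scheme may differ.)"""
    centered = pose - pose.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def frame_similarity(pose_a, pose_b):
    """Cosine similarity between two normalized poses, mapped to [0, 1]."""
    a = normalize_pose(pose_a).ravel()
    b = normalize_pose(pose_b).ravel()
    return 0.5 * (1.0 + float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dtw_align(seq_a, seq_b):
    """Classic DTW over per-frame distances (1 - similarity); returns
    the accumulated alignment cost between two keypoint sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = 1.0 - frame_similarity(seq_a[i - 1], seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

Because poses are normalized before comparison, the similarity is invariant to translation and scale, and DTW absorbs differences in execution speed between a child's recorded action and the template sequence.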

Tsinghua Science and Technology
Pages 633-649
Cite this article:
Lei Y, Shu D, Yu M, et al. Evaluation Method of Motor Coordination Ability in Children Based on Machine Vision. Tsinghua Science and Technology, 2025, 30(2): 633-649. https://doi.org/10.26599/TST.2024.9010069


Received: 29 November 2023
Revised: 21 March 2024
Accepted: 02 April 2024
Published: 09 December 2024
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
