Artificial Intelligence (AI) is driving rapid progress in emotion quantification, and advances in this technology are opening new possibilities for understanding and parsing human emotions. Modern emotion recognition systems combine multi-modal data sources, including facial expressions, speech, text, gestures, and physiological signals, with machine learning and deep learning methods, and can recognize emotional states accurately across a wide range of complex environments. This paper provides a comprehensive overview of research advances in multi-modal emotion recognition techniques, which serves as a foundation for an in-depth discussion of how AI intersects with emotion quantification, a central concern in psychology. It also examines the privacy and ethical issues raised by the collection and analysis of emotion data, and the implications of these challenges for future research directions. Finally, the paper takes a forward-looking view of the development trajectory of AI in emotion quantification and highlights the potential value of emotion quantification research in several areas, including emotion quantification platforms and tools, computational psychology, and computational psychiatry.
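To make the multi-modal fusion idea concrete, the sketch below shows a minimal late-fusion emotion classifier in PyTorch. It is an illustrative assumption, not the architecture of any specific system surveyed here: the modality names, feature dimensions, and number of emotion classes are placeholders, and each modality is assumed to arrive as a pre-extracted feature vector that is projected into a shared space, concatenated, and mapped to emotion logits.

```python
# Minimal late-fusion sketch (illustrative only): each modality is assumed to be
# pre-encoded into a fixed-length feature vector by an upstream model.
import torch
import torch.nn as nn


class LateFusionEmotionClassifier(nn.Module):
    def __init__(self, modality_dims: dict, hidden_dim: int = 128, num_emotions: int = 6):
        super().__init__()
        # One projection head per modality (e.g., "face", "speech", "text", "physio").
        self.proj = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, hidden_dim), nn.ReLU())
            for name, dim in modality_dims.items()
        })
        # Fusion by concatenation of the projected modality features, then classification.
        self.classifier = nn.Linear(hidden_dim * len(modality_dims), num_emotions)

    def forward(self, features: dict) -> torch.Tensor:
        # Iterate over self.proj to keep a fixed modality order across calls.
        fused = torch.cat([self.proj[name](features[name]) for name in self.proj], dim=-1)
        return self.classifier(fused)  # unnormalized emotion logits


# Usage with random stand-in features for a batch of 4 samples.
dims = {"face": 512, "speech": 128, "text": 768, "physio": 32}
model = LateFusionEmotionClassifier(dims)
batch = {name: torch.randn(4, dim) for name, dim in dims.items()}
logits = model(batch)            # shape: (4, 6)
probs = logits.softmax(dim=-1)   # per-class emotion probabilities
```

Simple concatenation is only one fusion strategy; much of the surveyed literature instead uses attention-based or cross-modal fusion, which this sketch does not attempt to reproduce.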