Article | Open Access

Artificial Intelligence in Emotion Quantification: A Prospective Overview

School of Computer Science and Technology, East China Normal University, Shanghai 200062, China

Abstract

Artificial Intelligence (AI) is driving rapid advances in emotion quantification, opening new possibilities for understanding and parsing human emotions. Modern emotion recognition systems combine multi-modal data sources, including facial expressions, speech, text, gestures, and physiological signals, with machine learning and deep learning methods, and achieve accurate recognition of emotional states across a wide range of complex environments. This paper provides a comprehensive overview of research advances in multi-modal emotion recognition techniques, which serves as a foundation for an in-depth discussion connecting AI with emotion quantification, a central concern in psychology. It also examines the privacy and ethical issues that arise in the processing and analysis of emotion data, and the implications of these challenges for future research. Finally, the paper takes a forward-looking perspective on the development trajectory of AI in emotion quantification and highlights the potential value of emotion quantification research in several areas, including emotion quantification platforms and tools, computational psychology, and computational psychiatry.
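
The abstract describes systems that fuse multi-modal signals with deep learning. As a rough illustration of one common design, the following is a minimal late-fusion sketch in PyTorch. It is not the paper's method: the module names, feature dimensions, seven-class emotion set, and the assumption that per-modality features come from pretrained extractors are all illustrative.

```python
# Illustrative late-fusion sketch of a multimodal emotion classifier.
# All names, dimensions, and the fusion strategy are assumptions for
# exposition; the paper surveys many such designs rather than one model.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Projects one modality's feature vector into a shared embedding space."""

    def __init__(self, in_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class LateFusionEmotionClassifier(nn.Module):
    """Encodes each modality separately, concatenates the embeddings,
    and classifies the result into discrete emotion categories."""

    def __init__(self, modality_dims: dict, num_emotions: int = 7):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: ModalityEncoder(dim) for name, dim in modality_dims.items()}
        )
        fused_dim = 128 * len(modality_dims)  # each encoder outputs 128 dims
        self.classifier = nn.Linear(fused_dim, num_emotions)

    def forward(self, inputs: dict) -> torch.Tensor:
        embeddings = [self.encoders[name](x) for name, x in inputs.items()]
        return self.classifier(torch.cat(embeddings, dim=-1))


# Hypothetical per-modality feature sizes (e.g., from pretrained extractors).
model = LateFusionEmotionClassifier(
    {"face": 512, "speech": 256, "text": 768, "physiology": 64}
)
batch = {
    "face": torch.randn(8, 512),
    "speech": torch.randn(8, 256),
    "text": torch.randn(8, 768),
    "physiology": torch.randn(8, 64),
}
logits = model(batch)  # shape: (8, 7) emotion-class scores
```

Late fusion of this kind keeps each modality's encoder independent, so modalities can be added or dropped without redesigning the others; attention-based or autoencoder-based fusion is a common alternative in the literature this paper surveys.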


CAAI Artificial Intelligence Research
Article number: 9150040
Cite this article:
Liu F. Artificial Intelligence in Emotion Quantification: A Prospective Overview. CAAI Artificial Intelligence Research, 2024, 3: 9150040. https://doi.org/10.26599/AIR.2024.9150040

Received: 11 May 2024
Revised: 01 July 2024
Accepted: 23 July 2024
Published: 21 August 2024
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
