[1] R. W. Picard, Affective Computing, Cambridge, MA, USA: MIT Press, 2000.
[6] S. George, From sex and therapy bots to virtual assistants and tutors: How emotional should artificially intelligent agents be? in Proc. 1st Int. Conf. Conversational User Interfaces, Dublin, Ireland, 2019, pp. 1–3.
[9] W. Cho, https://globaljournals.org/GJMR_Volume22/E-Journal_GJMR_(A)_Vol_22_Issue_3.pdf, 2022.
[14] E. M. Polo, M. Mollura, M. Lenatti, M. Zanet, A. Paglialonga, and R. Barbieri, Emotion recognition from multimodal physiological measurements based on an interpretable feature selection method, in Proc. 43rd Annu. Int. Conf. IEEE Engineering in Medicine & Biology Society (EMBC), virtual, 2021, pp. 989–992.
[15] H. Prossinger, T. Hladký, J. Binter, S. Boschetti, and D. Říha, Visual analysis of emotions using AI image-processing software: Possible male/female differences between the emotion pairs “neutral”–“fear” and “pleasure”–“pain”, in Proc. 14th PErvasive Technologies Related to Assistive Environments Conf. (PETRA '21), Corfu, Greece, 2021, pp. 342–346.
[17] P. Ekman and E. L. Rosenberg, What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), Oxford, UK: Oxford University Press, 2005.
[18] J. Zhang, F. Liu, and A. Zhou, Off-TANet: A lightweight neural micro-expression recognizer with optical flow features and integrated attention mechanism, in Proc. 18th Pacific Rim Int. Conf. Artificial Intelligence (PRICAI 2021), Hanoi, Vietnam, 2021, pp. 266–279.
[19] H. Wang, B. Li, S. Wu, S. Shen, F. Liu, S. Ding, and A. Zhou, Rethinking the learning paradigm for dynamic facial expression recognition, in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Vancouver, Canada, 2023, pp. 17958–17968.
[20] F. Ma, B. Sun, and S. Li, LOGO-Former: Local-global spatio-temporal transformer for dynamic facial expression recognition, in Proc. 2023 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1–5.
[21] L. Sun, Z. Lian, B. Liu, and J. Tao, MAE-DFER: Efficient masked autoencoder for self-supervised dynamic facial expression recognition, in Proc. 31st ACM Int. Conf. Multimedia, Ottawa, Canada, 2023, pp. 6110–6121.
[24] K. Chen, Z. Zhang, W. Zeng, R. Zhang, F. Zhu, and R. Zhao, Shikra: Unleashing multimodal LLM’s referential dialogue magic, arXiv preprint arXiv: 2306.15195, 2023.
[25] K. Wang, X. Peng, J. Yang, S. Lu, and Y. Qiao, Suppressing uncertainties for large-scale facial expression recognition, in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 6896–6905.
[29] M. Gopika Sri, G. Karthiga, K. Jayakarthika, N. Ilakkiya, V. Lavanyagayathri, and D. Uma Mageswari S, A fuzzy logic and NLP approach to emotion-driven response generation for voice interaction, in Proc. Int. Conf. Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE), Chennai, India, 2023, pp. 1–5.
[30] K. Zhou, B. Sisman, C. Busso, and H. Li, Mixed emotion modeling for emotional voice conversion, arXiv preprint arXiv: 2210.13756, 2022.
[32] E. Conti, D. Salvi, C. Borrelli, B. Hosler, P. Bestagini, F. Antonacci, A. Sarti, M. C. Stamm, and S. Tubaro, Deepfake speech detection through emotion recognition: A semantic approach, in Proc. 2022 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Singapore, 2022, pp. 8962–8966.
[33] A. Adigwe, N. Tits, K. El Haddad, S. Ostadabbas, and T. Dutoit, The emotional voices database: Towards controlling the emotion dimension in voice generation systems, arXiv preprint arXiv: 1806.09514, 2018.
[35] L. Ahmed, I. K. Polok, M. A. Islam, M. Akhtaruzzaman, M. S. H. Mukta, and M. M. Rahman, Context-based emotion recognition from Bengali text using transformers, in Proc. 5th Int. Conf. Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 2023, pp. 1478–1484.
[36] M. A. Mahima, N. C. Patel, S. Ravichandran, N. Aishwarya, and S. Maradithaya, A text-based hybrid approach for multiple emotion detection using contextual and semantic analysis, in Proc. Int. Conf. Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), Chennai, India, 2021, pp. 1–6.
[37] K. Hemakirthiga and J. Arunadevi, Improving emotion detection in text: A comparative analysis of machine learning algorithms and genetic algorithm-optimized logistic regression, in Proc. Int. Conf. Data Science, Agents & Artificial Intelligence (ICDSAAI), Chennai, India, 2023, pp. 1–6.
[40] W. Ahmed, S. Riaz, K. Iftikhar, and S. Konur, Speech emotion recognition using deep learning, in Proc. 43rd SGAI Int. Conf. Artificial Intelligence, Cambridge, UK, 2023, pp. 191–197.
[44] X. Zhang, The application of natural language processing technology based on deep learning in Japanese sentiment analysis, in Proc. Int. Conf. Ambient Intelligence, Knowledge Informatics and Industrial Electronics (AIKIIE), Ballari, India, 2023, pp. 1–5.
[45] C. M. A. Ilyas, R. Nunes, K. Nasrollahi, M. Rehm, and T. B. Moeslund, Deep emotion recognition through upper body movements and facial expression, in Proc. 16th Int. Joint Conf. Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021), virtual, 2021, pp. 669–679.
[46] Z. Fu, F. Liu, J. Zhang, H. Wang, C. Yang, Q. Xu, J. Qi, X. Fu, and A. Zhou, SAGN: Semantic adaptive graph network for skeleton-based human action recognition, in Proc. 2021 Int. Conf. Multimedia Retrieval, Taipei, China, 2021, pp. 110–117.
[48] S. Sun, X. Xiong, and Y. Zheng, Two stage multi-modal modeling for video interaction analysis in deep video understanding challenge, in Proc. 30th ACM Int. Conf. Multimedia, Lisboa, Portugal, 2022, pp. 7040–7044.
[49] Z. Lian, H. Sun, L. Sun, H. Gu, Z. Wen, S. Zhang, S. Chen, M. Xu, K. Xu, K. Chen, et al., Explainable multimodal emotion reasoning: A promising way to open-set emotion recognition, arXiv preprint arXiv: 2306.15401, 2023.
[52] D. Yang, S. Huang, H. Kuang, Y. Du, and L. Zhang, Disentangled representation learning for multimodal emotion recognition, in Proc. 30th ACM Int. Conf. Multimedia, Lisboa, Portugal, 2022, pp. 1642–1651.
[53] D. Sun, Y. He, and J. Han, Using auxiliary tasks in multimodal fusion of Wav2vec 2.0 and BERT for multimodal emotion recognition, in Proc. 2023 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1–5.
[59] W. Liu, H. Jiang, and Y. Lu, Research on multimodal emotion recognition platform construction, in Proc. Information Science and Cloud Computing (ISCC 2017), Guangzhou, China, 2018, pp. 1–9.
[60] Y. Lee, N. Lee, V. Pham, J. Lee, and T. M. Chung, Privacy preserving stress detection system using physiological data from wearable device, in Proc. 6th Int. Conf. Intelligent Human Systems Integration (IHSI 2023): Integrating People and Intelligent Systems, Venice, Italy, 2023, pp. 340–347.
[63] J. Binter, S. Boschetti, T. Hladký, H. Prossinger, T. J. Wells, J. Jílková, and D. Říha, Quantifying the rating performance of ambiguous and unambiguous facial expression perceptions under conditions of stress by using wearable sensors, in Proc. 24th Int. Conf. Human-Computer Interaction (HCII 2022), virtual, 2022, pp. 519–529.
[66] U. Sarkar, S. Nag, C. Bhattacharya, S. Sanyal, A. Banerjee, R. Sengupta, and D. Ghosh, Language independent emotion quantification using nonlinear modeling of speech, arXiv preprint arXiv: 2102.06003, 2021.
[69] M. Setzu, R. Guidotti, A. Monreale, F. Turini, D. Pedreschi, and F. Giannotti, GLocalX: From local to global explanations of black box AI models, arXiv preprint arXiv: 2101.07685, 2021.
[70] P. Verma, S. R. Marpally, and S. Srivastava, Discovering user-interpretable capabilities of black-box planning agents, in Proc. 19th Int. Conf. Principles of Knowledge Representation and Reasoning, Haifa, Israel, 2022, pp. 362–372.
[81] M. Ennis, Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology, arXiv preprint arXiv: 2305.15385, 2023.
[82] C. Ruiz, K. Ito, S. Wakamiya, and E. Aramaki, Loneliness in a connected world: Analyzing online activity and expressions on real life relationships of lonely users, in Proc. AAAI 2017 Spring Symp., Stanford, CA, USA, 2017, pp. 726–733.