Open Access

Context-Aware Social Media User Sentiment Analysis

School of Computer Science and Engineering, Southeast University, Nanjing 211189, China.
Key Laboratory of Computer Network and Information of Ministry of Education of China, Nanjing 211189, China.
Microsoft Research Asia, Suzhou 215000, China.
School of Cyber Science and Engineering, Southeast University, Nanjing 211189, China.
Department of Computer Science and Creative Technologies, University of the West of England, Bristol, BS16 1QY, UK.

Abstract

User-generated social media messages usually contain considerable multimodal content. Such messages are typically short and lack explicit sentiment words; however, their sentiment can often be understood by analyzing the context, which is essential for improving sentiment analysis performance. Unfortunately, the majority of existing studies consider the impact of contextual information based on only a single data model. In this study, we propose a novel model for context-aware user sentiment analysis that accounts for both the semantic correlation among different modalities and the effects of tweet context information. Experimental results obtained on a Twitter dataset show that our approach outperforms existing methods in analyzing user sentiment.
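To make the abstract's two ingredients concrete, the sketch below illustrates the general idea in a minimal way: sentiment evidence from multiple modalities (text and image) is fused with evidence from a tweet's context (here, temporally adjacent tweets by the same user). This is not the paper's actual model; the lexicon, the fixed fusion weights, and all function and field names (text_score, image_score, context, fused_sentiment) are hypothetical placeholders chosen only for illustration.

```python
# Minimal illustrative sketch (NOT the paper's model): weighted late fusion of
# a lexicon-based text score, an image sentiment score, and the mean sentiment
# of contextual tweets. All names, weights, and the lexicon are hypothetical.

from dataclasses import dataclass, field
from typing import List

# Toy sentiment lexicon; a real system would use a learned text model.
LEXICON = {"love": 1.0, "great": 0.8, "happy": 0.7,
           "sad": -0.7, "hate": -1.0, "awful": -0.9}


@dataclass
class Tweet:
    text: str
    image_score: float = 0.0  # visual sentiment in [-1, 1], e.g., from an image classifier
    context: List["Tweet"] = field(default_factory=list)  # surrounding tweets by the same user


def text_score(tweet: Tweet) -> float:
    """Average lexicon polarity of the words in the tweet (0 if no hits)."""
    hits = [LEXICON[w] for w in tweet.text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0


def context_score(tweet: Tweet) -> float:
    """Mean text polarity of the contextual tweets (0 if there is no context)."""
    if not tweet.context:
        return 0.0
    return sum(text_score(t) for t in tweet.context) / len(tweet.context)


def fused_sentiment(tweet: Tweet,
                    w_text: float = 0.5,
                    w_image: float = 0.3,
                    w_context: float = 0.2) -> str:
    """Weighted late fusion of text, image, and context evidence."""
    score = (w_text * text_score(tweet)
             + w_image * tweet.image_score
             + w_context * context_score(tweet))
    return "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"


if __name__ == "__main__":
    history = [Tweet("what an awful day"), Tweet("feeling sad tonight")]
    t = Tweet("no caption really", image_score=-0.4, context=history)
    # Alone, the short caption is neutral; the image and the context tip it negative.
    print(fused_sentiment(t))  # -> negative
```

The paper's approach learns how modality and context signals interact rather than applying fixed weights; the sketch only shows where those two kinds of evidence enter the sentiment decision.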

Tsinghua Science and Technology
Pages 528-541
Cite this article:
Liu B, Tang S, Sun X, et al. Context-Aware Social Media User Sentiment Analysis. Tsinghua Science and Technology, 2020, 25(4): 528-541. https://doi.org/10.26599/TST.2019.9010021

Received: 20 April 2019
Revised: 08 May 2019
Accepted: 14 May 2019
Published: 13 January 2020
© The author(s) 2020

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
