Research Article | Open Access

InSocialNet: Interactive visual analytics for role-event videos

College of Intelligence and Computing, and School of New Media and Communication, Tianjin University, Tianjin, 300354, China.
School of Computer Science & Informatics, Cardiff University, CF24 3AA, UK.

Abstract

Role-event videos are rich in information but challenging to understand at the story level. The social roles and behavior patterns of characters largely depend on the interactions among characters and on the background events. Understanding them requires analysis of video content over long durations, which is beyond the ability of current algorithms designed for analyzing short-time dynamics. In this paper, we propose InSocialNet, an interactive video analytics tool for analyzing the contents of role-event videos. It automatically and dynamically constructs social networks from role-event videos, making use of face and expression recognition, and provides a visual interface for interactive analysis of video contents. Together with social network analysis at the back end, InSocialNet enables users to investigate characters, their relationships, social roles, factions, and events in the input video. We conduct case studies to demonstrate the effectiveness of InSocialNet in helping users harvest rich information from role-event videos. We believe the current prototype implementation can be extended to applications beyond movie analysis, e.g., social psychology experiments that help understand crowd social behaviors.
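The construction of a social network from recognized characters can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes face recognition has already mapped each frame to a set of character IDs, and it simply accumulates a weighted co-occurrence graph from those appearances (the function names and the weighted-degree centrality measure are illustrative choices).

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence_network(frames):
    """Build a weighted social network from per-frame character appearances.

    frames: iterable of sets/lists of character IDs detected in each frame
            (assumed to come from a face-recognition stage).
    Returns a Counter mapping sorted character pairs to co-occurrence counts.
    """
    edges = Counter()
    for chars in frames:
        # Every pair of characters visible in the same frame strengthens an edge.
        for a, b in combinations(sorted(set(chars)), 2):
            edges[(a, b)] += 1
    return edges

def weighted_degree(edges):
    """Sum of incident edge weights per character (a simple centrality proxy)."""
    deg = Counter()
    for (a, b), w in edges.items():
        deg[a] += w
        deg[b] += w
    return deg

# Toy example: four frames with recognized characters A, B, C.
frames = [{"A", "B"}, {"A", "B", "C"}, {"B", "C"}, {"A"}]
net = build_cooccurrence_network(frames)
centrality = weighted_degree(net)
# net[("A", "B")] == 2; centrality["B"] == 4, making B the most central character
```

Community-detection or faction analysis, as described in the paper, would then operate on such a weighted graph.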

Computational Visual Media
Pages 375-390
Cite this article:
Pan Y, Niu Z, Wu J, et al. InSocialNet: Interactive visual analytics for role-event videos. Computational Visual Media, 2019, 5(4): 375-390. https://doi.org/10.1007/s41095-019-0157-9


Revised: 08 December 2019
Accepted: 24 December 2019
Published: 17 January 2020
© The author(s) 2019

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
