Article | Open Access

Feature-Centric Video Transmission and Analytics in Large-Scale Internet of Video Things

Hongan Wei1, Yuxiang Liu1, Kejian Hu1, Liqun Lin1,2 (corresponding author), Youjia Chen1, Tiesong Zhao1, Wanjian Feng3
1 Fujian Key Lab for Intelligent Processing and Wireless Transmission of Media Information, College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, China
2 Fujian Science & Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou 350108, China
3 Yealink Inc., Xiamen 361009, China

Abstract

The interconnection of large-scale visual sensors, known as the Internet of Video Things (IoVT), brings a qualitative leap to the exchange of urban information. However, communication delay and resource allocation pose challenges to the development of IoVT. In this paper, we propose a novel city-surveillance IoVT architecture to improve performance. This paradigm consists of front-end target-region capture, edge computing, and cloud-end feature matching; it flexibly adapts the allocation ratio between channel and computing resources, avoiding the communication-link congestion caused by unnecessary video uploading. Simulation results show that the proposed scheme is feasible and can realize efficient data transmission and analysis in an IoVT-based smart city.
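The front-edge-cloud division of labor described in the abstract can be illustrated with a toy bandwidth-allocation policy: when the uplink cannot carry a camera's raw target regions, the edge uploads compact features for cloud-end matching instead. This is a minimal sketch, not the paper's actual scheme; all payload sizes, camera names, and the greedy policy itself are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical per-item payload sizes in kilobits (illustrative only).
RAW_REGION_KBITS = 800.0   # compressed crop of one target region
FEATURE_KBITS = 20.0       # compact feature vector extracted at the edge

@dataclass
class Camera:
    name: str
    regions_per_frame: int  # target regions detected by the front end

def plan_uploads(cameras, channel_kbits):
    """Greedy sketch: upload raw regions while channel capacity allows;
    otherwise fall back to features, which the cloud end can still match."""
    plan = {}
    remaining = channel_kbits
    for cam in cameras:
        raw_cost = cam.regions_per_frame * RAW_REGION_KBITS
        feat_cost = cam.regions_per_frame * FEATURE_KBITS
        if raw_cost <= remaining:
            plan[cam.name] = "raw"
            remaining -= raw_cost
        else:
            plan[cam.name] = "features"
            remaining -= feat_cost
    return plan, channel_kbits - remaining

cams = [Camera("gate", 2), Camera("lobby", 5), Camera("street", 3)]
plan, used = plan_uploads(cams, channel_kbits=3000.0)
# Only "gate" fits as raw video; the others degrade to feature uploads,
# so the congested link is never asked to carry unnecessary video.
```

In a real IoVT deployment this decision would be driven by measured channel state and edge/cloud computing load rather than fixed constants, but the sketch captures the key idea: feature-centric transmission trades a small accuracy cost for a large reduction in uplink traffic.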

References

[1] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, Internet of Things: A survey on enabling technologies, protocols, and applications, IEEE Commun. Surv. Tutorials, vol. 17, no. 4, pp. 2347–2376, 2015.
[2] Z. Zhao, Y. Lai, Y. Wang, W. Jia, and H. He, A few-shot learning based approach to IoT traffic classification, IEEE Commun. Lett., vol. 26, no. 3, pp. 537–541, 2022.
[3] L. Duan, Y. Lou, S. Wang, W. Gao, and Y. Rui, AI-oriented large-scale video management for smart city: Technologies, standards, and beyond, IEEE MultiMedia, vol. 26, no. 2, pp. 8–20, 2019.
[4] Y. Lou, L. Y. Duan, Y. Luo, Z. Chen, T. Liu, S. Wang, and W. Gao, Towards efficient front-end visual sensing for digital retina: A model-centric paradigm, IEEE Trans. Multimedia, vol. 22, no. 11, pp. 3002–3013, 2020.
[5] C. W. Chen, Internet of video things: Next-generation IoT with visual sensors, IEEE Internet Things J., vol. 7, no. 8, pp. 6676–6685, 2020.
[6] Y. Chen, T. Zhao, P. Cheng, M. Ding, and C. W. Chen, Joint front-edge-cloud IoVT analytics: Resource-effective design and scheduling, IEEE Internet Things J., vol. 9, no. 23, pp. 23941–23953, 2022.
[7] O. Kochan, M. Beshley, H. Beshley, Y. Shkoropad, I. Ivanochko, and N. Seliuchenko, SDN-based Internet of video things platform enabling real-time edge/cloud video analytics, in Proc. 17th Int. Conf. Experience of Designing and Application of CAD Systems (CADSM), Jaroslaw, Poland, 2023, pp. 1–5.
[8] X. Liu, Research on intelligent visual image feature region acquisition algorithm in Internet of Things framework, Comput. Commun., vol. 151, pp. 299–305, 2020.
[9] R. Kilani, A. Zouinkhi, E. Bajic, and M. N. Abdelkrim, Socialization of smart communicative objects in industrial Internet of Things, IFAC-PapersOnLine, vol. 55, no. 10, pp. 1924–1929, 2022.
[10] C. Zhuansun, K. Yan, G. Zhang, Z. Xiong, and C. Huang, Hypergraph-based resource allocation for ultra-dense wireless network in industrial IoT, IEEE Commun. Lett., vol. 26, no. 9, pp. 2106–2110, 2022.
[11] W. Ji, L. Duan, X. Huang, and Y. Chai, Astute video transmission for geographically dispersed devices in visual IoT systems, IEEE Trans. Mob. Comput., vol. 21, no. 2, pp. 448–464, 2022.
[12] W. J. Thompson, Poisson distributions, Comput. Sci. Eng., vol. 3, no. 3, pp. 78–82, 2001.
[13] T. H. Wu, T. W. Wang, and Y. Q. Liu, Real-time vehicle and distance detection based on improved YOLO v5 network, in Proc. 3rd World Symp. Artificial Intelligence (WSAI), Guangzhou, China, 2021, pp. 24–28.
[14] C. E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J., vol. 27, no. 3, pp. 379–423, 1948.
[15] Q. Ye, W. Zhuang, X. Li, and J. Rao, End-to-end delay modeling for embedded VNF chains in 5G core networks, IEEE Internet Things J., vol. 6, no. 1, pp. 692–704, 2019.
[16] P. Dendorfer, H. Rezatofighi, A. Milan, J. Shi, D. Cremers, I. Reid, S. Roth, K. Schindler, and L. Leal-Taixé, MOT20: A benchmark for multi object tracking in crowded scenes, arXiv preprint arXiv:2003.09003, 2020.
[17] Z. Li, S. Lu, L. Lan, and Q. Liu, Crowd counting in complex scenes based on an attention aware CNN network, J. Vis. Commun. Image Represent., vol. 87, p. 103591, 2022.
[18] J. Jin, J. Ma, L. Liu, L. Lu, G. Wu, D. Huang, and N. Qin, Design of UAV video and control signal real-time transmission system based on 5G network, in Proc. IEEE 16th Conf. Industrial Electronics and Applications (ICIEA), Chengdu, China, 2021, pp. 533–537.
[19] J. Ren, G. Yu, Y. He, and G. Y. Li, Collaborative cloud and edge computing for latency minimization, IEEE Trans. Veh. Technol., vol. 68, no. 5, pp. 5031–5044, 2019.
[20] W. Gao, Y. Tian, and J. Wang, Digital retina: Revolutionizing camera systems for the smart city (in Chinese), Sci. Sin. Informationis, vol. 48, no. 8, pp. 1076–1082, 2018.

CAAI Artificial Intelligence Research
Article number: 9150028
Cite this article:
Wei H, Liu Y, Hu K, et al. Feature-Centric Video Transmission and Analytics in Large-Scale Internet of Video Things. CAAI Artificial Intelligence Research, 2024, 3: 9150028. https://doi.org/10.26599/AIR.2024.9150028


Received: 10 August 2023
Revised: 23 November 2023
Accepted: 19 January 2024
Published: 22 April 2024
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
