Open Access | Just Accepted

Perceptually-Driven Video Super Resolution for Mobile Live Streaming: An Adaptive Cloud-Assisted Approach

Chenge Jia1, Rongqing Liu1, Zhiqiang Li1, Jie Zheng2, Jie Ren1 (✉)

1 School of Computer Science, Shaanxi Normal University, Xi’an 710119, China

2 School of Information Science and Technology, Northwest University, Xi’an 710127, China

Chenge Jia and Rongqing Liu contributed equally to this work.


Abstract

The increasing demand for high-definition live video streaming on mobile devices is often hindered by unstable network conditions and limited computational capability. To address these issues, we introduce MOBLIVE, an adaptive video super-resolution (VSR) based mobile live streaming method. The core idea of MOBLIVE is to selectively offload the regions of video frames that most strongly influence perceived quality to a server-side VSR model for enhancement. To further improve video quality, we deploy a predictive model that chooses the optimal VSR model for each selected region. Additionally, we employ an adaptive graphics processing unit (GPU) scheduling strategy that optimizes the allocation of multiple VSR tasks across multiple GPUs. Experimental results show that our approach outperforms the state-of-the-art method in video multimethod assessment fusion (VMAF) score and reduces latency by an average of 73.7% in typical network environments.

Tsinghua Science and Technology
Cite this article:
Jia C, Liu R, Li Z, et al. Perceptually-Driven Video Super Resolution for Mobile Live Streaming: An Adaptive Cloud-Assisted Approach. Tsinghua Science and Technology, 2025, https://doi.org/10.26599/TST.2024.9010132