Mobile-edge computing offloads the computation-intensive and delay-sensitive applications of mobile devices to network edges. Task offloading incurs extra communication latency and energy cost, so extensive efforts have focused on the design of offloading schemes, and many system-utility metrics have been defined to achieve a satisfactory quality of experience. However, most existing works overlook the balance between throughput and fairness. This study investigates the problem of finding an optimal offloading scheme whose objective is to maximize a system utility that balances throughput against fairness. Based on the Karush-Kuhn-Tucker conditions, the expected time complexity of deriving the optimal scheme is analyzed, and a gradient-based approach for utility-aware task offloading is presented. Furthermore, we provide an increment-based greedy approximation algorithm with …
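As a rough illustration of the increment-based greedy idea described above, the following Python sketch selects which tasks to offload by repeatedly picking the task with the largest marginal utility gain under an edge capacity budget. The logarithmic (proportional-fairness) utility, the single capacity constraint, and all variable names are illustrative assumptions, not the paper's exact model.

```python
# Hypothetical sketch: increment-based greedy selection of tasks to offload.
# The log-utility (throughput/fairness trade-off) and the single capacity
# constraint are assumptions made for illustration only.
import math

def greedy_offload(local_rate, edge_rate, cost, capacity):
    """Choose a subset of tasks to offload so that the sum of log-throughputs
    is maximized under an edge resource budget.

    local_rate[i]: throughput of task i if executed locally
    edge_rate[i]:  throughput of task i if offloaded to the edge
    cost[i]:       edge resource consumed when task i is offloaded
    capacity:      total edge resource budget
    """
    n = len(local_rate)
    offload = [False] * n
    used = 0.0
    while True:
        best_i, best_gain = None, 0.0
        for i in range(n):
            if offload[i] or used + cost[i] > capacity:
                continue
            # marginal utility increment of offloading task i
            gain = math.log(edge_rate[i]) - math.log(local_rate[i])
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:  # no feasible improving increment remains
            break
        offload[best_i] = True
        used += cost[best_i]
    return offload

# Example usage with made-up numbers
print(greedy_offload([1.0, 2.0, 0.5], [4.0, 3.0, 2.0], [1, 1, 1], 2))
```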
To achieve a desirable quality of service, content providers aim to serve content requests from large network caches, and content caching is regarded as a fundamental module of the network architecture. However, studies on the optimization of content caching remain limited. Most existing works focus on designing content-value metrics, with cached content replaced by new content according to the chosen metric; as a result, the performance of service provision with multiple levels degrades. This paper investigates the problem of finding an optimal timer for each content item. When a content miss occurs, the caching policy uses these timers to decide whether to cache the requested content and which existing content to replace. Aiming to maximize the aggregate utility under a capacity constraint, the problem is formulated as an integer optimization problem. A linear-programming-based approximation algorithm is proposed, and its approximation ratio is proved. Furthermore, the problem of content caching with relaxed constraints is considered, and a Lagrange-multiplier-based approximation algorithm with polynomial time complexity is proposed. Experimental results show that the proposed algorithms achieve better performance.
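To make the Lagrange-multiplier idea concrete, the following Python sketch assigns per-content timers by pricing the capacity constraint with a multiplier and bisecting on it. The concave utility u_i(h) = w_i log(1 + h), the hit-probability model h_i = 1 - exp(-lam_i * T_i) for Poisson requests, and the function and parameter names are all assumptions for illustration; the paper's exact formulation may differ.

```python
# Hypothetical sketch of a Lagrange-multiplier based timer assignment.
# Utility form, hit-probability model, and tolerances are illustrative
# assumptions, not the paper's actual algorithm.
import math

def timer_assignment(w, lam, capacity, tol=1e-6):
    """Pick per-content timers maximizing sum_i w[i]*log(1 + h_i)
    subject to sum_i h_i <= capacity, where h_i is the hit (occupancy)
    probability induced by timer T_i under Poisson requests of rate lam[i]."""
    def hits(mu):
        # per-content optimum of w*log(1 + h) - mu*h over h in [0, 1]
        return [min(1.0, max(0.0, wi / mu - 1.0)) for wi in w]

    lo, hi = 1e-9, max(w) + 1.0   # bracket for the multiplier
    while hi - lo > tol:
        mu = (lo + hi) / 2.0
        if sum(hits(mu)) > capacity:
            lo = mu               # constraint violated: raise the price
        else:
            hi = mu
    h = hits(hi)
    # invert h = 1 - exp(-lam*T) to recover the timer for each content
    timers = [(-math.log(1.0 - hv) / lv) if hv < 1.0 else float("inf")
              for hv, lv in zip(h, lam)]
    return timers, h

# Example usage with made-up weights, request rates, and capacity
timers, hit_prob = timer_assignment(w=[3.0, 2.0, 1.0], lam=[5.0, 1.0, 0.2], capacity=1.5)
print(timers, hit_prob)
```

The bisection raises the multiplier whenever the implied occupancy exceeds the cache capacity and lowers it otherwise, which is one standard way to search for a feasible multiplier when the relaxed per-content subproblems are concave.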