Mobile Edge Computing (MEC) is a pivotal technology that provides agile-response services by deploying computation and storage resources in proximity to end-users. However, resource-constrained edge servers are highly vulnerable to Denial-of-Service (DoS) attacks. Failure to mitigate DoS attacks effectively hinders the delivery of reliable and sustainable edge services. Conventional DoS mitigation solutions designed for cloud computing environments are not directly applicable in MEC environments because they do not account for the unique characteristics of MEC, e.g., the constrained resources on edge servers and the requirement for low service latency. Existing solutions mitigate edge DoS attacks by transferring user requests from edge servers under attack to others for processing. However, the heterogeneity of end-users' resource demands can cause resource fragmentation on edge servers and undermine the ability of these solutions to mitigate DoS attacks effectively: user requests often have to be transferred far away for processing, which increases service latency. To tackle this challenge, this paper presents a fragmentation-aware gaming approach called HEDMGame that attempts to minimize service latency by matching user requests to edge servers' remaining resources when making request-transferring decisions. Through theoretical analysis and experimental evaluation, we validate the effectiveness and efficiency of HEDMGame and demonstrate its superiority over the state-of-the-art solution.
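The fragmentation-aware matching idea can be illustrated with a minimal sketch: transfer each request to the feasible server that leaves the least leftover capacity (best fit), breaking ties by lower latency, so that fragments of capacity are less likely to strand later requests. All names here are illustrative; this greedy heuristic is a simplified stand-in, not HEDMGame's actual game-theoretic decision procedure.

```python
def assign_requests(requests, servers):
    """Greedy fragmentation-aware request transfer (hypothetical sketch).

    requests: rid -> (resource demand, {server id -> latency})
    servers:  server id -> remaining capacity (mutated in place)
    Picks, for each request, the feasible server minimizing
    (leftover capacity, latency).
    """
    assignment = {}
    for rid, (demand, latency) in requests.items():
        best_key, best_sid = None, None
        for sid, cap in servers.items():
            if cap >= demand:  # feasibility: server can absorb the demand
                key = (cap - demand, latency[sid])  # fragmentation first, then latency
                if best_key is None or key < best_key:
                    best_key, best_sid = key, sid
        if best_sid is not None:
            servers[best_sid] -= demand  # consume the matched resources
            assignment[rid] = best_sid
    return assignment
```

A best-fit rule like this keeps large contiguous capacity available, which is exactly what heterogeneous demands would otherwise fragment.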


Recent years have seen growing demand for edge computing to realize the full potential of the Internet of Things (IoT), as various IoT systems generate big data to support modern latency-sensitive applications. Network Dismantling (ND), a fundamental problem, aims to find an optimal set of nodes whose removal maximizes the connectivity degradation of a network. However, current approaches mainly focus on simple networks that model only pairwise interactions between two nodes, whereas higher-order groupwise interactions among an arbitrary number of nodes are ubiquitous in the real world and are better modeled as a hypernetwork. The structural difference between a simple network and a hypernetwork prevents the direct application of simple-network ND methods to hypernetworks. Although some hypernetwork centrality measures (e.g., betweenness) can be used for hypernetwork dismantling, they struggle to balance effectiveness and efficiency. Therefore, we propose a betweenness approximation-based hypernetwork dismantling method built on a Hypergraph Neural Network (HNN). The proposed approach, called "HND", trains a transferable HNN-based regression model on a large number of generated small-scale synthetic hypernetworks in a supervised way, and then uses the well-trained model to approximate the betweenness of nodes. Extensive experiments on five real-world hypernetworks demonstrate the effectiveness and efficiency of HND compared with various baselines.
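Once per-node betweenness scores are available (from the HNN or any other estimator), the dismantling step itself is a simple loop: repeatedly remove the highest-scoring node and measure how the largest connected component shrinks. The sketch below shows that loop on an ordinary adjacency-list graph; the node scores are passed in as a plain dict standing in for the HNN-approximated betweenness, and all identifiers are illustrative rather than HND's actual interface.

```python
from collections import deque

def largest_cc(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    seen, best = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:  # BFS over one component
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def dismantle(adj, score, k):
    """Remove the k highest-scoring nodes one at a time; `score` plays the
    role of the approximated betweenness. Returns the removed set and the
    largest-component size after each removal."""
    removed, sizes = set(), []
    for _ in range(k):
        target = max((n for n in adj if n not in removed),
                     key=lambda n: score[n])
        removed.add(target)
        sizes.append(largest_cc(adj, removed))
    return removed, sizes
```

On a path graph a-b-c-d-e with the middle node scored highest, one removal splits the network into two components of size two, which is the kind of connectivity degradation ND seeks to maximize.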

Recommendation systems play a crucial role in uncovering concealed interactions among users and items within online social networks. Recently, Graph Neural Network (GNN)-based recommendation systems have exploited higher-order interactions within the user-item interaction graph, demonstrating cutting-edge performance in recommendation tasks. However, GNN-based recommendation models are susceptible to different types of noise attacks, such as deliberate perturbations or false clicks. These attacks propagate through the graph and adversely affect the robustness of recommendation results. The conventional two-stage approach, which purifies the graph before training the GNN model, is suboptimal. To strengthen the model's resilience to noise, we propose Graph Structure Learning for Robust Recommendation (GSLRRec), a joint learning framework that integrates graph structure learning and GNN model training for recommendation. Specifically, GSLRRec treats the graph adjacency matrix as adjustable parameters, and simultaneously optimizes both the graph structure and the representations of user/item nodes for recommendation. During the joint training process, the graph structure learning employs low-rank and sparse constraints to effectively denoise the graph. Our experiments illustrate that the simultaneous learning of both structure and GNN parameters can provide more robust recommendation results under various noise levels.
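The low-rank and sparse constraints mentioned above are commonly realized as regularization terms added to the recommendation loss: the nuclear norm (sum of singular values) as a convex surrogate for low rank, plus an elementwise L1 norm for sparsity. The snippet below sketches such a penalty on a learnable adjacency matrix; the function name and weights are illustrative assumptions, not GSLRRec's published formulation.

```python
import numpy as np

def gsl_regularizer(A, alpha=0.1, beta=0.01):
    """Low-rank + sparsity penalty on a learnable adjacency matrix A.

    alpha * ||A||_* (nuclear norm, convex surrogate for rank)
    + beta * ||A||_1 (elementwise L1, encourages a sparse graph).
    This term would be added to the recommendation loss during
    joint training.
    """
    nuclear = np.linalg.svd(A, compute_uv=False).sum()  # sum of singular values
    sparse = np.abs(A).sum()                            # elementwise L1 norm
    return alpha * nuclear + beta * sparse
```

During joint training, gradients of this penalty flow into the adjacency parameters alongside the GNN's recommendation loss, pruning noisy edges while keeping the graph's dominant structure.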

Modern software development has moved toward agile growth and rapid delivery, where developers must meet the changing needs of users instantaneously. In such a situation, plug-and-play Third-Party Libraries (TPLs) offer developers considerable convenience. However, selecting the exact candidate that meets the project requirements from the countless TPLs is challenging for developers. Previous works have considered setting up a personalized recommender system to suggest TPLs for developers. Unfortunately, these approaches rarely consider the complex relationships between applications and TPLs, and are unsatisfactory in accuracy, training speed, and convergence speed. In this paper, we propose a new end-to-end recommendation model called Neighbor Library-Aware Graph Neural Network (NLA-GNN). Unlike previous works, we initialize only one type of node embedding, and construct and update all types of node representations using Graph Neural Networks (GNN). We use a simplified graph convolution operation to propagate information alternately between app and library nodes, which increases training efficiency and eliminates the heterogeneity of the app-library bipartite graph, thus efficiently modeling the complex high-order relationships between apps and libraries. Extensive experiments on large-scale real-world datasets demonstrate that NLA-GNN achieves consistent and remarkable improvements over state-of-the-art baselines for TPL recommendation tasks.
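A simplified graph convolution on a bipartite graph typically drops feature transforms and nonlinearities and just averages neighbor embeddings, alternating between the two node types (in the style of LightGCN). The sketch below uses scalar embeddings and illustrative names to show that alternating propagation; it is an assumption-laden simplification, not NLA-GNN's actual layer.

```python
def simple_propagate(app_libs, app_emb, lib_emb, layers=2):
    """Alternating mean aggregation over an app-library bipartite graph.

    app_libs: app id -> list of library ids it depends on
    app_emb / lib_emb: node id -> embedding (scalars here for brevity)
    Each layer replaces every node's embedding with the mean of its
    neighbors' current embeddings; no weight matrices, no nonlinearity.
    """
    # Build the reverse index: library -> apps that use it.
    lib_apps = {}
    for app, libs in app_libs.items():
        for lib in libs:
            lib_apps.setdefault(lib, []).append(app)
    for _ in range(layers):
        new_app = {a: sum(lib_emb[l] for l in libs) / len(libs)
                   for a, libs in app_libs.items()}
        new_lib = {l: sum(app_emb[a] for a in apps) / len(apps)
                   for l, apps in lib_apps.items()}
        app_emb, lib_emb = new_app, new_lib  # simultaneous update
    return app_emb, lib_emb
```

Because information flows app→library→app across layers, two layers already mix second-order neighbors, which is how such models capture high-order app-library relationships cheaply.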

Dynamic Quality of Service (QoS) prediction for services is currently a hot topic and a challenge for research in the fields of service recommendation and composition. Our paper addresses the problem with a Time-aWare service Quality Prediction method (named TWQP), a two-phase approach with one phase based on historical time slices and one on the current time slice. In the first phase, if the user invoked the service in a previous time slice, the QoS value for that user invoking the service in the next time slice is predicted from the historical QoS data; if the user did not invoke the service in any previous time slice, the Covering Algorithm (CA) is applied to predict the missing values. In the second phase, we predict the missing values for the current time slice according to the results of the previous phase. Extensive experiments on a real-world dataset, WS-Dream, show that, compared with classical QoS prediction algorithms, our proposed method greatly improves prediction accuracy.
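The first-phase branching can be sketched as follows: extrapolate from the user's own historical invocations when they exist, and otherwise fall back to a collaborative estimator (TWQP uses the Covering Algorithm; here a caller-supplied `fallback` stands in for it, and the simple mean stands in for the paper's historical extrapolation — both are assumptions for illustration).

```python
def predict_next(history, user, service, fallback):
    """Phase-1 sketch of a TWQP-like scheme.

    history: list of per-time-slice dicts mapping (user, service) -> QoS
    fallback: callable (user, service) -> QoS, standing in for the
              Covering Algorithm used when no history exists.
    """
    values = [slice_qos[(user, service)]
              for slice_qos in history
              if (user, service) in slice_qos]
    if values:
        # User invoked this service before: predict from its own history.
        return sum(values) / len(values)
    # No prior invocations: delegate to the collaborative estimator.
    return fallback(user, service)
```

Phase 2 would then treat these predictions as observations when filling the current time slice's missing values.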

In the era of big data, data-intensive applications have posed new challenges to the field of service composition. How to select the optimal composite service from thousands of services that are functionally equivalent but differ in Quality of Service (QoS) attributes has become a hot research topic in service computing. In this paper, we therefore propose a novel algorithm, MR-IDPSO (MapReduce-based Improved Discrete Particle Swarm Optimization), which combines an improved discrete Particle Swarm Optimization (PSO) with MapReduce to solve large-scale dynamic service composition. Experiments show that our algorithm outperforms the parallel genetic algorithm in terms of solution quality and is efficient for large-scale dynamic service composition. In addition, the experimental results demonstrate that the performance of MR-IDPSO improves further as the number of candidate services increases.
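In discrete PSO for service composition, a particle's position selects one candidate service per abstract task, and the continuous velocity update is replaced by a discrete move: each dimension is probabilistically copied from the personal best, the global best, or resampled at random. The toy below shows that single-machine core under assumed parameters; MR-IDPSO additionally partitions the swarm's fitness evaluation across MapReduce workers, which this sketch omits.

```python
import random

def discrete_pso(qos, particles=8, iters=30, seed=0):
    """Toy discrete PSO for QoS-aware service selection.

    qos: qos[t][c] = aggregate QoS score of candidate c for task t
         (higher is better). Returns (best position, best fitness).
    """
    rng = random.Random(seed)
    tasks = len(qos)

    def fitness(pos):
        # Additive QoS aggregation of the composite service.
        return sum(qos[t][pos[t]] for t in range(tasks))

    swarm = [[rng.randrange(len(qos[t])) for t in range(tasks)]
             for _ in range(particles)]
    pbest = [p[:] for p in swarm]       # personal best positions
    gbest = max(pbest, key=fitness)[:]  # global best position
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for t in range(tasks):
                r = rng.random()
                if r < 0.4:
                    p[t] = pbest[i][t]                  # pull toward personal best
                elif r < 0.8:
                    p[t] = gbest[t]                     # pull toward global best
                else:
                    p[t] = rng.randrange(len(qos[t]))   # random exploration
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) > fitness(gbest):
                    gbest = p[:]
    return gbest, fitness(gbest)
```

In the MapReduce variant, the inner fitness evaluations would be mapped over workers and the best positions reduced back into `gbest`, which is what makes the approach scale to thousands of candidates.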