In the densification of Device-to-Device (D2D)-enabled Social Internet of Things (SIoT) networks, improper resource allocation can lead to high interference, increased signaling overhead and latency, and degraded Channel State Information (CSI). In this paper, we formulate the problem of sum throughput maximization as a Mixed Integer Non-Linear Programming (MINLP) problem. The problem is solved in two stages: a tripartite graph-based resource allocation stage and a time-scale optimization stage. The proposed approach prioritizes Quality of Service (QoS) and allocates resources to minimize power consumption while maximizing sum throughput. Simulation results demonstrate the superiority of the proposed algorithm over standard benchmark schemes. Validation of the proposed algorithm using performance parameters such as sum throughput shows improvements ranging from 17% to 93%. Additionally, the average time to deliver resources to CSI users is reduced by 60.83% through optimal power usage. This approach ensures QoS requirements are met, reduces system signaling overhead, and significantly increases D2D sum throughput compared to state-of-the-art schemes. The proposed methodology may be well-suited to address the challenges of SIoT applications, such as home automation and higher education systems.
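The tripartite graph-based stage can be illustrated with a minimal greedy sketch over (cellular user, D2D pair, channel) triples. The identifiers and the utility function below are illustrative stand-ins, assuming weights precomputed from a throughput model; the paper's actual SINR-based formulation is not reproduced here.

```python
import itertools

def greedy_tripartite_matching(cellular_users, d2d_pairs, channels, utility):
    """Greedily pick disjoint (cellular user, D2D pair, channel) triples
    in decreasing order of estimated utility (e.g., sum throughput)."""
    triples = sorted(itertools.product(cellular_users, d2d_pairs, channels),
                     key=lambda t: utility(*t), reverse=True)
    used_c, used_d, used_ch = set(), set(), set()
    assignment = []
    for c, d, ch in triples:
        # each user, pair, and channel may appear in at most one triple
        if c not in used_c and d not in used_d and ch not in used_ch:
            assignment.append((c, d, ch))
            used_c.add(c); used_d.add(d); used_ch.add(ch)
    return assignment

# toy utility: purely illustrative, not a real throughput model
toy_utility = lambda c, d, ch: (c + 1) * (d + 1) * (ch + 1)
match = greedy_tripartite_matching([0, 1], [0, 1], [0, 1], toy_utility)
```

A greedy matching like this is a common low-complexity surrogate for the exact assignment subproblem inside an MINLP decomposition; the paper's stage may use a different matching rule.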


The integration of network slicing into a Device-to-Device (D2D) network is a promising technological approach for efficiently accommodating Enhanced Mobile Broadband (eMBB) and Ultra Reliable Low Latency Communication (URLLC) services. In this work, we aim to optimize energy efficiency and resource allocation in a D2D underlay cellular network by jointly optimizing beamforming and Resource Sharing Unit (RSU) selection. The resulting optimization problem is a Mixed-Integer Nonlinear Program (MINLP). To solve it effectively, we combine the Dinkelbach method, the iterative weighted ℓ1-norm technique, and Difference of Convex (DC) programming. To simplify the solution, we merge these methods into a two-step process using Semi-Definite Relaxation (SDR) and Successive Convex Approximation (SCA). Network slicing integration and short-packet transmission optimization are the proposed strategies to enhance spectral efficiency and satisfy applications with low-latency and high-data-rate requirements. Simulation results validate that the proposed method outperforms the benchmark schemes, demonstrating throughput gains of 11.79% to 28.67% for URLLC users and 13.67% to 35.89% for eMBB users.
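The Dinkelbach step can be sketched in isolation: it turns an energy-efficiency ratio maximization max f(x)/g(x) into a sequence of parametric subproblems max f(x) − λ·g(x), updating λ to the achieved ratio until the parametric optimum reaches zero. The toy rate and power models below are assumptions for demonstration only, not the paper's beamforming formulation.

```python
import math

def dinkelbach(f, g, solve_inner, tol=1e-6, max_iter=50):
    """Maximize f(x)/g(x): repeatedly solve x = argmax f(x) - lam*g(x),
    then set lam = f(x)/g(x), stopping when f(x) - lam*g(x) ~ 0."""
    lam, x = 0.0, None
    for _ in range(max_iter):
        x = solve_inner(lam)            # parametric subproblem
        if f(x) - lam * g(x) < tol:     # F(lam) ~ 0 => lam is the optimal ratio
            break
        lam = f(x) / g(x)
    return x, lam

# toy energy-efficiency example over a discrete power grid (illustrative numbers)
levels = [0.1 * i for i in range(1, 101)]   # candidate transmit powers
rate = lambda p: math.log2(1 + 10 * p)      # assumed effective channel gain of 10
power = lambda p: p + 0.5                   # assumed fixed circuit power of 0.5
solve = lambda lam: max(levels, key=lambda p: rate(p) - lam * power(p))
p_star, ee = dinkelbach(rate, power, solve)
```

Because the inner argmax here is exact over a finite set, the iteration terminates in a handful of steps with `ee` equal to the best rate-per-watt ratio on the grid; in the paper, the inner problem is instead handled via SDR/SCA.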

The existing literature on device-to-device (D2D) architecture suffers from a dearth of analysis under imperfect channel conditions, and rigorous analyses of policy improvement and network-performance evaluation are needed. Accordingly, a two-stage transmit power control approach (named QSPCA) is proposed: first, a Q-learning-based reinforcement power control technique; second, a supervised-learning-based support vector machine (SVM) model. This approach replaces the unified communication model of the conventional D2D setup with a distributed one, thereby requiring fewer resources and improving metrics such as D2D throughput, transmit power, and signal-to-interference-plus-noise ratio compared to existing algorithms. Results confirm that the QSPCA technique outperforms the standalone SVM and Q-learning techniques in throughput by at least 15.31% and 19.5%, respectively. The customizability of the QSPCA technique opens up multiple avenues for industrial communication technologies in 5G networks, such as factory automation.
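The Q-learning stage can be sketched as a tabular agent choosing among discrete transmit-power levels. The state and reward model, power levels, and hyperparameters below are illustrative assumptions; QSPCA's actual environment, and its supervised SVM stage, are not reproduced here.

```python
import math
import random

def q_learning_power_control(reward, n_states, powers, episodes=4000,
                             alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over discrete transmit-power actions.
    `reward(state, power)` returns (immediate reward, next state)."""
    rng = random.Random(seed)
    Q = [[0.0] * len(powers) for _ in range(n_states)]
    state = 0
    for _ in range(episodes):
        if rng.random() < eps:                                    # explore
            a = rng.randrange(len(powers))
        else:                                                     # exploit
            a = max(range(len(powers)), key=lambda i: Q[state][i])
        r, nxt = reward(state, powers[a])
        # standard Q-learning temporal-difference update
        Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt
    return Q

# toy environment: reward = achievable rate minus a transmit-power cost, with
# two interference states visited alternately (all constants are assumptions)
powers = [0.5, 1.0, 2.0, 4.0]
def toy_reward(state, p):
    rate = math.log2(1 + 10 * p / (1 + state))
    return rate - 0.7 * p, (state + 1) % 2

Q = q_learning_power_control(toy_reward, 2, powers)
```

In each state the learned greedy policy picks the power level that balances rate against transmit cost, which is the trade-off the first QSPCA stage targets.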