The proliferation of Large Language Models (LLMs) has catalyzed growth across numerous industries. It is therefore imperative to ensure the controlled and beneficial application of LLMs to specific domains and downstream tasks through transfer learning, while preserving their general capabilities. We propose a novel, on-device-efficient fine-tuning optimization algorithm for LLMs based on federated transfer learning. Specifically, from a micro perspective, we introduce the Fusion of low Rank Adaptation (FoRA) optimization algorithm, which enhances multi-dimensional feature aggregation through the addition of a small set of efficient parameters. From a meso perspective, we extend the FoRA algorithm to all linear layers within the Transformer architecture to improve downstream task performance. Finally, from a macro perspective and with a focus on the medical domain, we incorporate quantization techniques into the federated learning framework to achieve on-device efficient fine-tuning, thereby offering dual protection for both data and model integrity. Our results indicate that, compared to existing state-of-the-art methods, our algorithm significantly improves LLM performance while ensuring dual privacy protection of both data and models.
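The abstract does not specify the FoRA fusion rule, but the underlying idea of low-rank adaptation applied to a frozen linear layer can be sketched as follows. This is a minimal illustration, assuming a standard LoRA-style reparameterization (W + BA) and a hypothetical "fusion" that sums two low-rank branches; the names and the fusion rule are assumptions, not the authors' actual method.

```python
import numpy as np

def lora_update(W, A, B, scale=1.0):
    """Standard LoRA-style reparameterization: W' = W + scale * (B @ A)."""
    return W + scale * (B @ A)

d_out, d_in, r = 8, 8, 2  # toy dimensions; r is the low rank
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# Two low-rank adapter branches (each adds only r*(d_in + d_out) parameters).
A1, B1 = rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r))
A2, B2 = rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r))

# Hypothetical "fusion": apply both low-rank updates to the same weight.
W_fused = lora_update(lora_update(W, A1, B1), A2, B2)
assert W_fused.shape == W.shape  # the effective weight keeps its shape
```

Because the updates are additive, fusing two branches is equivalent to adding both low-rank products to the frozen weight, which is why only the small A/B matrices need to be trained or communicated in a federated setting.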


Differential Privacy (DP) stands as a secure and efficient mechanism for privacy preservation, offering strong data utility without excessive computational complexity. Its adaptability is evidenced by its integration into blockchain-based Internet of Things (IoT) contexts, including smart wearables, smart homes, etc. Nevertheless, a notable vulnerability surfaces in decentralized environments, where existing DP mechanisms falter in withstanding collusion attacks. This vulnerability stems from the absence of an efficient strategy to synchronize privacy budget consumption and historical query information among all network participants. Adversaries can exploit this weakness by collaborating to inject a substantial volume of simultaneous queries into disparate blockchain nodes to extract more precise results. To address this issue, we propose a novel dual-response DP mechanism to preserve privacy in blockchain-based IoT scenarios. It encompasses both direct and indirect response strategies, enabling an adaptive response to external queries and providing better data utility while preserving privacy. Additionally, the mechanism synchronizes historical query information and privacy budget consumption within the blockchain network to prevent privacy leakage. We employ Relative Error (RE), Mean Square Error (MSE), and privacy budget consumption as evaluation metrics to measure the performance of the proposed mechanism. Experimental results show that the proposed mechanism adapts well to blockchain networks, confirming its capacity to preserve privacy while delivering strong data utility.
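The core building blocks named in the abstract, a noise-adding DP mechanism, tracked privacy budget consumption, and the RE/MSE evaluation metrics, can be sketched as below. This is a minimal illustration using the standard Laplace mechanism; the class and function names are assumptions, and the dual-response strategy itself is not reproduced here since the abstract does not specify it.

```python
import numpy as np

class LaplaceBudgetedMechanism:
    """Laplace DP mechanism that tracks cumulative privacy budget spend.

    Refusing queries once the total budget is exhausted is the basic
    defense against adversaries issuing many queries to average out noise.
    """

    def __init__(self, total_epsilon):
        self.total_epsilon = total_epsilon
        self.spent = 0.0  # privacy budget consumed so far

    def query(self, true_value, sensitivity, epsilon, rng):
        if self.spent + epsilon > self.total_epsilon:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        # Laplace noise with scale = sensitivity / epsilon gives epsilon-DP.
        return true_value + rng.laplace(0.0, sensitivity / epsilon)

def relative_error(true_value, noisy_value):
    """RE: absolute error normalized by the true value's magnitude."""
    return abs(noisy_value - true_value) / max(abs(true_value), 1e-12)

def mse(true_values, noisy_values):
    """MSE: mean squared deviation of noisy answers from true answers."""
    diffs = np.asarray(noisy_values) - np.asarray(true_values)
    return float(np.mean(diffs ** 2))
```

In a decentralized setting, the `spent` counter would be the state that must be synchronized across nodes; without that synchronization, colluding adversaries can spend the budget separately at each node, which is the collusion weakness the abstract describes.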