The proliferation of Large Language Models (LLMs) has catalyzed growth across many industries. It is therefore imperative to adapt LLMs to specific domains for downstream tasks through transfer learning in a controlled and beneficial manner, while preserving their general capabilities. We propose a novel, efficient on-device fine-tuning optimization algorithm for LLMs based on federated transfer learning. Specifically, from a micro perspective, we introduce the Fusion of low Rank Adaptation (FoRA) optimization algorithm, which enhances multi-dimensional feature aggregation by adding a small number of efficient parameters. From a meso perspective, we extend FoRA to all linear layers within the Transformer architecture to improve downstream task performance. Finally, from a macro perspective, and focusing on the medical domain, we incorporate quantization techniques into the federated learning framework to achieve efficient on-device fine-tuning, thereby offering dual protection for data and model integrity. Our results indicate that, compared with existing state-of-the-art methods, our algorithm significantly improves LLM performance while ensuring dual privacy protection of both data and models.
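The abstract does not spell out the FoRA fusion itself, but it builds on the standard low-rank adaptation idea of augmenting a frozen pretrained weight W with a trainable update (alpha/r)·BA. The sketch below is a minimal, generic LoRA-style linear layer of the kind that would be attached to every linear layer of a Transformer; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_features, r))        # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base path uses the frozen weight; the adapter path adds the low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```

Similarly, the abstract states that quantization is folded into the federated learning framework but not which scheme is used. A common choice is to quantize each client's adapter update before upload; the following is a sketch of plain symmetric uniform quantization, with the function names `quantize_update` and `dequantize_update` being assumptions for illustration.

```python
import numpy as np

def quantize_update(delta: np.ndarray, num_bits: int = 8):
    """Symmetric uniform quantization of a client weight update before federated upload."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(delta).max() / qmax + 1e-12  # guard against an all-zero update
    q = np.clip(np.round(delta / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_update(q: np.ndarray, scale: float) -> np.ndarray:
    """Server-side reconstruction of the (approximate) update for aggregation."""
    return q.astype(np.float32) * scale
```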


The popularization of intelligent healthcare devices and big data analytics has significantly boosted the development of Smart Healthcare Networks (SHNs). To enhance diagnostic precision, participants in SHNs share health data that contain sensitive information, so the data exchange process raises privacy concerns, especially when the integration of health data from multiple sources (a linkage attack) causes further leakage. Linkage attacks are a dominant class of attacks in the privacy domain, leveraging multiple data sources for private data mining. Furthermore, adversaries can launch poisoning attacks to falsify health data, leading to misdiagnosis or even physical harm. To protect private health data, we propose a personalized differential privacy model based on trust levels among users. Trust is evaluated by a defined community density, and the corresponding privacy protection level is mapped to controllable randomized noise constrained by differential privacy. To resist linkage attacks under personalized differential privacy, we design a noise correlation decoupling mechanism using a Markov stochastic process. In addition, we build the community model on a blockchain, which mitigates the risk of poisoning attacks during differentially private data transmission over SHNs. Extensive experiments and analysis on real-world datasets validate the proposed model, which achieves better privacy protection and effectiveness than existing approaches.
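The abstract only states that a trust level (derived from community density) is mapped to a controllable noise level under differential privacy. A minimal sketch of that idea, assuming a standard Laplace mechanism and a simple linear trust-to-budget mapping, is shown below; the functions `trust_to_epsilon` and `personalized_laplace` and the bounds `eps_min`/`eps_max` are hypothetical, not the paper's actual mechanism.

```python
import numpy as np

def trust_to_epsilon(trust: float, eps_min: float = 0.1, eps_max: float = 2.0) -> float:
    """Hypothetical monotone mapping: higher trust -> larger epsilon -> less noise.

    trust is assumed to be normalized to [0, 1].
    """
    return eps_min + trust * (eps_max - eps_min)

def personalized_laplace(value: float, sensitivity: float, trust: float) -> float:
    """Release a value under a per-user privacy budget derived from that user's trust."""
    epsilon = trust_to_epsilon(trust)
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
```

Under the Laplace mechanism, noise with scale sensitivity/epsilon satisfies epsilon-differential privacy, so a lower-trust recipient (smaller epsilon) receives noisier, and therefore better-protected, data.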