Earthen ruins hold rich historical value, yet their survival is threatened by wind, temperature, and other environmental factors. Time series prediction can provide additional information for their protection. This work addresses two challenges: (1) The ruin stands in an open environment, which produces complex nonlinear temporal patterns. Moreover, wind speed is usually monitored at an observation height of 10 meters to reduce the influence of terrain; to capture the wind speed around the ruin, however, we had to set the observation height to 4.5 meters, which yields a non-periodic and oscillating wind speed pattern. (2) The ruin lies in an arid, uninhabited region of northwest China, which accelerates equipment aging and makes maintenance difficult. This significantly amplifies the device error rate, leading to duplicated, missing, and outlying records in the datasets. To address these challenges, we designed a complete preprocessing pipeline and a Transformer-based multi-channel patch model. Experimental results on four datasets that we collected show that our model outperforms competing methods. The ruins climate prediction model can predict abnormal environmental states around the ruins in a timely and effective manner, providing data support for conservation decisions and for exploring the relationship between environmental conditions and the state of preservation of earthen ruins.
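The abstract does not detail the preprocessing pipeline, so the following is only a minimal sketch of how duplicated, missing, and outlying sensor readings might be cleaned before model training; the column name, sampling interval, and thresholds are illustrative assumptions, not values from the paper.

```python
# Illustrative preprocessing sketch, not the authors' pipeline: deduplicate
# timestamps, interpolate short gaps, and clip outliers in a sensor series.
import pandas as pd

def preprocess(df: pd.DataFrame, value_col: str = "wind_speed") -> pd.DataFrame:
    # Assumes a DatetimeIndex; drop duplicated timestamps, keep the first reading.
    df = df[~df.index.duplicated(keep="first")].sort_index()
    # Resample to a regular 10-minute grid and fill short gaps by interpolation.
    df = df.resample("10min").mean()
    df[value_col] = df[value_col].interpolate(limit=6)
    # Clip values that fall outside 3 rolling standard deviations as outliers.
    rolling = df[value_col].rolling(window=144, min_periods=12)
    lo, hi = rolling.mean() - 3 * rolling.std(), rolling.mean() + 3 * rolling.std()
    df[value_col] = df[value_col].clip(lower=lo, upper=hi)
    return df
```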
The influence of non-independent and identically distributed (non-IID) data on federated learning (FL) has been a serious concern. Clustered federated learning (CFL) is an emerging approach for reducing the impact of non-IID data; it clusters clients according to similarity computed from relevant metrics. Unfortunately, existing CFL methods pursue accuracy improvements alone and ignore the convergence rate. Additionally, the designed client selection strategy affects the clustering results. Finally, traditional semi-supervised learning changes the distribution of data on clients, resulting in higher local costs and undesirable performance. In this paper, we propose a novel CFL method named ASCFL, which selects the clients that participate in training and can dynamically adjust the balance between accuracy and convergence speed on datasets consisting of labeled and unlabeled data. To deal with unlabeled data, a label prediction strategy predicts labels using encoders. The client selection strategy improves accuracy and reduces overhead by selecting clients with higher losses to participate in the current round. Moreover, the similarity-based clustering strategy uses a new indicator to measure the similarity between clients. Experimental results show that ASCFL offers advantages in model accuracy and convergence speed over three state-of-the-art methods on two popular datasets.
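As a concrete illustration of loss-based client selection, the sketch below ranks clients by their most recent local loss and keeps the top-k for the current round; this is a simplified stand-in, not the exact ASCFL procedure, and the client names and loss values are hypothetical.

```python
# Simplified sketch of loss-based client selection (not the exact ASCFL method):
# clients reporting the highest local losses join the current training round.
def select_clients(client_losses: dict[str, float], num_selected: int) -> list[str]:
    ranked = sorted(client_losses, key=client_losses.get, reverse=True)
    return ranked[:num_selected]

# Hypothetical example: the three clients with the largest losses are chosen.
losses = {"c1": 0.42, "c2": 1.31, "c3": 0.77, "c4": 0.15, "c5": 0.98}
print(select_clients(losses, 3))  # ['c2', 'c5', 'c3']
```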
A trusted execution environment (TEE) is a system-on-chip and CPU security solution widely available on today's Arm application (APP) processors, which dominate the smartphone market. Generally, a mobile APP creates a trusted application (TA) in the TEE to process sensitive information, such as payments or message encryption, transparently to the APPs running in the rich execution environment (REE). The REE and TEE interact through the interface provided by the TA, which eventually returns the results to the APP in the REE; such operations inevitably increase the overhead of mobile APPs. In this paper, we first present a comprehensive analysis of the performance of text encryption in an open-source TEE. We then propose an energy-efficient task scheduling strategy (ETS-TEE). Leveraging a deep learning algorithm, our policy considers the complexity of TA tasks, which are dynamically scheduled between execution on the local device and offloading to an edge server. We evaluate our approach with a Raspberry Pi 3B as the local mobile device and a Jetson TX2 as the edge server. The results show that, compared with the default scheduling strategy on the local device, our approach achieves an average of 38.0
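The abstract does not specify the scheduling policy itself, so the following is a hedged sketch of the local-versus-edge decision it implies: a predictor (replaced here by fixed numbers) estimates the energy cost of each option and the cheaper one is chosen. The class and field names are hypothetical, not part of ETS-TEE.

```python
# Hedged sketch of an offload decision, not the actual ETS-TEE policy: run a TA
# task locally or on the edge server, whichever has the lower predicted energy.
from dataclasses import dataclass

@dataclass
class TaskEstimate:
    local_energy_j: float    # predicted energy (J) to run the TA on the device
    offload_energy_j: float  # predicted energy (J) to transfer and run remotely

def schedule(task: TaskEstimate) -> str:
    return "local" if task.local_energy_j <= task.offload_energy_j else "edge"

print(schedule(TaskEstimate(local_energy_j=2.4, offload_energy_j=1.1)))  # edge
```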