Open Access

MPformer: A Transformer-Based Model for Earthen Ruins Climate Prediction

School of Information Technology, Northwest University, Xi’an 710127, China

Abstract

Earthen ruins carry rich historical value, but their survival is threatened by wind, temperature, and other environmental factors. Time series prediction of the site microclimate can provide additional information for their protection. This work faces two challenges: (1) The ruin stands in an open environment, which produces complex nonlinear temporal patterns. Moreover, standard wind speed monitoring places the sensor at an observation height of 10 m to reduce the influence of terrain, but to capture the wind field around the ruin we had to mount the sensor at 4.5 m, which yields a non-periodic, oscillating wind speed series. (2) The ruin lies in an arid, uninhabited region of northwest China, which accelerates equipment aging and complicates maintenance. The resulting device errors introduce duplicated, missing, and outlying records into the datasets. To address these challenges, we designed a complete preprocessing pipeline and a Transformer-based multi-channel patch model. Experimental results on four datasets that we collected show that our model outperforms the baselines. The resulting climate prediction model can detect abnormal environmental states around the ruins in a timely and effective manner, providing data support for conservation decisions and for exploring the relationship between environmental conditions and the preservation state of earthen ruins.
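The abstract does not spell out the preprocessing steps, but the three defect types it names (duplication, missing values, outliers) admit a standard treatment. Below is a minimal sketch of such a pipeline, not the authors' actual code; the function name, column names, sampling frequency, gap limit, and 3-sigma rolling threshold are all assumptions for illustration.

```python
import numpy as np
import pandas as pd

def clean_sensor_series(df: pd.DataFrame, value_cols, freq: str = "H", window: int = 24):
    """Hypothetical cleaning sketch for time-indexed sensor data:
    deduplicate timestamps, interpolate short gaps, damp outliers."""
    # 1. Drop duplicated timestamps, keeping the first reading.
    df = df[~df.index.duplicated(keep="first")].sort_index()

    # 2. Reindex to a regular sampling grid so gaps become explicit NaNs,
    #    then fill short gaps by linear interpolation in time.
    full_index = pd.date_range(df.index.min(), df.index.max(), freq=freq)
    df = df.reindex(full_index)
    df[value_cols] = df[value_cols].interpolate(method="time", limit=6)

    # 3. Mark points that stray far from a centered rolling mean as
    #    outliers, blank them, and re-interpolate.
    for col in value_cols:
        roll = df[col].rolling(window, center=True, min_periods=1)
        deviation = (df[col] - roll.mean()).abs()
        df.loc[deviation > 3 * roll.std(), col] = np.nan
    df[value_cols] = df[value_cols].interpolate(method="time", limit=6)
    return df
```

With an hourly DataFrame `raw` indexed by timestamp, `clean_sensor_series(raw, ["wind_speed", "temperature"])` would return a regularized series ready for windowing; both column names are hypothetical.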
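The exact architecture of the multi-channel patch model is likewise not given in the abstract. As a rough illustration of the patching idea common to patch-based time series Transformers, here is a minimal channel-independent patch embedding; the class name and all hyperparameters (patch_len, stride, d_model) are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Sketch: split each channel's series into overlapping patches
    and project every patch to one Transformer token."""
    def __init__(self, patch_len: int = 16, stride: int = 8, d_model: int = 128):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.proj = nn.Linear(patch_len, d_model)  # one token per patch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, seq_len)
        patches = x.unfold(-1, self.patch_len, self.stride)
        # patches: (batch, n_channels, n_patches, patch_len)
        return self.proj(patches)  # tokens for a per-channel Transformer encoder
```

For example, `PatchEmbedding()(torch.randn(32, 4, 96))` yields a tensor of shape (32, 4, 11, 128): 11 tokens per channel, each summarizing one 16-step patch.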

Tsinghua Science and Technology
Pages 1829-1838
Cite this article:
Xu G, Wang H, Ji S, et al. MPformer: A Transformer-Based Model for Earthen Ruins Climate Prediction. Tsinghua Science and Technology, 2024, 29(6): 1829-1838. https://doi.org/10.26599/TST.2024.9010035


Received: 13 September 2023
Revised: 07 December 2023
Accepted: 06 February 2024
Published: 03 May 2024
© The Author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
