Open Access

Layered Temporal Spatial Graph Attention Reinforcement Learning for Multiplex Networked Industrial Chains Energy Management

School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
School of Software, Southeast University, Nanjing 211189, China
School of Cyber Science and Engineering, Southeast University, Nanjing 211189, China


Abstract

Demand response has recently become an essential means for businesses to reduce production costs in industrial chains. Meanwhile, industrial chain structures have grown increasingly complex, forming multiplex networked industrial chains. Fluctuations in real-time electricity prices under demand response propagate through the coupling and cascading relationships within and among these network layers, raising the overall energy management cost. However, existing reinforcement-learning-based demand response methods typically focus only on individual agents, without considering the influence of dynamic factors on intra- and inter-network relationships. To address this issue, this paper proposes a Layered Temporal Spatial Graph Attention (LTSGA) reinforcement learning algorithm for demand response in multiplex networked industrial chains. The algorithm first uses Long Short-Term Memory (LSTM) to learn the dynamic temporal characteristics of electricity prices for decision-making. Then, LTSGA incorporates a layered spatial graph attention model to evaluate the impact of dynamic factors on the complex multiplex networked industrial chain structure. Experiments demonstrate that LTSGA effectively characterizes the influence of dynamic factors on intra- and inter-network relationships within the multiplex industrial chain, improving convergence speed and algorithm performance compared with existing state-of-the-art algorithms.
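The layered attention idea in the abstract — attend over neighbors within each network layer, then across layers — can be illustrated with a minimal sketch. This is not the paper's implementation: scalar node features, a shared scalar scoring weight, and the names `graph_attention`, `layered_attention`, `w_intra`, and `w_inter` are all illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def graph_attention(h, adj, w):
    """One attention pass: each node aggregates its neighbors' features,
    weighted by softmax-normalized pairwise scores.

    h   -- scalar feature per node
    adj -- adjacency lists (neighbor indices per node)
    w   -- shared scalar scoring weight (stand-in for learned parameters)
    """
    out = []
    for i, nbrs in enumerate(adj):
        if not nbrs:                       # isolated node keeps its feature
            out.append(h[i])
            continue
        scores = [w * (h[i] + h[j]) for j in nbrs]
        alpha = softmax(scores)            # attention coefficients sum to 1
        out.append(sum(a * h[j] for a, j in zip(alpha, nbrs)))
    return out

def layered_attention(h, intra_adj, inter_adj, w_intra, w_inter):
    """Two-stage pass mirroring the layered structure: first attend within
    each network layer (intra), then across coupled layers (inter)."""
    h_intra = graph_attention(h, intra_adj, w_intra)
    return graph_attention(h_intra, inter_adj, w_inter)
```

Because the attention coefficients form a convex combination, each aggregated feature stays within the range of the neighbor features, which is what lets the layered pass propagate price-fluctuation effects through intra- and inter-network couplings without blowing up.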

Tsinghua Science and Technology
Pages 528-542
Cite this article:
Jiang Y, Di K, Wu X, et al. Layered Temporal Spatial Graph Attention Reinforcement Learning for Multiplex Networked Industrial Chains Energy Management. Tsinghua Science and Technology, 2025, 30(2): 528-542. https://doi.org/10.26599/TST.2023.9010111


Received: 24 July 2023
Revised: 24 August 2023
Accepted: 16 September 2023
Published: 09 December 2024
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
