Demand response has recently become an essential means for businesses to reduce production costs in industrial chains. At the same time, industrial chain structures have grown increasingly complex, taking on the characteristics of multiplex networked industrial chains. Fluctuations in real-time electricity prices under demand response propagate through the coupling and cascading relationships within and among these network layers, adversely affecting the overall energy management cost. However, existing reinforcement-learning-based demand response methods typically focus only on individual agents and ignore the influence of dynamic factors on intra- and inter-network relationships. To address this issue, this paper proposes a Layered Temporal Spatial Graph Attention (LTSGA) reinforcement learning algorithm suited to demand response in multiplex networked industrial chains. The algorithm first uses Long Short-Term Memory (LSTM) to learn the dynamic temporal characteristics of electricity prices for decision-making. LTSGA then incorporates a layered spatial graph attention model to evaluate the impact of dynamic factors on the complex multiplex networked industrial chain structure. Experiments demonstrate that LTSGA effectively characterizes the influence of dynamic factors on intra- and inter-network relationships within the multiplex industrial chain, improving convergence speed and algorithm performance compared with existing state-of-the-art algorithms.
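The layered attention described above can be illustrated with a minimal sketch: attention coefficients are computed over a node's neighbours within each layer of the multiplex network, the weighted neighbour features are aggregated per layer, and a second attention pass combines the per-layer summaries. This is a hypothetical illustration, not the paper's implementation; the dot-product scoring, the softmax normalization, and the function names (`attention_aggregate`, `layered_attention`) are all assumptions made for clarity.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    """Dot product of two equal-length feature vectors."""
    return sum(a * b for a, b in zip(u, v))

def attention_aggregate(target, neighbours):
    """Aggregate neighbour features, weighted by attention
    coefficients (softmax of dot-product scores with target)."""
    if not neighbours:
        return list(target)
    weights = softmax([dot(target, n) for n in neighbours])
    dim = len(target)
    return [sum(w * n[i] for w, n in zip(weights, neighbours))
            for i in range(dim)]

def layered_attention(node_feat, intra_neighbours_per_layer):
    """Two-stage aggregation for a multiplex network:
    intra-layer attention first, producing one summary per layer,
    then inter-layer attention over those summaries."""
    layer_summaries = [attention_aggregate(node_feat, nbrs)
                       for nbrs in intra_neighbours_per_layer]
    return attention_aggregate(node_feat, layer_summaries)

# Example: one node with two network layers
# (two neighbours in layer 1, one in layer 2).
node = [1.0, 0.0]
layers = [[[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.5]]]
embedding = layered_attention(node, layers)
```

In a full reinforcement learning setting, the resulting embedding would feed each agent's policy alongside the LSTM's price-trend features, so that decisions reflect both temporal price dynamics and the node's position in the multiplex structure.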