CaltechAUTHORS
  A Caltech Library Service

Reinforcement Learning for Selective Key Applications in Power Systems: Recent Advances and Future Challenges

Chen, Xin and Qu, Guannan and Tang, Yujie and Low, Steven and Li, Na (2022) Reinforcement Learning for Selective Key Applications in Power Systems: Recent Advances and Future Challenges. IEEE Transactions on Smart Grid, 13 (4). pp. 2935-2958. ISSN 1949-3053. doi:10.1109/tsg.2022.3154718. https://resolver.caltech.edu/CaltechAUTHORS:20220307-188369000

PDF - Accepted Version (3MB). See Usage Policy.
PDF (arXiv) - Accepted Version (3MB). Creative Commons Attribution Non-commercial No Derivatives.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220307-188369000

Abstract

With large-scale integration of renewable generation and distributed energy resources, modern power systems are confronted with new operational challenges, such as growing complexity, increasing uncertainty, and aggravating volatility. Meanwhile, more and more data are becoming available owing to the widespread deployment of smart meters, smart sensors, and upgraded communication networks. As a result, data-driven control techniques, especially reinforcement learning (RL), have attracted surging attention in recent years. This paper provides a comprehensive review of various RL techniques and how they can be applied to decision-making and control in power systems. In particular, we select three key applications, i.e., frequency regulation, voltage control, and energy management, as examples to illustrate RL-based models and solutions. We then present the critical issues in the application of RL, i.e., safety, robustness, scalability, and data. Several potential future directions are discussed as well.


Item Type:Article
Related URLs:
URL | URL Type | Description
https://doi.org/10.1109/TSG.2022.3154718 | DOI | Article
https://arxiv.org/abs/2102.01168 | arXiv | Discussion Paper
ORCID:
Author | ORCID
Chen, Xin | 0000-0002-0952-0008
Qu, Guannan | 0000-0002-5466-3550
Tang, Yujie | 0000-0002-4921-8372
Low, Steven | 0000-0001-6476-3048
Li, Na | 0000-0001-9545-3050
Additional Information:© 2022 IEEE. Manuscript received February 9, 2021; revised July 27, 2021 and November 17, 2021; accepted February 18, 2022. Date of publication February 25, 2022; date of current version June 21, 2022. This work was supported in part by NSF CAREER under Grant ECCS-1553407; in part by the NSF AI Institute under Grant 2112085; in part by NSF under Grant ECCS-1931662, Grant AitF-1637598, and Grant CNS-1518941; in part by Cyber-Physical Systems (CPS) under Grant ECCS-1932611; in part by the Resnick Sustainability Institute; in part by a PIMCO Fellowship; in part by an Amazon AI4Science Fellowship; and in part by the Caltech Center for Autonomous Systems and Technologies (CAST). Paper no. TSG-00195-2021.
Group:Center for Autonomous Systems and Technologies (CAST), Resnick Sustainability Institute
Funders:
Funding Agency | Grant Number
NSF | ECCS-1553407
NSF | CBET-2112085
NSF | ECCS-1931662
NSF | CCF-1637598
NSF | CNS-1518941
NSF | ECCS-1932611
Resnick Sustainability Institute | UNSPECIFIED
PIMCO | UNSPECIFIED
Amazon AI4Science Fellowship | UNSPECIFIED
Center for Autonomous Systems and Technologies | UNSPECIFIED
Subject Keywords:Frequency regulation, voltage control, energy management, reinforcement learning, smart grid
Issue or Number:4
DOI:10.1109/tsg.2022.3154718
Record Number:CaltechAUTHORS:20220307-188369000
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:20220307-188369000
Official Citation:X. Chen, G. Qu, Y. Tang, S. Low and N. Li, "Reinforcement Learning for Selective Key Applications in Power Systems: Recent Advances and Future Challenges," in IEEE Transactions on Smart Grid, vol. 13, no. 4, pp. 2935-2958, July 2022, doi: 10.1109/TSG.2022.3154718.
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:113762
Collection:CaltechAUTHORS
Deposited By: George Porter
Deposited On:08 Mar 2022 15:05
Last Modified:12 Jul 2022 18:13
