
Learning-based Predictive Control via Real-time Aggregate Flexibility

Li, Tongxin and Sun, Bo and Chen, Yue and Ye, Zixin and Low, Steven H. and Wierman, Adam (2021) Learning-based Predictive Control via Real-time Aggregate Flexibility. IEEE Transactions on Smart Grid, 12 (6). pp. 4897-4913. ISSN 1949-3053. doi:10.1109/TSG.2021.3094719.

PDF - Accepted Version (see Usage Policy)
PDF - Submitted Version (Creative Commons Attribution)


Aggregators have emerged as crucial tools for the coordination of distributed, controllable loads. To be used effectively, an aggregator must be able to communicate the available flexibility of the loads it controls, known as the aggregate flexibility, to a system operator. However, most existing aggregate flexibility measures are slow-timescale estimations, and much less attention has been paid to real-time coordination between an aggregator and an operator. In this paper, we consider solving an online optimization in a closed-loop system and present a design of real-time aggregate flexibility feedback, termed the maximum entropy feedback (MEF). In addition to deriving analytic properties of the MEF, we show, combining learning and control, that it can be approximated using reinforcement learning and used as a penalty term in a novel control algorithm, the penalized predictive control (PPC), which modifies vanilla model predictive control (MPC). The benefits of our scheme are: (1) Efficient communication: an operator running PPC does not need to know the exact states and constraints of the loads, only the MEF. (2) Fast computation: the PPC often has far fewer variables than an MPC formulation. (3) Lower costs: we show that under certain regularity assumptions, the PPC is optimal. We illustrate the efficacy of the PPC using a dataset from an adaptive electric vehicle charging network and show that PPC outperforms classical MPC.
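The abstract describes the operator's side of PPC: instead of solving a full MPC over all load states and constraints, the operator minimizes its operating cost plus a penalty derived from the aggregator's flexibility signal. The following is a minimal illustrative sketch of that idea, not the paper's implementation; the function name `ppc_step`, the discretized candidate power levels, the log-penalty form, and the weight `beta` are all assumptions made for the example.

```python
import numpy as np

def ppc_step(price, mef_probs, power_levels, beta=1.0):
    """One step of a penalized-predictive-control-style dispatch (sketch).

    The operator never sees the loads' states or constraints; it only
    receives `mef_probs`, a probability-like flexibility signal assigning
    each candidate power level a weight for how much future flexibility
    it preserves. The operator minimizes cost plus a log-penalty that
    discourages inflexible choices.
    """
    mef_probs = np.asarray(mef_probs, dtype=float)
    power_levels = np.asarray(power_levels, dtype=float)
    # operating cost: price * power; penalty: -beta * log(MEF weight)
    objective = price * power_levels - beta * np.log(np.clip(mef_probs, 1e-12, None))
    return float(power_levels[np.argmin(objective)])
```

With a uniform flexibility signal the rule reduces to pure cost minimization; a signal concentrated on one power level forces the operator toward it even at higher cost, which is how the aggregator steers dispatch without disclosing individual load constraints.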

Item Type:Article
Related URLs: Paper
ORCID:
Li, Tongxin: 0000-0002-9806-8964
Sun, Bo: 0000-0003-3172-7811
Chen, Yue: 0000-0002-7594-7587
Low, Steven H.: 0000-0001-6476-3048
Alternate Title:Real-time Aggregate Flexibility via Reinforcement Learning
Additional Information:© 2021 IEEE. Manuscript received December 22, 2020; revised April 5, 2021 and June 1, 2021; accepted June 21, 2021. Date of publication July 5, 2021; date of current version October 21, 2021. The work of Tongxin Li and Steven H. Low was supported by the National Science Foundation (NSF) under Grant CCF 1637598, Grant ECCS 1931662, and Grant CPS ECCS 1932611. The work of Bo Sun was supported by the Hong Kong Research Grant Council (RGC) General Research Fund under Project 16207318. The work of Adam Wierman was supported in part by NSF under Grant AitF-1637598 and Grant CNS-1518941; in part by Amazon AWS; and in part by VMware. Paper no. TSG-01893-2020.
Funding Agency: Grant Number
Hong Kong Research Grant Council: 16207318
Amazon Web Services: UNSPECIFIED
Subject Keywords:Aggregate flexibility, closed-loop control systems, online optimization, model predictive control, reinforcement learning, electric vehicle charging
Issue or Number:6
Record Number:CaltechAUTHORS:20210510-084600512
Persistent URL:
Official Citation:T. Li, B. Sun, Y. Chen, Z. Ye, S. H. Low and A. Wierman, "Learning-Based Predictive Control via Real-Time Aggregate Flexibility," in IEEE Transactions on Smart Grid, vol. 12, no. 6, pp. 4897-4913, Nov. 2021, doi: 10.1109/TSG.2021.3094719
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:109022
Deposited By: Tony Diaz
Deposited On:10 May 2021 17:56
Last Modified:28 Oct 2021 22:38
