Information Aggregation for Constrained Online Control
This paper considers an online control problem involving two controllers. A central controller chooses an action from a feasible set determined by time-varying and coupling constraints, which depend on all past actions and states. The central controller aims to minimize the cumulative cost; however, it has direct access to neither the feasible set nor the dynamics, both of which are determined by a remote local controller. Instead, the central controller receives only an aggregate summary of the feasibility information from the local controller, which in turn does not know the system costs. We show that an online algorithm using only this feasibility information can nearly match the dynamic regret of an online algorithm with perfect information, provided the feasible sets satisfy a causal invariance criterion and the prediction window is sufficiently large. To do so, we combine a form of feasibility aggregation based on entropy maximization with a novel online algorithm, named Penalized Predictive Control (PPC), and demonstrate that the aggregated information can be learned efficiently using reinforcement learning algorithms. The effectiveness of our approach for closed-loop coordination between the central and local controllers is validated via an electric vehicle charging application in power systems.
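The coordination pattern described above can be illustrated with a minimal sketch. This is not the paper's exact method: it assumes a discretized action grid, a feasible set given only by support constraints (for which the maximum-entropy distribution is simply uniform over the feasible actions), and a penalized selection rule in which the central controller trades off cost against the negative log-probability of the aggregate summary. All function names, the grid, and the penalty weight `beta` are illustrative assumptions.

```python
import numpy as np

def aggregate_feasibility(feasible_mask):
    """Local controller's summary: the max-entropy distribution supported
    on the feasible actions. With only support constraints, the entropy
    maximizer is uniform over the feasible set; infeasible actions get
    probability zero. (Illustrative stand-in for the paper's aggregation.)"""
    p = feasible_mask.astype(float)
    return p / p.sum()

def penalized_selection(actions, costs, p, beta=1.0):
    """Central controller's rule (hypothetical sketch of a penalized
    predictive step): minimize cost(a) + beta * (-log p(a)). Actions the
    summary assigns zero probability receive an infinite penalty and are
    never chosen."""
    with np.errstate(divide="ignore"):
        penalty = -np.log(p)  # +inf where p == 0
    score = costs + beta * penalty
    return actions[np.argmin(score)]

# Toy instance: actions on a grid in [0, 1], feasibility a <= 0.5,
# and a cost (a - 0.8)^2 whose unconstrained minimizer is infeasible.
actions = np.linspace(0.0, 1.0, 11)
feasible = actions <= 0.5
p = aggregate_feasibility(feasible)
costs = (actions - 0.8) ** 2
a_star = penalized_selection(actions, costs, p)
print(a_star)  # the cheapest feasible action, 0.5
```

The key point of the sketch is the division of information: feasibility enters the central controller's decision only through the summary `p`, never through the constraints themselves, while the local controller never sees `costs`.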
© 2021 Copyright held by the owner/author. This work is licensed under a Creative Commons Attribution 4.0 International License. Tongxin Li and Steven Low acknowledge the support received from the National Science Foundation (NSF) through grants CCF 1637598, ECCS 1931662, and CPS ECCS 1932611. Bo Sun is supported by the Hong Kong Research Grants Council (RGC) General Research Fund (Project 16207318). Adam Wierman's research is funded by NSF (AitF-1637598 and CNS-1518941), PIMCO, and Amazon AWS.