Online Convex Optimization Using Predictions
Making use of predictions is a crucial, but under-explored, topic in the design of online algorithms. This paper studies a class of online optimization problems where we have external noisy predictions available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that, in adversarial settings, achieving sublinear regret and a constant competitive ratio requires an unbounded prediction window, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and a constant competitive ratio in expectation using only a constant-sized prediction window. Furthermore, we show that the performance of AFHC is tightly concentrated around its mean.
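To make the AFHC idea concrete, the following is a minimal sketch, not the paper's implementation. It assumes a simple one-dimensional tracking problem with quadratic hit cost (x_t - y_t)^2 and quadratic switching cost (beta/2)(x_t - x_{t-1})^2, and a toy noise model where the prediction of y_t made s steps ahead carries Gaussian noise growing with s (a stand-in for the paper's correlated error model). AFHC runs w+1 Fixed Horizon Control (FHC) instances with staggered replanning times and commits the average of their actions at each step; the horizon subproblems are solved here by plain gradient descent.

```python
import random

def fhc_plan(x_prev, preds, beta, iters=500, lr=0.05):
    """Solve the horizon problem
        min sum_i (x_i - y_i)^2 + (beta/2)(x_i - x_{i-1})^2,  x_0 fixed to x_prev,
    by gradient descent on the predicted costs; returns the planned actions."""
    n = len(preds)
    x = list(preds)  # warm start at the predictions
    for _ in range(iters):
        g = [0.0] * n
        for i in range(n):
            left = x[i - 1] if i > 0 else x_prev
            g[i] = 2 * (x[i] - preds[i]) + beta * (x[i] - left)
            if i + 1 < n:
                g[i] -= beta * (x[i + 1] - x[i])
        for i in range(n):
            x[i] -= lr * g[i]
    return x

def afhc(y, w, beta, noise=0.0, seed=0):
    """Averaging Fixed Horizon Control: w+1 FHC instances replan on staggered
    schedules (instance k replans when t = k mod w+1) using noisy predictions
    of the next w+1 costs; the committed action is the average across instances.
    Instances that have not yet planned contribute their initial state 0.0."""
    rng = random.Random(seed)
    T = len(y)
    plans = [[] for _ in range(w + 1)]   # pending planned actions per instance
    last = [0.0] * (w + 1)               # last committed action per instance
    actions = []
    for t in range(T):
        for k in range(w + 1):
            if (t - k) % (w + 1) == 0:   # instance k replans now
                horizon = y[t:t + w + 1]
                # toy error model: noise std grows with the lookahead distance
                preds = [v + rng.gauss(0, noise * (s + 1))
                         for s, v in enumerate(horizon)]
                plans[k] = fhc_plan(last[k], preds, beta)
        step = []
        for k in range(w + 1):
            if plans[k]:
                last[k] = plans[k].pop(0)
            step.append(last[k])
        actions.append(sum(step) / len(step))
    return actions
```

With perfect predictions (noise=0) on a constant target, the averaged trajectory settles near the target after the cold-start transient, illustrating how averaging smooths out the discontinuities that a single FHC instance exhibits at its replanning boundaries.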
© 2015 ACM. This work is partially supported by the NSF through CNS-1319820, EPAS-1307794, CNS-0846025, CCF-1101470 and the ARC through DP130101378.