Logarithmic Communication for Distributed Optimization in Multi-Agent Systems
Classically, the design of multi-agent systems is approached using techniques from distributed optimization, such as dual descent and consensus algorithms. Such algorithms require convergence to a global consensus before any individual agent can determine its local action, which creates challenges in terms of communication overhead and robustness; improving algorithms along these dimensions has been a focus of the community for decades. This paper presents a new approach to multi-agent system design based on ideas from the emerging field of local computation algorithms. The framework we develop, LOcal Convex Optimization (LOCO), is the first local computation algorithm for convex optimization problems and can be applied in a wide variety of settings. We demonstrate the generality of the framework via applications to Network Utility Maximization (NUM) and the distributed training of Support Vector Machines (SVMs), providing numerical results that illustrate the improvement over classical distributed optimization approaches in each case.
© 2020 Copyright held by the owner/author(s). This work was supported in part by NSF grants AitF-1637598, CNS-1518941, CPS-154471, the Linde Institute, and an Amazon Fellowship in Artificial Intelligence.