
Differentially Quantized Gradient Methods

Lin, Chung-Yi and Kostina, Victoria and Hassibi, Babak (2022) Differentially Quantized Gradient Methods. IEEE Transactions on Information Theory, 68 (9). pp. 6078-6097. ISSN 0018-9448. doi:10.1109/tit.2022.3171173.

Full text is not posted in this repository. Consult Related URLs below.

Consider the following distributed optimization scenario. A worker has access to training data that it uses to compute the gradients, while a server decides when to stop iterative computation based on its target accuracy or delay constraints. The server receives all its information about the problem instance from the worker via a rate-limited noiseless communication channel. We introduce the principle we call differential quantization (DQ), which prescribes compensating the past quantization errors to direct the descent trajectory of a quantized algorithm towards that of its unquantized counterpart. Assuming that the objective function is smooth and strongly convex, we prove that differentially quantized gradient descent (DQ-GD) attains a linear contraction factor of $\max\{\sigma_{\mathrm{GD}}, \rho_n 2^{-R}\}$, where $\sigma_{\mathrm{GD}}$ is the contraction factor of unquantized gradient descent (GD), $\rho_n \geq 1$ is the covering efficiency of the quantizer, and $R$ is the bitrate per problem dimension $n$. Thus at any rate $R \geq \log_2(\rho_n/\sigma_{\mathrm{GD}})$ bits, the contraction factor of DQ-GD is the same as that of unquantized GD, i.e., there is no loss due to quantization. We show a converse demonstrating that no algorithm within a certain class can converge faster than $\max\{\sigma_{\mathrm{GD}}, 2^{-R}\}$. Since quantizers exist with $\rho_n \to 1$ as $n \to \infty$ (Rogers, 1963), DQ-GD is asymptotically optimal. In contrast, naively quantized GD, in which the worker directly quantizes the gradient, attains only $\sigma_{\mathrm{GD}} + \rho_n 2^{-R}$. The principle of differential quantization continues to apply to gradient methods with momentum, such as Nesterov's accelerated gradient descent and Polyak's heavy ball method.
For these algorithms as well, if the rate is above a certain threshold, there is no loss in contraction factor obtained by the differentially quantized algorithm compared to its unquantized counterpart; furthermore, the differentially quantized heavy ball method attains the optimal contraction factor achievable among all (even unquantized) gradient methods. Experimental results on least-squares problems validate our theoretical analysis.

Item Type: Article
Related URLs:
ORCID:
Kostina, Victoria: 0000-0002-2406-7440
Hassibi, Babak: 0000-0002-1375-5838
Additional Information: The authors would like to thank Dr. Himanshu Tyagi for pointing out related works [19], [55]; Dr. Vincent Tan for bringing a known result on the worst-case contraction factor of unquantized GD [75] to their attention; Dr. Victor Kozyakin for a helpful discussion about joint spectral radius; and two anonymous reviewers for detailed comments. This work was supported in part by the National Science Foundation (NSF) under grants CCF-1751356, CCF-1956386, CNS-0932428, CCF-1018927, CCF-1423663 and CCF-1409204, by a grant from Qualcomm Inc., by NASA's Jet Propulsion Laboratory through the President and Director's Fund, and by King Abdullah University of Science and Technology. This paper was presented in part at ISIT 2021 [1].
Funding Agency / Grant Number:
JPL President and Director's Fund: UNSPECIFIED
King Abdullah University of Science and Technology: UNSPECIFIED
Issue or Number: 9
Record Number: CaltechAUTHORS:20220909-232702000
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 116868
Deposited By: Olivia Warschaw
Deposited On: 29 Oct 2022 22:09
Last Modified: 01 Nov 2022 17:57
