CaltechAUTHORS
  A Caltech Library Service

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors

Coelho, Claudionor N., Jr. and Kuusela, Aki and Li, Shan and Zhuang, Hao and Ngadiuba, Jennifer and Aarrestad, Thea Klaeboe and Loncar, Vladimir and Pierini, Maurizio and Pol, Adrian Alan and Summers, Sioni (2021) Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors. Nature Machine Intelligence, 3 (8). pp. 675-686. ISSN 2522-5839. doi:10.1038/s42256-021-00356-5. https://resolver.caltech.edu/CaltechAUTHORS:20210622-161610091

Files (see Usage Policy):
PDF (Accepted Version) — 2MB
Image (JPEG), Extended Data Fig. 1: Model architecture and quantization — Supplemental Material, 524kB
Image (JPEG), Extended Data Fig. 2: Variance shift — Supplemental Material, 30kB
Image (JPEG), Extended Data Fig. 3: Layers and quantisers in QKeras — Supplemental Material, 132kB
Image (JPEG), Extended Data Fig. 4: ROC curves for the models under study — Supplemental Material, 105kB

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20210622-161610091

Abstract

Although the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices demand efficient inference and therefore reduction in model size, latency and energy consumption. One technique to limit model size is quantization, which implies using fewer bits to represent weights and biases. Such an approach usually results in a decline in performance. Here, we introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. With a per-layer, per-parameter type automatic quantization procedure, sampling from a wide range of quantizers, model energy consumption and size are minimized while high accuracy is maintained. This is crucial for the event selection procedure in proton–proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of O(1)μs is required. Nanosecond inference and a resource consumption reduced by a factor of 50 when implemented on field-programmable gate array hardware are achieved.
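The quantization the abstract describes replaces each weight with a low-bit fixed-point value. As a rough standalone illustration (not the QKeras implementation itself, whose `quantized_bits` quantizer is more general), a signed fixed-point quantizer with a chosen total and integer bit width can be sketched as:

```python
def quantize_fixed(x: float, bits: int, int_bits: int) -> float:
    """Round x to the nearest value representable as a signed
    fixed-point number with `bits` total bits, of which `int_bits`
    are integer bits (one bit is reserved for the sign)."""
    frac_bits = bits - int_bits - 1          # bits left for the fraction
    scale = 2 ** frac_bits                   # representable step = 1/scale
    max_val = (2 ** int_bits) - 1 / scale    # largest representable value
    min_val = -(2 ** int_bits)               # most negative representable value
    q = round(x * scale) / scale             # round to the nearest step
    return min(max(q, min_val), max_val)     # saturate at the range edges

# A 4-bit quantizer with no integer bits keeps 3 fractional bits,
# so weights snap to multiples of 0.125 within [-1, 0.875].
print(quantize_fixed(0.30, bits=4, int_bits=0))   # 0.25
print(quantize_fixed(1.70, bits=4, int_bits=0))   # saturates to 0.875
```

The per-layer, per-parameter-type search described in the abstract amounts to choosing, for each layer's weights, biases and activations independently, a quantizer configuration like `(bits, int_bits)` above so that accuracy is preserved while total bit usage (and hence energy and area) is minimized.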


Item Type:Article
Related URLs:
URL | URL Type | Description
https://doi.org/10.1038/s42256-021-00356-5 | DOI | Article
https://rdcu.be/cm0xn | Publisher | Free ReadCube access
https://arxiv.org/abs/2006.10159 | arXiv | Discussion Paper
https://doi.org/10.5281/zenodo.3602260 | DOI | Data
https://github.com/google/qkeras | Related Item | QKeras library
https://github.com/fastmachinelearning/hls4ml | Related Item | hls4ml library
https://github.com/fastmachinelearning/hls4ml-tutorial | Related Item | Tutorial
ORCID:
Author | ORCID
Ngadiuba, Jennifer | 0000-0002-0055-2935
Aarrestad, Thea Klaeboe | 0000-0002-7671-243X
Pierini, Maurizio | 0000-0003-1939-4268
Pol, Adrian Alan | 0000-0002-9034-0230
Alternate Title:Ultra Low-latency, Low-area Inference Accelerators using Heterogeneous Deep Quantization with QKeras and hls4ml
Additional Information:© The Author(s), under exclusive licence to Springer Nature Limited 2021. Received 23 November 2020; Accepted 06 May 2021; Published 21 June 2021. M.P. and S.S. are supported by, and V.L. and A.A.P. are partially supported by, the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant no. 772369). V.L. is supported by Zenseact under the CERN Knowledge Transfer Group. A.A.P. is supported by CEVA under the CERN Knowledge Transfer Group. We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators. This community was important for the development of this project.
Data availability: The data used in this study are openly available at Zenodo from https://doi.org/10.5281/zenodo.3602260.
Code availability: The QKeras library, which also includes AutoQKeras and QTools, is available from https://github.com/google/qkeras (the work presented here uses QKeras version 0.7.4). Examples of how to run the library are available in the notebook subdirectory. The hls4ml library is available at https://github.com/fastmachinelearning/hls4ml and all versions ≥0.2.1 support QKeras models (the work presented here is based on version 0.2.1). For examples of how to use QKeras models in hls4ml, the notebook part4_quantization at https://github.com/fastmachinelearning/hls4ml-tutorial serves as a general introduction.
Author Contributions: C.N.C., A.K., S.L. and H.Z. conceived and designed the QKeras, AutoQKeras and QTools software libraries. T.A., V.L., M.P., A.A.P., S.S. and J.N. designed and implemented support for QKeras in hls4ml. S.S. conducted the experiments. T.A., A.A.P. and S.S. wrote the manuscript. The authors declare no competing interests.
Peer review information: Nature Machine Intelligence thanks Jose Nunez-Yanez, Stylianos Venieris and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Funders:
Funding Agency | Grant Number
European Research Council (ERC) | 772369
CERN | UNSPECIFIED
Subject Keywords:Computer science; Experimental particle physics; Software
Issue or Number:8
DOI:10.1038/s42256-021-00356-5
Record Number:CaltechAUTHORS:20210622-161610091
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:20210622-161610091
Official Citation:Coelho, C.N., Kuusela, A., Li, S. et al. Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors. Nat Mach Intell 3, 675–686 (2021). https://doi.org/10.1038/s42256-021-00356-5
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:109525
Collection:CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On:23 Jun 2021 19:36
Last Modified:12 Aug 2021 20:45
