Floating-Point Sparse Matrix-Vector Multiply for FPGAs
- Creators
- deLorimier, Michael
- DeHon, André
- Others:
- Schmit, Herman
- Wilton, Steve
Abstract
Large, high-density FPGAs with high local distributed memory bandwidth surpass the peak floating-point performance of high-end, general-purpose processors. Microprocessors do not deliver near their peak floating-point performance on efficient algorithms that use the Sparse Matrix-Vector Multiply (SMVM) kernel; in fact, it is not uncommon for microprocessors to yield only 10–20% of their peak floating-point performance when computing SMVM. We develop and analyze a scalable SMVM implementation on modern FPGAs and show that it can sustain high-throughput, near-peak floating-point performance. For benchmark matrices from the Matrix Market suite, we project 1.5 double-precision Gflops/FPGA for a single Virtex II 6000-4 and 12 double-precision Gflops for 16 Virtex IIs (750 Mflops/FPGA).
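The abstract refers to the SMVM kernel without showing it. As a point of reference, below is a minimal sketch of SMVM in compressed sparse row (CSR) form in C; the storage format, function name, and example matrix are illustrative assumptions, not the paper's FPGA design. The indirect load `x[col_idx[k]]` illustrates the irregular memory access pattern that typically keeps microprocessors well below peak on this kernel.

```c
#include <stdio.h>

/* Compressed Sparse Row: row_ptr has n+1 entries; for row i, the
 * nonzeros are vals[row_ptr[i] .. row_ptr[i+1]-1], located in columns
 * col_idx[row_ptr[i] .. row_ptr[i+1]-1]. Computes y = A*x. */
void smvm_csr(int n, const int *row_ptr, const int *col_idx,
              const double *vals, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
            /* Indirect, data-dependent access to x: the source of the
             * poor cache and pipeline behavior noted in the abstract. */
            sum += vals[k] * x[col_idx[k]];
        }
        y[i] = sum;
    }
}

int main(void)
{
    /* Hypothetical 3x3 example: A = [[2 0 1],[0 3 0],[4 0 5]], x = [1,1,1]. */
    int row_ptr[] = {0, 2, 3, 5};
    int col_idx[] = {0, 2, 1, 0, 2};
    double vals[] = {2, 1, 3, 4, 5};
    double x[] = {1, 1, 1}, y[3];
    smvm_csr(3, row_ptr, col_idx, vals, x, y);
    printf("%g %g %g\n", y[0], y[1], y[2]); /* expect: 3 3 9 */
    return 0;
}
```

Each nonzero contributes one multiply and one add, so a matrix with nnz nonzeros costs 2*nnz flops per multiply; sustained Gflops figures like those projected above are measured against this count.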
Additional Information
© 2005 ACM. This work was supported by the Microelectronics Advanced Research Consortium (MARCO) and is part of the efforts of the Gigascale Systems Research Center (GSRC). Thanks to Keith Underwood for valuable editorial comments on this writeup.
Additional details
- Eprint ID
- 70921
- Resolver ID
- CaltechAUTHORS:20161006-130031306
- Funders
- Microelectronics Advanced Research Consortium (MARCO)
- Gigascale Systems Research Center (GSRC)
- Created
- 2016-10-12
- Updated
- 2021-11-11