Published June 1999 | Version public
Book Section - Chapter

Microservers: a new memory semantics for massively parallel computing

Abstract

The semantics of memory (a large state which can only be read or changed a small piece at a time) has remained virtually untouched since von Neumann, and its effects (latency and bandwidth) have proved to be the major stumbling block for high-performance computing. This paper suggests a new model, termed "microservers," that exploits "Processing-In-Memory" VLSI technology, and that can reduce latency and memory traffic, increase inherent opportunities for concurrency, and support a variety of highly concurrent programming paradigms. Application of this model is then discussed in the framework of several ongoing supercomputing programs, particularly the HTMT petaflops project.
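
The abstract's central contrast (streaming small reads and writes to a distant processor versus invoking an operation where the data resides) can be sketched in code. The C sketch below is illustrative only and is not taken from the paper; the microserver_reduce() name, its interface, and its simulated in-memory execution are assumptions made here to show how a request-to-memory model can return a single result instead of moving every element across the memory interface.

/*
 * Illustrative sketch only (not from the paper): contrasts conventional
 * load/store access with a hypothetical "microserver" invocation that
 * runs an operation where the data lives and returns only the result.
 * The microserver_reduce() interface is an assumption for illustration;
 * the in-memory execution is merely simulated by an ordinary function.
 */
#include <stdio.h>
#include <stddef.h>

#define N 1024

/* Conventional model: the processor pulls every word across the
 * memory interface, paying latency and bandwidth for N elements. */
static long reduce_by_loads(const long *data, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += data[i];              /* one memory read per element */
    return sum;
}

/* Hypothetical microserver model: the request (an operation plus its
 * arguments) is shipped to the memory side; only the small result
 * crosses the interface. */
static long microserver_reduce(const long *region, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)   /* would run in in-memory logic */
        sum += region[i];
    return sum;                       /* single word returned */
}

int main(void)
{
    static long data[N];
    for (size_t i = 0; i < N; i++)
        data[i] = (long)i;

    printf("loads:       %ld\n", reduce_by_loads(data, N));
    printf("microserver: %ld\n", microserver_reduce(data, N));
    return 0;
}

In the conventional loop the memory system sees N individual reads; in the (simulated) microserver call it would see one request and one reply, which is the traffic and latency reduction the abstract attributes to the model.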

Additional Information

© 1999 ACM. This work was sponsored in part by the Jet Propulsion Laboratory and the California Institute of Technology through the REE and HTMT projects, with support from NSF Grant MIP 9503682, NASA, DARPA, and the NSA.

Additional details

Identifiers

Eprint ID
69555
Resolver ID
CaltechAUTHORS:20160810-163557773

Funding

NASA/JPL/Caltech
NSF (MIP 9503682)
NASA
Defense Advanced Research Projects Agency (DARPA)
National Security Agency

Dates

Created
2016-08-11
Updated
2021-11-11