Thornley, John (1996) A Parallel Programming Model with Sequential Semantics. California Institute of Technology. (Unpublished) http://resolver.caltech.edu/CaltechCSTR:1996.cs-tr-96-12
Parallel programming is more difficult than sequential programming in part because of the complexity of reasoning, testing, and debugging in the context of concurrency. In this thesis, we present and investigate a parallel programming model that provides direct control of parallelism in a notation with sequential semantics. Our model consists of a standard sequential imperative programming notation extended with the following three pragmas: (1) The parallelizable sequence of statements pragma indicates that a sequence of statements can be executed as parallel threads; (2) The parallelizable for-loop statement pragma indicates that the iterations of a for-loop statement can be executed as parallel threads; (3) The single-assignment type pragma indicates that variables of a given type are assigned at most once and that ordinary assignment and evaluation operations can be used as implicit communication and synchronization operations between parallel threads. In our model, a parallel program is simply an equivalent sequential program with added pragmas. The placement of the pragmas is subject to a small set of restrictions that ensure the equivalence of the parallel and sequential semantics. We prove that if standard sequential execution of a program (by ignoring the pragmas) satisfies a given specification and the pragmas are used correctly, parallel execution of the program (as directed by the pragmas) is guaranteed to satisfy the same specification. Our model allows parallel programs to be developed using sequential reasoning, testing, and debugging techniques, prior to parallel execution for performance. Since parallelism is specified directly, sophisticated analysis and compilation techniques are not required to extract parallelism from programs. However, it is important that parallel performance issues such as granularity, load balancing, and locality be considered throughout algorithm and program development.
We describe a series of programming experiments performed on up to 32 processors of a shared-memory multiprocessor system. These experiments indicate that for a wide range of problems: (1) Our model can express sophisticated parallel algorithms with significantly less complication than traditional explicit parallel programming models; (2) Parallel programs in our model execute as efficiently as sequential programs on one processor and deliver good speedups on multiple processors; (3) Program development with our model is less difficult than with traditional explicit parallel programming models because reasoning, testing, and debugging are performed using sequential methods. We believe that our model provides the basis of the method of choice for a large number of moderate-scale, medium-grained parallel programming applications.
Item Type: Report or Paper (Technical Report)
Group: Computer Science Technical Reports
Usage Policy: You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.
Deposited By: Imported from CaltechCSTR
Deposited On: 25 Apr 2001
Last Modified: 26 Dec 2012 14:05