Berkeley UPC - Unified Parallel C (A joint project of LBNL and UC Berkeley)
The UPC Language
Unified Parallel C (UPC) is an extension of the C programming language designed for high-performance computing on large-scale parallel machines. The language provides a uniform programming model for both shared and distributed memory hardware. The programmer is presented with a single shared, partitioned address space, where variables may be directly read and written by any processor, but each variable is physically associated with a single processor. UPC uses a Single Program Multiple Data (SPMD) model of computation in which the amount of parallelism is fixed at program startup time, typically with a single thread of execution per processor.
In order to express parallelism, UPC extends ISO C99 with the following constructs:
- An explicit SPMD execution model, with the built-in constants THREADS (the total number of threads) and MYTHREAD (the index of the current thread)
- A 'shared' type qualifier for placing data in the partitioned global address space, with programmer-controlled block sizes for distributing arrays across threads
- A work-sharing loop, upc_forall, that assigns iterations to threads by affinity
- Synchronization primitives: barriers (upc_barrier, upc_notify/upc_wait) and locks (upc_lock_t)
- Dynamic shared memory allocation (upc_alloc, upc_global_alloc, upc_all_alloc)
- A memory consistency model with 'strict' and 'relaxed' shared accesses
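A minimal sketch of these constructs in use is shown below. This is UPC, not plain C, so it requires a UPC compiler (such as Berkeley UPC's upcc); the array size and contents are illustrative.

```c
#include <upc.h>
#include <stdio.h>

#define N 100

shared int a[N];   /* default cyclic layout: element i has affinity to thread i % THREADS */

int main(void) {
    int i;

    /* Work-sharing loop: each thread executes only the iterations
       whose affinity expression (&a[i]) is local to it. */
    upc_forall (i = 0; i < N; i++; &a[i])
        a[i] = i;

    upc_barrier;   /* ensure all threads finish writing before any thread reads */

    if (MYTHREAD == 0) {
        int sum = 0;
        for (i = 0; i < N; i++)
            sum += a[i];   /* reads may be remote; the compiler and runtime
                              generate the necessary communication */
        printf("sum = %d\n", sum);
    }
    return 0;
}
```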
The UPC language evolved from experience with three earlier languages that proposed parallel extensions to ISO C99: AC, Split-C, and Parallel C Preprocessor (PCP). UPC is not a superset of these three languages, but rather an attempt to distill the best characteristics of each. UPC combines the programmability advantages of the shared memory programming paradigm with the control over data layout and performance of the message passing programming paradigm.
Our work at UC Berkeley/LBNL
The goal of the Berkeley UPC compiler group is to develop a portable, high-performance implementation of UPC for large-scale multiprocessors, PC clusters, and clusters of shared memory multiprocessors. To that end, we are actively developing an open-source UPC compiler suite.
There are several major components to this effort:
- A UPC-to-C translator that performs source-level analysis and optimization
- A portable UPC runtime system that implements the language's shared-memory operations
- The GASNet communication layer, which provides portable, high-performance networking
In an effort to make our code useful to other projects, we have separated the UPC-specific parts of our runtime layer from the networking logic. If you are implementing your own global address space language (or otherwise need a low-level, portable networking library), you should look at our GASNet library, which currently runs over a wide variety of high-performance networks (as well as over any MPI 1.1 implementation), and which is also being used as the networking layer for the Titanium language (a high-performance parallel dialect of Java).
We are implementing optimizations for the common special cases in UPC where a programmer uses either the default cyclic layout for distributed arrays, or a shared array with 'indefinite' blocksize (i.e., residing entirely on one processor). We are also examining optimizations based on avoiding the overhead of shared pointer manipulation when accesses are known to be local.
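The local-access case can be illustrated at the source level with the standard UPC idiom of casting a shared pointer with local affinity to an ordinary C pointer, which removes shared-pointer arithmetic from the inner loop. This sketch requires a UPC compiler, and shows the programmer-visible idiom rather than the compiler's internal transformation:

```c
#include <upc.h>

#define N 1024

/* Blocksize N with a [THREADS][N] shape gives each thread
   one contiguous block of N elements. */
shared [N] int data[THREADS][N];

void scale_local(int factor) {
    int i;
    /* data[MYTHREAD] has affinity to this thread, so the cast to a
       plain C pointer is legal; subsequent accesses use ordinary
       pointer arithmetic with no runtime address translation. */
    int *mine = (int *)&data[MYTHREAD][0];
    for (i = 0; i < N; i++)
        mine[i] *= factor;
}
```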
Some of the research findings from these areas of work can be found on our publications page.
Katherine Yelick - Advisor, PI
Dan Bonachea - GASNet, Runtime
Chang-Seo Park - Active Testing (Thrille)
Paul Hargrove - GASNet, Runtime
Alice Koniges - Applications, Outreach
Costin Iancu - Compiler
Yili Zheng - GASNet, Runtime
Christian Bell, Filip Blagojevic, Wei Chen, Jason Duell, Parry Husbands, Seung-Jai Min, Rajesh Nishtala, Mike Welcome
The Berkeley UPC project is funded by the DOE under base program funding from the Office of Science, through the DOE PModels project, and by the Department of Defense.
This page last modified on Tuesday, 30-Apr-2013 02:50:32 PDT