Collaborative Research: PPoSS: Planning: Unifying Software and Hardware to Achieve Performant and Scalable Zero-cost Parallelism in the Heterogeneous Future
NSF Award CCF-2028958; $41,627 (Collaborative total: $1.2M); October 2020 through September 2021. This project is a collaborative effort with Peter Dinda, Simone Campanoni, and Nikos Hardavellas at Northwestern University, and Umut Acar at Carnegie Mellon University.
Exploiting parallelism is essential to making full use of computer systems, and thus is intrinsic to most applications. Building parallel programs that truly achieve the performance the hardware is capable of is extremely challenging even for experts: it requires a firm grasp of concepts ranging from the very highest level to the very lowest, and that range is rapidly expanding. This project approaches the challenge along two lines, “theory down” and “architecture up”. The first line strives to simplify parallel programming through languages and algorithms; the second strives to accelerate parallel programs through compilers, operating systems, and hardware. The project’s novelty is to bridge these two lines, which the research community usually treats quite separately. The unified team of researchers is addressing a specific subproblem, scheduling, and then determining how to expand out from it. The project’s impact lies in making it possible for ordinary programmers to program future parallel systems at a very high level, yet achieve the performance the machine is capable of.
The project studies an “intermediate representation out” approach to making high-level parallel abstractions implementable so that they can be used with zero cost. A core idea is to expand the compiler’s intermediate representation so that it can capture both high-level parallel concepts and low-level machine and operating system structures, thus allowing full-stack optimization. This planning project will flesh out this concept and set the stage for a larger-scale effort in the future.
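To make the idea concrete, the toy sketch below (entirely hypothetical; it is not the project’s actual representation) shows why exposing parallelism in a compiler IR enables optimizations that a library-level scheduler cannot perform. The IR has explicit fork/join (`Par`) nodes carrying cost estimates, so a compiler pass can serialize forks whose work is too small to repay an assumed spawn overhead, which is one ingredient of “zero-cost” parallelism.

```python
from dataclasses import dataclass

# Hypothetical toy IR, for illustration only: a program is either a
# sequential work item with a static cost estimate, a sequential
# composition, or a parallel fork/join of two sub-expressions.
@dataclass
class Work:
    name: str
    cost: int  # estimated cycles (assumed known statically)

@dataclass
class Seq:
    first: object
    second: object

@dataclass
class Par:
    left: object
    right: object

def cost(node):
    """Total estimated work of an IR subtree."""
    if isinstance(node, Work):
        return node.cost
    if isinstance(node, Seq):
        return cost(node.first) + cost(node.second)
    return cost(node.left) + cost(node.right)

SPAWN_OVERHEAD = 100  # assumed per-fork scheduling cost

def coarsen(node):
    """Serialize parallel forks whose work is too small to repay the
    spawn overhead -- a decision a compiler can only make because the
    parallelism is visible in its IR."""
    if isinstance(node, Work):
        return node
    if isinstance(node, Seq):
        return Seq(coarsen(node.first), coarsen(node.second))
    left, right = coarsen(node.left), coarsen(node.right)
    if cost(left) + cost(right) < SPAWN_OVERHEAD:
        return Seq(left, right)  # not worth forking: run sequentially
    return Par(left, right)

prog = Par(Par(Work("a", 10), Work("b", 20)), Work("c", 500))
opt = coarsen(prog)
# The inner fork (cost 30) is serialized; the outer fork (cost 530) is kept.
print(type(opt).__name__, type(opt.left).__name__)  # → Par Seq
```

In a real system the cost estimates, the overhead model, and the lowering to machine code would all come from the surrounding compiler and runtime; the point of the sketch is only that granularity control becomes an ordinary IR-to-IR transformation once parallel structure is first-class in the representation.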