Open Access
Can Multithreaded Programming Save Massively Parallel Computing?
Author(s) - Charles E. Leiserson
Publication year - 1996
Language(s) - English
DOI - 10.1109/ipps.1996.10001
Massively parallel computing has taken a turn for the worse. MPP (massively parallel processor) companies have generally been doing poorly in the marketplace. The additional time to design and deliver MPP systems puts them a generation behind the latest small-scale microprocessor and SMP systems. Truly large machines have mean times to failure measured in days, limiting their ability to provide reliable computing platforms for long-running computations. Software for MPP's is arcane, and porting a serial code from a conventional workstation to an MPP is a major chore, if not a research project. Is massively parallel computing doomed? Does anybody care?

We should care! Massively parallel computing is the only way to solve society's most computationally intensive problems. In the last ten years, MPP's have shown scientists and engineers from many disciplines that important problems they had previously considered beyond their reach are, in fact, solvable. The rapid advancement of electronics, automobile, and pharmaceutical designs demands ever-higher performance from simulation and analysis tools. The computational power needed for data mining and decision analysis is increasing at a rapid rate. The burgeoning popularity of the Internet is now making it possible to deliver high-performance computing services to millions, if networks and software can meet the challenge.

Algorithmic multithreaded programming, such as that provided by the Cilk system being developed at MIT and the University of Texas at Austin, offers the hope of allowing massively parallel computing to fulfill its promise, even if conventional MPP's themselves fall by the technology wayside. Algorithmic multithreaded languages provide high-level parallel abstractions for system resources, such as processors, memory, and files, thereby allowing the runtime system to map these abstract resources onto available physical resources dynamically, while providing solid guarantees of high performance.
As a consequence, a program can execute adaptively and tolerate faults in a changing computing environment, such as the clusters of SMP workstations that appear to be the next high-performance fad. Moreover, a multithreaded program can "scale down" to run on a single processor with the same performance as serial C code, thereby removing a major barrier between parallel and serial programming. Significant problems remain before multithreading can replace the existing base of parallel software, however. The most pressing appears to be the problem of duplicating the successes of data parallelism and message passing for problems that require tight and frequent synchronization. In addition, multithreading will demand stronger support from architectures and operating systems for low-latency interrupts and low-latency inter-processor communication.

Proceedings of the 10th International Parallel Processing Symposium (IPPS '96), 1063-7133/96 $10.00 © 1996 IEEE
