Parallel computing and quantum simulations
Author(s) - B. J. Alder
Publication year - 1998
Language(s) - English
Resource type - Reports
DOI - 10.2172/292439
Subject(s) - speedup, parallel computing, computer science, quantum, quantum computer, computational science, theoretical computer science, algorithm, physics, quantum mechanics
Our goal was to investigate the suitability of parallel supercomputer architectures for Quantum Monte Carlo (QMC). Because QMC allows one to study the properties of ions and electrons in a solid, it has important applications to condensed matter physics, chemistry, and materials science. Our specific research plan was to:

1. Adapt quantum simulation codes which were highly optimized for vector supercomputers to run on the Intel Hypercube and Thinking Machines CM-5.

2. Identify architectural bottlenecks in communication, floating point computation, and node memory, and determine scalability with the number of nodes.

3. Identify algorithmic changes required to take advantage of current and prospective architectures.

We have made significant progress towards these goals. We explored implementations of the p4 parallel programming system and the Message Passing Interface (MPI) libraries to run "world-line" and "determinant" QMC and Molecular Dynamics simulations on both workstation clusters (HP, Sparc, AIX, Linux) and massively parallel supercomputers (Intel iPSC/860, Meiko CS-2, IBM SP-X, Intel Paragon). We addressed the efficiency of parallelization as a function of the distribution of the problem over the nodes and the length scale of the interactions between particles. Both choices influence the frequency of inter-node communication and the size of the messages passed. We found that, using the message-passing paradigm on an appropriate machine (e.g., the Intel iPSC/860), an essentially linear speedup could be obtained.
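The report does not reproduce its source code, but the message-passing pattern it describes (distribute the particles over the nodes, let each node compute over its local slice, then combine the per-node results with a collective operation) can be illustrated with a minimal MPI sketch in C. Everything here is an assumption for illustration only: N_TOTAL, the placeholder particle positions, and the per-particle sum are stand-ins, not the actual QMC or Molecular Dynamics codes from the project.

/* Minimal sketch (not the report's code) of the message-passing
 * pattern described above: particles are distributed over nodes,
 * each node computes over its local slice, and a global reduction
 * combines the partial results.
 * Compile with an MPI wrapper, e.g.:  mpicc -O2 mc_sketch.c -o mc_sketch
 * Run with, e.g.:                     mpirun -np 4 ./mc_sketch        */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N_TOTAL 4096   /* total number of particles (illustrative) */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Distribute the problem: each node owns a contiguous block of
     * particles.  How the problem is split over the nodes fixes how
     * much data must eventually be exchanged, i.e. the message sizes
     * discussed in the abstract.                                    */
    int n_local = N_TOTAL / nprocs;
    double *x = malloc(n_local * sizeof(double));
    for (int i = 0; i < n_local; i++)
        x[i] = (double)(rank * n_local + i);   /* placeholder positions */

    /* Each node accumulates a partial sum over its own particles,
     * standing in for a local energy or acceptance estimate.        */
    double local_sum = 0.0;
    for (int i = 0; i < n_local; i++)
        local_sum += 1.0 / (1.0 + x[i]);

    /* One collective call combines the per-node results; how often
     * such calls occur governs the communication overhead and hence
     * how close the speedup stays to linear.                        */
    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                  MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d particles: %f\n", N_TOTAL, global_sum);

    free(x);
    MPI_Finalize();
    return 0;
}

Because each rank touches only its own block and communicates once per step, doubling the number of nodes roughly halves the local work while the single collective call stays cheap; this is the regime in which the near-linear speedup reported above is attainable.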
