Scalable parallel scheme for sampling of Gaussian random fields over very large domains
Author(s) - Carvalho Paludo L., Bouvier V., Cottereau R.
Publication year - 2018
Publication title - International Journal for Numerical Methods in Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.421
H-Index - 168
eISSN - 1097-0207
pISSN - 0029-5981
DOI - 10.1002/nme.5981
Subject(s) - scalability, realization (probability), gaussian, computer science, random field, algorithm, scheme (mathematics), sampling (signal processing), domain (mathematical analysis), degrees of freedom (physics and chemistry), cube (algebra), gaussian random field, theoretical computer science, parallel computing, mathematics, topology (electrical circuits), gaussian process, statistics, geometry, physics, mathematical analysis, filter (signal processing), quantum mechanics, database, combinatorics, computer vision
Summary This paper proposes a new scheme for the generation of Gaussian random fields over large domains (domain size much larger than the correlation length). The scheme decomposes the simulation domain into overlapping subdomains, generates independent random fields over each of them, and merges them on the overlaps. It is naturally suited for simulation on clusters of computers. With this approach, the number of operations for each processor depends only on the number of local degrees of freedom, not on the total number over all processors; hence, weak scalability is perfectly met. The paper describes the general scheme and introduces two error estimates for comparison with classical sampling schemes. Improvements in terms of scalability are demonstrated both theoretically and through numerical examples, and the behavior in the overlap is studied in detail. Simulations using the localized approach were performed on up to 512 processors and generated, in 41 seconds, a realization of a random field over a cube of side 300 correlation lengths (close to 2 billion sampling points), much more efficiently and rapidly than classical methods.
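The abstract describes the overlap-and-merge idea only at a high level. As a rough illustration of those mechanics, and not a reconstruction of the authors' actual algorithm, the following Python sketch samples independent 1D Gaussian random fields on overlapping subdomains and blends them on each overlap with variance-preserving weights. All function names and parameters (sample_grf, sample_overlapping, corr_len, overlap, etc.) are hypothetical, and the dense Cholesky sampler stands in for whatever local generator the paper uses.

```python
import numpy as np

def sample_grf(x, corr_len, rng):
    """Sample a zero-mean, unit-variance Gaussian random field at points x
    with a squared-exponential covariance (dense Cholesky; fine for a sketch)."""
    d = x[:, None] - x[None, :]
    C = np.exp(-0.5 * (d / corr_len) ** 2)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for stability
    return L @ rng.standard_normal(len(x))

def sample_overlapping(n_sub=4, n_per_sub=200, overlap=40, corr_len=0.05, seed=0):
    """Generate independent fields on overlapping 1D subdomains and merge them
    on the overlaps with weights satisfying w_old**2 + w_new**2 == 1."""
    rng = np.random.default_rng(seed)
    step = n_per_sub - overlap                 # subdomain stride
    n_total = step * (n_sub - 1) + n_per_sub   # total grid size
    x = np.linspace(0.0, 1.0, n_total)
    field = np.zeros(n_total)
    t = np.linspace(0.0, 1.0, overlap)
    w_old, w_new = np.sqrt(1.0 - t), np.sqrt(t)  # unit pointwise variance in overlap
    for k in range(n_sub):
        lo = k * step
        # Independent local field; in the paper's setting each processor
        # would generate its own subdomain without global communication.
        local = sample_grf(x[lo:lo + n_per_sub], corr_len, rng)
        if k == 0:
            field[lo:lo + n_per_sub] = local
        else:
            field[lo:lo + overlap] = (w_old * field[lo:lo + overlap]
                                      + w_new * local[:overlap])
            field[lo + overlap:lo + n_per_sub] = local[overlap:]
    return x, field

if __name__ == "__main__":
    x, f = sample_overlapping()
    print(f.shape, round(f.std(), 3))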
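```

Note that this naive blend preserves the pointwise variance in the overlap but only approximately reproduces the target covariance across the seam, which is precisely the kind of merging error the paper's two error estimates are designed to quantify.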