A methodology for high performance computation of fully inhomogeneous turbulent flows
Author(s) - You Donghyun, Wang Meng, Mittal Rajat
Publication year - 2006
Publication title - International Journal for Numerical Methods in Fluids
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.938
H-Index - 112
eISSN - 1097-0363
pISSN - 0271-2091
DOI - 10.1002/fld.1314
Subject(s) - multigrid method, distributed memory, computation, solver, large eddy simulation, computer science, parallel computing, turbulence, computational fluid dynamics, computational science, mathematics, domain decomposition methods, shared memory, mathematical optimization, algorithm, mechanics, physics, mathematical analysis, finite element method, partial differential equation, thermodynamics
Abstract - A large‐eddy simulation methodology for high performance parallel computation of statistically fully inhomogeneous turbulent flows on structured grids is presented. Strategies and algorithms to improve the memory efficiency as well as the parallel performance of the subgrid‐scale model, the factored scheme, and the Poisson solver on shared‐memory parallel platforms are proposed and evaluated. A novel combination of one‐dimensional red–black/line Gauss–Seidel and two‐dimensional red–black/line Gauss–Seidel methods is shown to provide high efficiency and performance for multigrid relaxation of the Poisson equation. Parallel speedups are measured on various shared‐distributed memory systems. Validations of the code are performed in large‐eddy simulations of turbulent flows through a straight channel and a square duct. Results obtained from the present solver employing a Lagrangian dynamic subgrid‐scale model show good agreement with other available data. The capability of the code for more complex flows is assessed by performing a large‐eddy simulation of the tip‐leakage flow in a linear cascade. Copyright © 2006 John Wiley & Sons, Ltd.
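As a rough illustration of the relaxation scheme the abstract mentions, the sketch below implements a plain point-wise red–black Gauss–Seidel sweep for a 2D Poisson equation. It is only a minimal stand-in under assumed conventions (uniform grid spacing h, Dirichlet boundary values held fixed in u), not the paper's one-dimensional/two-dimensional line-relaxation combination or its shared-memory parallelization; the function name, grid size, and sweep count are illustrative.

```python
import numpy as np

def red_black_sweep(u, f, h):
    """One red-black Gauss-Seidel sweep for -laplace(u) = f on a
    uniform 2D grid; boundary values of u are held fixed."""
    n, m = u.shape
    for color in (0, 1):                  # 0: "red" points (i+j even), 1: "black" points
        for i in range(1, n - 1):
            j0 = 2 - ((i + color) % 2)    # first interior column of this color in row i
            u[i, j0:m-1:2] = 0.25 * (u[i-1, j0:m-1:2] + u[i+1, j0:m-1:2] +
                                     u[i, j0-1:m-2:2] + u[i, j0+1:m:2] +
                                     h * h * f[i, j0:m-1:2])
    return u

# Tiny usage check: the residual should shrink under repeated sweeps.
if __name__ == "__main__":
    n = 33
    h = 1.0 / (n - 1)
    u = np.zeros((n, n))                  # homogeneous Dirichlet boundaries
    f = np.ones((n, n))                   # constant source term
    for _ in range(200):
        red_black_sweep(u, f, h)
    res = f[1:-1, 1:-1] - (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
                           - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
    print("max residual after 200 sweeps:", np.abs(res).max())
```

Because every point of one color depends only on neighbors of the other color, each half-sweep can be updated fully in parallel, which is what makes red–black orderings attractive as multigrid smoothers on shared-memory machines.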
