Numerical p-version refinement studies for the regularized stress-BEM
Author(s) - Richardson J. D.
Publication year - 2003
Publication title - International Journal for Numerical Methods in Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.421
H-Index - 168
eISSN - 1097-0207
pISSN - 0029-5981
DOI - 10.1002/nme.853
Subject(s) - finite element method , boundary element method , basis function , mathematics , kernel (algebra) , quadratic equation , basis (linear algebra) , algorithm , numerical integration , boundary (topology) , set (abstract data type) , discretization , polygon mesh , mathematical optimization , computer science , mathematical analysis , geometry , discrete mathematics , physics , thermodynamics , programming language
In the development of the boundary element method (BEM) and the finite element method (FEM), researchers have typically selected similar basis functions. That is, both methods typically employ low-order interpolations, such as piecewise linear or piecewise quadratic, and rely on h-version refinement to increase accuracy as required. In the case of the FEM, low-order elements are chosen for computational efficiency, as an attractive compromise between local modeling accuracy and sparseness of the resulting linear system. However, in many BEM formulations, low-order elements may be the only practical choice, given the complexity of using analytic integration formulae in conjunction with special integral interpretations. Unlike their efficient use in the FEM, fine meshes of low-order elements in the BEM are computationally inefficient because BEM systems are dense. Moreover, owing to singularities in the kernel functions, the BEM should be expected to benefit more than the FEM from very high levels of local accuracy. Through the use of regularized algorithms, which require only numerical integration, p-version refinement in the BEM is easily extended to include any set of basis functions with no significant increase in programming complexity. Numerical results show that with interpolations as high as 12th and 16th order, one can expect reductions in error of as much as five orders of magnitude over comparable algorithms of similar system size. For two-dimensional problems, it is also shown that, for a given level of error, one can expect reductions in system size by an order of magnitude, leading to a reduction in computational expense for conventional algorithms by three orders of magnitude (direct solution of a dense system scales with the cube of its size). Copyright © 2003 John Wiley & Sons, Ltd.
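The paper's own algorithms are not reproduced here, but the abstract's central point, that once integration is purely numerical, raising the interpolation order only means supplying a different basis set, can be illustrated with a minimal Python sketch. The Chebyshev-Lobatto nodes, Gauss-Legendre quadrature, and the smooth placeholder integrand below are all assumptions for illustration, not choices taken from the paper; a regularized formulation is assumed to leave the quadrature rule facing a bounded integrand.

```python
# Minimal sketch (not the paper's implementation): Lagrange interpolation of
# arbitrary order p on a reference boundary element [-1, 1], integrated with
# an unchanged Gauss-Legendre routine. Going from p = 2 to p = 16 only changes
# the node set passed in; no analytic integration formulae are needed.
import numpy as np

def chebyshev_nodes(p):
    """p+1 Chebyshev-Lobatto nodes on [-1, 1]; well conditioned at high p."""
    return np.cos(np.pi * np.arange(p, -1, -1) / p)

def lagrange_basis(nodes, x):
    """Evaluate every Lagrange basis polynomial for `nodes` at the points `x`.
    Returns an array of shape (len(nodes), len(x))."""
    nodes = np.asarray(nodes, dtype=float)
    x = np.asarray(x, dtype=float)
    phi = np.ones((nodes.size, x.size))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j:
                phi[i] *= (x - xj) / (xi - xj)
    return phi

def element_integrals(p, kernel, n_gauss=64):
    """Integrate kernel(x) * phi_i(x) over the reference element for every
    basis function phi_i of order p, using plain Gauss-Legendre quadrature."""
    xg, wg = np.polynomial.legendre.leggauss(n_gauss)
    phi = lagrange_basis(chebyshev_nodes(p), xg)   # shape (p+1, n_gauss)
    return phi @ (kernel(xg) * wg)                 # shape (p+1,)

# Hypothetical smooth integrand standing in for a regularized kernel, which
# presents the quadrature rule with no strong singularity.
smooth_kernel = lambda x: np.log(2.0 + x)
for p in (2, 8, 16):  # quadratic (typical h-version) vs. high p-version orders
    print(p, element_integrals(p, smooth_kernel)[:3])
```

Note the design point this sketch tries to mirror: `element_integrals` is identical for every order, so the programming complexity of p-refinement is confined to generating nodes and evaluating basis functions.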
