Accelerating Value‐at‐Risk estimation on highly parallel architectures
Author(s) -
Dixon M. F.,
Chong J.,
Keutzer K.
Publication year - 2011
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.1790
Subject(s) - speedup , computer science , multi core processor , value at risk , analytics , monte carlo method , computation , parallel computing , graphics , graphics processing unit , risk management , computational finance , finance , data science , algorithm , operating system , statistics , mathematics , economics
SUMMARY Values of portfolios in modern financial markets may change precipitously with changing market conditions. The utility of financial risk management tools depends on whether they can estimate the Value‐at‐Risk (VaR) of portfolios on demand, when key decisions need to be made. However, VaR estimation of portfolios relies on the Monte Carlo method, a computationally intensive technique often run as an overnight batch job. With the proliferation of highly parallel computing platforms such as multicore CPUs and manycore graphics processing units (GPUs), teraFLOPS of computational capability are now available on a desktop computer, enabling the VaR of large portfolios with thousands of risk factors to be computed within a fraction of a second. Achieving such performance in practice requires the assimilation of expertise in the following three areas: (i) application domain; (ii) statistical analytics; and (iii) parallel computing. This paper demonstrates that these areas of expertise inform optimization perspectives that, when combined, lead to a 127× speedup on our CPU‐based implementation and a 538× speedup on our GPU‐based implementation. Copyright © 2011 John Wiley & Sons, Ltd.
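To make the abstract's setting concrete, the following is a minimal sketch of Monte Carlo VaR estimation, the computational kernel the paper accelerates. It assumes a simple delta (linear) approximation of portfolio P&L under normally distributed risk-factor returns; the function name, parameters, and example figures are illustrative, not taken from the paper.

```python
import numpy as np

def monte_carlo_var(deltas, cov, n_scenarios=100_000, alpha=0.99, seed=0):
    """Estimate portfolio Value-at-Risk by Monte Carlo (delta approximation).

    deltas: portfolio sensitivities to each risk factor
    cov:    covariance matrix of risk-factor returns over the horizon
    alpha:  confidence level (e.g. 0.99 for 99% VaR)
    """
    rng = np.random.default_rng(seed)
    # Draw correlated risk-factor return scenarios.
    shocks = rng.multivariate_normal(np.zeros(len(deltas)), cov, size=n_scenarios)
    # Linear approximation of portfolio P&L in each scenario.
    pnl = shocks @ deltas
    # VaR is the loss at the (1 - alpha) quantile of the P&L distribution.
    return -np.quantile(pnl, 1.0 - alpha)

# Illustrative example with three risk factors.
deltas = np.array([1.0e6, -0.5e6, 0.25e6])
cov = 1e-4 * np.array([[1.0, 0.3, 0.1],
                       [0.3, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
var_99 = monte_carlo_var(deltas, cov)
```

The scenario-generation and P&L-evaluation steps are embarrassingly parallel across scenarios, which is why this workload maps well onto the multicore CPU and GPU platforms the paper targets.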