Improving I/O bandwidth with Cray DVS Client‐side Caching
Author(s) - Bryce Hicks
Publication year - 2017
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.4347
Subject(s) - computer science , cache , latency , bandwidth (computing) , client side , operating system , benchmark , server , throughput , reuse , parallel computing , computer network
Summary Cray's Data Virtualization Service (DVS) is an I/O forwarder that provides access to native parallel filesystems and to the Cray DataWarp application I/O accelerator at the largest system scales while maximizing data throughput. This paper introduces DVS Client‐side Caching, a new DVS option that improves I/O bandwidth, reduces network latency costs, and decreases the load on both DVS servers and the backing parallel filesystems. Client‐side caching allows application writes to target a local in‐memory cache on compute nodes, providing low latency and high throughput for write operations. It also aggregates data before writing it back to the filesystem, so fewer network and parallel filesystem operations are required. Caching further enables applications to reuse previously read or written data without additional network overhead. The paper discusses the motivations for this work, the detailed design and architecture, acceptable use cases, benchmark results, and possibilities for future improvement.
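The core idea the abstract describes, buffering writes in node-local memory and flushing them to the backing store in aggregated batches, can be illustrated with a toy sketch. This is not Cray's implementation; the class, threshold, and counters below are hypothetical and serve only to show why write aggregation reduces the number of backing-store operations.

```python
class WriteBackCache:
    """Toy client-side write-back cache (illustrative only)."""

    def __init__(self, backing, flush_threshold=4):
        self.backing = backing            # dict standing in for the parallel filesystem
        self.dirty = {}                   # in-memory cache of pending writes
        self.flush_threshold = flush_threshold
        self.backing_ops = 0              # count of operations hitting the backing store

    def write(self, key, value):
        # Low latency: the write lands in local memory only.
        self.dirty[key] = value
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def read(self, key):
        # Data reuse: cached data is served without touching the backing store.
        if key in self.dirty:
            return self.dirty[key]
        self.backing_ops += 1
        return self.backing[key]

    def flush(self):
        # Aggregation: many buffered writes become one backing-store operation.
        if self.dirty:
            self.backing.update(self.dirty)
            self.backing_ops += 1
            self.dirty.clear()


backing = {}
cache = WriteBackCache(backing)
for i in range(8):
    cache.write(f"block{i}", i)
cache.flush()
# Eight application writes reach the backing store in only two aggregated flushes.
```

With a flush threshold of 4, the eight writes trigger two aggregated flushes instead of eight individual filesystem operations, which is the bandwidth-saving behavior the paper's caching option aims for at scale.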