Open Access
Deep scientific computing requires deep data
Author(s) - William Kramer, Arie Shoshani, D. Agarwal, Brent Draney, Guojun Jin, Gregory F. Butler, John Hules
Publication year - 2004
Publication title - IBM Journal of Research and Development
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.47
H-Index - 95
eISSN - 2151-8556
pISSN - 0018-8646
DOI - 10.1147/rd.482.0209
Subject(s) - terabyte, petabyte, computer science, big data, data science, computer data storage, database, data mining, operating system
Increasingly, scientific advances require the fusion of large amounts of complex data with extraordinary amounts of computational power. The problems of deep science demand deep computing and deep storage resources. In addition to teraflop-range computing engines with their own local storage, facilities must provide large data repositories of the order of 10-100 petabytes, and networking to allow the movement of multi-terabyte files in a timely and secure manner. This paper examines such problems and identifies associated challenges. The paper discusses some of the storage systems and data management methods that are needed for computing facilities to address the challenges and describes some ongoing improvements.
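To make the "timely manner" requirement concrete, the following is a back-of-the-envelope sketch, not taken from the paper: the link rates and the 80% protocol-efficiency derating are assumptions, chosen only to illustrate how long a single multi-terabyte file takes to cross networks of various speeds.

```python
# Rough transfer times for multi-terabyte files, illustrating why
# network provisioning matters at the scales the paper discusses.
# The link speeds and efficiency factor below are assumptions for
# illustration, not figures from the paper.

TB = 10**12  # bytes in a terabyte (decimal convention)

def transfer_hours(file_bytes: float, link_gbps: float,
                   efficiency: float = 0.8) -> float:
    """Hours to move a file over a link at the given nominal rate,
    derated by an assumed protocol-efficiency factor."""
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return file_bytes / bytes_per_sec / 3600

for gbps in (1, 10, 100):
    print(f"10 TB over {gbps:>3} Gb/s: "
          f"{transfer_hours(10 * TB, gbps):6.1f} h")
```

Under these assumptions a 10 TB file needs roughly a day over a 1 Gb/s link but under 20 minutes at 100 Gb/s, which is why the paper treats networking as a first-class facility resource alongside compute and storage.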
