Surviving the deluge of biosimulation data
Author(s) - Hospital Adam, Battistini Federica, Soliva Robert, Gelpí Josep Lluis, Orozco Modesto
Publication year - 2019
Publication title - Wiley Interdisciplinary Reviews: Computational Molecular Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 5.126
H-Index - 81
eISSN - 1759-0884
pISSN - 1759-0876
DOI - 10.1002/wcms.1449
Subject(s) - computer science, massively parallel, data science, interoperability, computational science, theoretical computer science, data mining, parallel computing, world wide web
New hardware, in particular massively parallel and graphics processing unit (GPU)-based computers, has boosted molecular simulations to levels that would have been unthinkable just a decade ago. At the classical level, it is now possible to perform atomistic simulations of systems containing over 10 million atoms and to collect trajectories extending into the millisecond range. Such achievements are moving biosimulations into the mainstream of structural biology research, complementary to experimental studies. The drawback of this impressive development is the management of data, especially at a time when the inherent value of data is becoming ever more apparent. In this review, we summarize the main characteristics of (bio)simulation data, how we can store them, how they can be reused for new, unexpected projects, and how they can be transformed to make them FAIR (findable, accessible, interoperable, and reusable).
This article is categorized under:
Molecular and Statistical Mechanics > Molecular Dynamics and Monte-Carlo Methods
Computer and Information Science > Databases and Expert Systems
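To give a sense of the scale behind these figures, the rough Python calculation below estimates the raw storage needed for the coordinates of a 10-million-atom system sampled over one millisecond. The single-precision format and the 10 ps frame-saving interval are illustrative assumptions, not values taken from the article.

# Back-of-the-envelope estimate of trajectory storage for a large simulation.
# Assumptions (not from the article): float32 coordinates, one frame every 10 ps.

n_atoms = 10_000_000                        # system size cited in the abstract
bytes_per_frame = n_atoms * 3 * 4           # x, y, z stored as 4-byte floats

simulated_time_s = 1e-3                     # one millisecond of simulated time
saving_interval_s = 10e-12                  # assumed saving interval: 10 ps
n_frames = int(simulated_time_s / saving_interval_s)

total_bytes = n_frames * bytes_per_frame
print(f"Frames stored: {n_frames:,}")
print(f"Uncompressed coordinate data: {total_bytes / 1e15:.1f} PB")

Under these assumptions the trajectory alone amounts to roughly 12 petabytes of uncompressed coordinates, which illustrates why storage, reuse, and FAIR-compliant sharing of biosimulation data have become pressing concerns.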
