Open Access
The International Land Model Benchmarking (ILAMB) System: Design, Theory, and Implementation
Author(s) -
Collier Nathan,
Hoffman Forrest M.,
Lawrence David M.,
Keppel-Aleks Gretchen,
Koven Charles D.,
Riley William J.,
Mu Mingquan,
Randerson James T.
Publication year - 2018
Publication title -
Journal of Advances in Modeling Earth Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.03
H-Index - 58
ISSN - 1942-2466
DOI - 10.1029/2018MS001354
Subject(s) - benchmarking, computer science, suite, set (abstract data type), forcing (mathematics), data mining, earth system science, data science, systems engineering, engineering, marketing, business, ecology, archaeology, climatology, biology, history, programming language, geology
The increasing complexity of Earth system models has inspired efforts to quantitatively assess model fidelity through rigorous comparison with best available measurements and observational data products. Earth system models exhibit a high degree of spread in predictions of land biogeochemistry, biogeophysics, and hydrology, which are sensitive to forcing from other model components. Based on insights from prior land model evaluation studies and community workshops, the authors developed an open source model benchmarking software package that generates graphical diagnostics and scores model performance in support of the International Land Model Benchmarking (ILAMB) project. Employing a suite of in situ, remote sensing, and reanalysis data sets, the ILAMB package performs comprehensive model assessment across a wide range of land variables and generates a hierarchical set of web pages containing statistical analyses and figures designed to provide the user insights into strengths and weaknesses of multiple models or model versions. Described here is the benchmarking philosophy and mathematical methodology embodied in the most recent implementation of the ILAMB package. Comparison methods unique to a few specific data sets are presented, and guidelines for configuring an ILAMB analysis and interpreting resulting model performance scores are discussed. ILAMB is being adopted by modeling teams and centers during model development and for model intercomparison projects, and community engagement is sought for extending evaluation metrics and adding new observational data sets to the benchmarking framework.
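The abstract notes that the ILAMB package "scores model performance" against observational references. As a rough illustration of that idea (not the package's actual implementation, which operates on area-weighted, spatially and temporally resolved fields across many variables), the sketch below maps a model's relative bias against a reference series to a score in (0, 1], where 1 is a perfect match. The function names and the normalization choice here are hypothetical simplifications.

```python
import numpy as np

def exponential_score(relative_error):
    """Map a nonnegative relative error to a score in (0, 1]; 1 means perfect.

    An exponential mapping of this general form is a common way to turn
    unbounded errors into bounded, comparable scores.
    """
    return float(np.exp(-relative_error))

def bias_score(model, reference):
    """Score a model's mean bias against a reference series.

    Simplified illustration: the bias is normalized by the magnitude of
    the reference mean. The real ILAMB scoring works on gridded,
    area-weighted fields rather than a single time series.
    """
    bias = np.mean(model) - np.mean(reference)
    relative_error = np.abs(bias) / np.abs(np.mean(reference))
    return exponential_score(relative_error)

# Toy example: a model series with a small positive bias scores near 1.
reference = np.array([2.0, 2.5, 3.0, 2.5])
model = np.array([2.2, 2.4, 3.1, 2.6])
print(bias_score(model, reference))
```

A mapping like this lets very different variables (e.g., gross primary production and surface albedo) be scored on a common dimensionless scale before aggregation into per-model summary scores.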
