The Automated Model Benchmarking (AMBER) framework
We evaluate the ability of CLASSIC to reproduce land surface processes by comparing model outputs against quasi-observational data sets derived from remote sensing products, eddy covariance flux tower measurements (FLUXNET2015), and streamflow measurements. To summarize model performance across different statistical metrics, we employ a skill score system originally developed by the International Land Model Benchmarking (ILAMB) framework. To tailor this approach to our needs, we have implemented ILAMB's skill score system in an R package, the Automated Model Benchmarking (AMBER) framework.
AMBER evaluates model performance through scores that range from zero to one, where higher values indicate better performance. These scores are computed for each variable in five steps: (1) computation of a statistical metric, (2) nondimensionalization, (3) conversion to the unit interval, (4) spatial integration, and (5) averaging the scores obtained from different statistical metrics. The metrics considered are the bias, root-mean-square error, phase shift, interannual variability, and spatial distribution.
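As a rough illustration of steps (1) through (4), the sketch below applies them to the bias metric. It is written in Python rather than AMBER's R, the array shapes and function name are hypothetical, and it assumes the ILAMB-style conventions of nondimensionalizing by the reference standard deviation and mapping errors to the unit interval with an exponential; AMBER's actual formulas may differ in detail.

```python
import numpy as np

def bias_score(mod, ref, area):
    """Hypothetical sketch of an ILAMB/AMBER-style bias score.

    mod, ref : 3-D arrays (time, lat, lon) of model and reference data
    area     : 2-D array (lat, lon) of grid-cell areas for weighting
    """
    # (1) statistical metric: time-mean bias at each grid cell
    bias = mod.mean(axis=0) - ref.mean(axis=0)
    # (2) nondimensionalization: scale by the reference standard deviation
    eps = np.abs(bias) / ref.std(axis=0)
    # (3) conversion to the unit interval: larger errors give scores near 0,
    #     a perfect match gives a score of 1
    s = np.exp(-eps)
    # (4) spatial integration: area-weighted mean over all grid cells
    return np.average(s, weights=area)
```

Step (5) would then average this score with the scores computed analogously for the other metrics (root-mean-square error, phase shift, interannual variability, and spatial distribution) to yield a single overall score for the variable.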
Site-level benchmarking with eddy covariance flux tower sites
Meteorological and initial conditions files for selected EC flux tower sites have been processed for model benchmarking and for general use by CLASSIC users. Example outputs from the most recent release of CLASSIC are also provided for these sites.
The following sites are included in the site-level benchmarking package. Most are from the FLUXNET2015 data release. If you plan to use these sites in a publication, please follow the FLUXNET2015 data policy.