How to improve soil modeling to maximize climate and farm benefits


Efforts to curb agricultural greenhouse gas emissions and increase soil carbon storage are picking up steam to help mitigate the impacts of climate change. To have maximum impact, we need ways to reliably quantify their outcomes. 

Direct measurement of the impacts of climate-smart agricultural practices is imperative to instill confidence, but measurements aren't forward-looking and can't be done everywhere. Enter process-based models (e.g., DNDC, DayCent), which can estimate these changes more rapidly and across broader areas.

Process-based models are useful tools, but they have limitations, and many researchers and practitioners remain uncertain about how to use them most effectively. Yet even as skepticism about their accuracy persists, interest in them is skyrocketing, making it all the more important that the community work together to improve them.

A new report led by Environmental Defense Fund digs into how carbon project developers and companies are using process-based models across projects to explore current challenges, identify knowledge gaps and recommend improvements.  

The current process-based modeling landscape 

Process-based models are computer simulations that mimic ecosystems and the interactions between their biological, physical and chemical processes. Users can predict how agricultural ecosystems might change over time and evaluate the effects of management practices such as reduced tillage and cover cropping, while accounting for variables like soil type, climate and cropping systems.  
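To make the basic idea concrete, the toy sketch below shows, in a few lines of Python, how a simulation of this kind works: a soil carbon stock changes each year as a function of carbon inputs (which practices like cover cropping increase) and decomposition (which depends on soil and climate). The single-pool structure, parameter values and function names here are illustrative assumptions only; real models such as DNDC and DayCent track many interacting carbon and nitrogen pools driven by daily weather, soil and management data.

```python
# Toy, single-pool soil carbon simulation -- an illustrative assumption,
# NOT the equations used by DNDC or DayCent.

def simulate_soc(years, soc0, c_input, k_decomp, humification=0.15):
    """Project soil organic carbon (t C/ha) forward one year at a time.

    soc0         -- initial soil organic carbon stock (t C/ha)
    c_input      -- annual carbon input from residues/roots (t C/ha/yr)
    k_decomp     -- first-order decomposition rate (1/yr); depends on soil and climate
    humification -- assumed fraction of inputs stabilized as soil carbon
    """
    soc = soc0
    trajectory = [soc]
    for _ in range(years):
        soc = soc + humification * c_input - k_decomp * soc
        trajectory.append(soc)
    return trajectory

# Compare a baseline rotation with a hypothetical cover-crop scenario that adds
# extra residue carbon each year; the difference approximates the modeled
# effect of the practice change.
baseline = simulate_soc(years=10, soc0=50.0, c_input=3.0, k_decomp=0.012)
cover_crop = simulate_soc(years=10, soc0=50.0, c_input=4.0, k_decomp=0.012)
print(f"10-year SOC change, baseline:   {baseline[-1] - baseline[0]:+.2f} t C/ha")
print(f"10-year SOC change, cover crop: {cover_crop[-1] - cover_crop[0]:+.2f} t C/ha")
```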

These models are extremely useful for predicting farm practice outcomes and comparing impacts of different hypothetical scenarios, but they also have shortcomings.  

The models are simplified representations of reality. For these models to be manageable, scientists must boil down the complexity of ecosystems to a small set of equations, which means they don’t work equally well in all situations. Models are also only as good as the data used to build and test them. If the data a model uses are flawed or biased, model outcomes will be, too.   

Process-based models are also difficult to use and interpret without experience running them. The few experts who have this experience disagree over how best to apply models for real-world applications like carbon markets.  

Pathways to improvement 

One of the biggest hindrances to improvement is a lack of transparency. We enlisted a group of experts to examine current projects and protocols to better understand how they function and identify shortcomings. The report lays out, step-by-step, how models are applied in GHG reduction and removal projects so that anyone who is interested in or whose work touches this topic can join the conversation and understand current challenges.

The report finds that standard-setting bodies and project developers using process-based models should focus on three key areas: 

  1. Ensure the data used to validate the model reflect the project. Model validation helps ensure that process-based models are used reliably. Validation data need to match not only the biophysical conditions of a project, such as soil and crop type, but also its duration and spatial scale.
  2. Maintain consistency in the modeling workflow across the entire project. Once a model is validated for use in a project, protocols require the same model version to be used for the rest of the project. Accurately assessing change requires consistency in how the model is set up, the data that are used and how the model outputs are processed.
  3. Account for space and time when estimating project-level uncertainty. Where and when data are collected matters for how uncertain project-level estimates are, and accounting for that helps keep greenhouse gas mitigation claims conservative. A simplified sketch of the validation and uncertainty steps follows this list.
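The minimal, hypothetical Python sketch below illustrates what the first and third points can look like in practice: comparing model output against measurements that match the project's conditions, and discounting a project-level claim for uncertainty. The site numbers, error metrics and one-standard-error deduction are illustrative assumptions, not the equations prescribed by any particular protocol.

```python
# Hypothetical sketch of two checks from the list above: (1) validating model
# output against measurements that match the project's conditions, and
# (2) turning between-field variability into a conservative project claim.
# All values and the one-standard-error deduction are illustrative assumptions.
import math
import statistics

# (1) Paired modeled vs. measured SOC change (t C/ha) at validation sites
# chosen to match the project's soils, crops, duration and spatial scale.
modeled  = [1.8, 2.4, 0.9, 1.5, 2.1]
measured = [1.5, 2.6, 0.7, 1.8, 1.9]
errors = [m - o for m, o in zip(modeled, measured)]
bias = statistics.mean(errors)
rmse = math.sqrt(statistics.mean(e * e for e in errors))
print(f"Validation bias: {bias:+.2f} t C/ha, RMSE: {rmse:.2f} t C/ha")

# (2) Aggregate modeled per-field changes, then deduct one standard error so
# the claimed mitigation stays on the conservative side of the estimate.
field_estimates = [2.0, 1.6, 2.3, 1.1, 1.9, 2.5]  # modeled SOC change per field
mean_change = statistics.mean(field_estimates)
std_err = statistics.stdev(field_estimates) / math.sqrt(len(field_estimates))
print(f"Mean modeled change: {mean_change:.2f} t C/ha")
print(f"Conservative claim:  {mean_change - std_err:.2f} t C/ha")
```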

Advancing modeling standards for impactful climate action 

Models shouldn’t be static. We must improve them over time if we want to get the most out of them. To achieve this, we need expertise from the process-modeling community to fill knowledge gaps. Standard-setting bodies need to consider recommendations discussed in the report as well as ongoing academic and industry research as they update their protocols. Government, academia, nongovernmental organizations and the private sector should work collaboratively to collect and share high-quality data for model use.

The United States Department of Agriculture’s soil carbon monitoring efforts funded by the Inflation Reduction Act provide an invaluable opportunity to acquire such data. Data collected from carbon markets and supply chain programs can also be leveraged for model benchmarking and improvement.  

Continually improving process-based model use and interpretation will result in more accurate forecasts of greenhouse gas emissions and reductions. These updates are critical to both equipping farmers to make more informed soil and crop management decisions and enabling effective policy and projects. The result will be better outcomes for agricultural systems and the climate alike.  


One Comment

  1. Posted May 3, 2024 at 7:56 am

    Interesting, but I don't completely agree. "Models are also only as good as the data used to build and test them" – no. The real shortcoming is the model parameterisation, which also includes providing the right input data (drivers as well as state variables at model initialisation) when executing them. This is the real (and hard) problem, as all parameters and drivers are spatially variable when applied in an NBS context. Hence we need to combine those process models with physically-based radiative transfer models to directly simulate (location by location) the spectral signature of the system/crop. In this way, remote sensing (EO) observations can be used to calibrate the models for each and every location during the model application phase. At Mantle Labs we call this approach the "Digital Twin". The digital twin is a computer simulation model which not only simulates the biological/chemical and physical processes in the canopy/soil (DayCent, RothC, etc.), but also simulates the resulting spectral signature (e.g. "color", when referring to the human visual system). As in weather simulation, a model cannot simply be "calibrated" using a couple of years of experimental data. What is needed is continuous control of its behavior over space and time. In the case of climatology/meteorology, the models are the equations and the "controls" are the weather stations. In the context of process models for NBS, it is hard for me to see how this can be achieved without remote sensing and the described data assimilation.

    Best regards
    Clement ATZBERGER
    Head of Research – Mantle Labs Ltd
    London, UK