Using data to attribute causal links in climate, and our book

This post was jointly authored with Prof Sebastian Reich, University of Potsdam. An edited version will appear on the Cambridge University Press blog page.

Computer-generated forecasts play an important role in our daily lives, for example in predicting the weather or the economy. Forecasts combine computational models of the relevant dynamical processes with measured data. Errors are always present, arising from incomplete observations and from imperfections in the model, so forecasts must be constantly calibrated against new data. In the geosciences, this process is called data assimilation. The introduction of probabilistic approaches to forecasting and data assimilation has been a major breakthrough in the field, facilitated by powerful supercomputers. Instead of producing a single best estimate, probabilistic forecast methods provide a range of possible future scenarios, together with their estimated probabilities, computed from the model in light of the measured data. Even more recently, we have seen data assimilation techniques applied beyond forecasting. For example, data assimilation is being used for paleoclimate reconstructions, to evaluate different modelling approaches, and to test for causality.

Consider, for example, a meteorologist trying to improve a subgrid scheme for tropical cloud convection in an atmosphere model. Does the model represent reality better with the new cloud scheme or with the old one? As well as studying the qualitative behaviour of the model, one can make a quantitative comparison using observational data. Given a model, and knowledge of the observation error, one can compute a quantity called the “model evidence”, which measures how likely the model is to produce a state that is close to the observed data. The model evidences of two different models can be compared against the same set of observations, and the relative size of the two values can then be assessed in a statistical hypothesis test.
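
To make this concrete, here is the standard Bayesian formulation in generic notation (a sketch only, not tied to the notation of any particular paper). Writing y for the observations and x for the model state, the model evidence of a model M is the marginal likelihood

\[
p(y \mid M) = \int p(y \mid x)\, p(x \mid M)\, \mathrm{d}x,
\]

where p(y | x) encodes the observation error and p(x | M) describes the states the model considers plausible. Two models M_1 and M_2 are then compared, against the same observations y, through the ratio

\[
B_{12} = \frac{p(y \mid M_1)}{p(y \mid M_2)},
\]

often called the Bayes factor; values well above one favour M_1, values well below one favour M_2.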

This all sounds like a great methodology for model validation. The difficulty is that computing the model evidence requires an integration over the entire model state space; for an operational weather forecast model, this can mean an integral over millions of dimensions. In a preprint posted recently on the arXiv (http://arxiv.org/abs/1605.01526), Carrassi and coauthors describe a framework for estimating the model evidence using data assimilation techniques.

Data assimilation solves the following problem: given a forecast model plus observational data, what is the most likely state of the observed system? In ensemble data assimilation, uncertainty about the modelled system is represented by an ensemble of model states, capturing the spread of likely states under the model. A mean forecast can be calculated by computing ensemble averages, and the uncertainty in the forecast can be measured by computing ensemble variances. As time progresses, the ensemble members are propagated by the model and then corrected according to the observations. As a mathematical discipline, data assimilation forms part of the broader field of statistical inference and inverse problems, but it concentrates on the challenge of dealing with huge models and vast amounts of data whilst needing to deliver a forecast before the moment has passed. It turns out that model evidence estimation can be reformulated to make use of the tools and algorithms of data assimilation. Carrassi and coauthors demonstrate in their paper that this approach can be used in practice, but note that the different approximations made in the data assimilation process can lead to differences in the results.
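
As a toy illustration of the ensemble idea (a minimal sketch with a made-up two-dimensional model and an identity observation operator, not the algorithm of Carrassi and coauthors), the following Python snippet propagates an ensemble of states under two candidate models, estimates each model evidence by averaging the Gaussian observation likelihood over the forecast ensemble, and prints the resulting log Bayes factor.

import numpy as np

rng = np.random.default_rng(0)

def forecast_ensemble(model_step, x0_mean, x0_cov, n_members, n_steps):
    # Draw an initial ensemble and propagate each member with the model.
    ens = rng.multivariate_normal(x0_mean, x0_cov, size=n_members)
    for _ in range(n_steps):
        ens = np.array([model_step(x) for x in ens])
    return ens

def log_model_evidence(ens, y_obs, obs_cov):
    # Monte Carlo estimate of log p(y | M): average the Gaussian
    # observation likelihood p(y | x_i) over the forecast ensemble.
    d = y_obs.shape[0]
    inv_cov = np.linalg.inv(obs_cov)
    log_norm = -0.5 * (d * np.log(2.0 * np.pi) + np.log(np.linalg.det(obs_cov)))
    log_lik = np.array([log_norm - 0.5 * (y_obs - x) @ inv_cov @ (y_obs - x)
                        for x in ens])
    m = log_lik.max()  # log-mean-exp for numerical stability
    return m + np.log(np.mean(np.exp(log_lik - m)))

# Two hypothetical 'models': the true drift and a biased variant.
true_step = lambda x: 0.9 * x + 0.1
biased_step = lambda x: 0.9 * x + 0.5

# A synthetic observation generated from the true model plus observation error.
x_truth = np.array([1.0, 0.0])
for _ in range(10):
    x_truth = true_step(x_truth)
obs_cov = 0.05 * np.eye(2)
y_obs = x_truth + rng.multivariate_normal(np.zeros(2), obs_cov)

# Forecast ensembles under each model, started from the same uncertain initial state.
x0_mean, x0_cov = np.array([1.0, 0.0]), 0.1 * np.eye(2)
ens_true = forecast_ensemble(true_step, x0_mean, x0_cov, 200, 10)
ens_biased = forecast_ensemble(biased_step, x0_mean, x0_cov, 200, 10)

log_bayes_factor = (log_model_evidence(ens_true, y_obs, obs_cov)
                    - log_model_evidence(ens_biased, y_obs, obs_cov))
print(f"log Bayes factor (true model vs biased model): {log_bayes_factor:.2f}")

In a real system the observation operator, the forecast model and the approximations made inside the data assimilation algorithm all complicate this picture, which is precisely the issue the paper addresses.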

In another recent paper by many of the same authors (Hannart et al., Climatic Change, 2016), this data-driven model testing approach was advocated for testing hypotheses of causal attribution of weather and climate-related events. In their words, the challenge of causal attribution is to evaluate “the extent to which a given external climate forcing — such as solar irradiation, greenhouse gas emissions, ozone or aerosol concentrations — has changed the probability of occurrence of an event of interest”. This again requires the computation of two model evidences: one for the null hypothesis in which the forcing is absent, and another in which it is present. The exciting aspect of this type of calculation is that, although it is computationally intensive, it makes use of quantities that are already produced during the operational data assimilation process, and these by-products can then be harnessed to solve causal attribution problems.
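
In the same generic notation as above (again only a sketch), the two hypotheses give two evidences: p(y | H_0) for a counterfactual model run without the external forcing, and p(y | H_1) for the factual model that includes it. Attribution is then assessed through the ratio

\[
\frac{p(y \mid H_1)}{p(y \mid H_0)},
\]

exactly as in the model comparison described above. In the event-attribution literature, such comparisons are often summarised via the fraction of attributable risk, FAR = 1 - p_0/p_1, where p_0 and p_1 are the probabilities of the event of interest under the two hypotheses.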

Model evidence is an exciting emerging research area in data assimilation, with the potential for significant impact across the sciences. It makes use of a suite of data assimilation tools and concepts that are introduced in our book “Probabilistic Forecasting and Bayesian Data Assimilation”. In the book we provide a gentle introduction to the main mathematical concepts, concentrating on finite-dimensional systems and discrete-time models to avoid technicalities that can be explored later, once the basic concepts are understood. We introduce a range of ensemble data assimilation algorithms, and the final chapter addresses the challenging topic of model evidence, introducing the mathematical concepts and tools used in the recent papers discussed above.

Probabilistic Forecasting and Bayesian Data Assimilation
