Diffusion models of Earth’s Outer Radiation Belt using Stochastic Parameterisations
Space weather is the name given to the natural variability of the plasma and magnetic field conditions in near-Earth space. 21st-century technology is increasingly reliant on space-based assets and infrastructure that are vulnerable to extreme space weather events. Due to the sparse nature of in-situ measurements, and the relative infancy of numerical space plasma physics models, we lack the ability to predict the timing and severity of space weather disruptions, either to mitigate their effects or to plan adequately for their consequences. In this project, we focus on important improvements to the numerical modelling of the Earth’s Outer Radiation Belt: a highly variable region of energetic electrons in near-Earth space. In the Outer Van Allen Radiation Belt, electrons are trapped by the Earth’s magnetic field and can be accelerated to a significant fraction of the speed of light. At such high energies, they pose serious hazards to spacecraft hardware. Most importantly for our reliance on space-based systems, the Outer Radiation Belt encompasses orbits of great use to society (e.g. geosynchronous orbit, and the Medium Earth Orbits that host global positioning and navigation systems).
A first-principles description of collective electron motion in the Outer Radiation Belt is intractable due to the large volume and length scales involved; the physics of this region is often approximated using a set of diffusion equations in the three-dimensional phase space described by adiabatic invariants. Importantly, the diffusion coefficients in this phase space vary in time and space. Observations are used to constrain the diffusion coefficients, but current parameterisations are averaged and deterministic (e.g. for a particular level of geomagnetic activity, there is only one value of the diffusion coefficient). The key research objective in this project is to quantify and investigate the importance of variability in our descriptions of diffusion in the Outer Radiation Belt. We will apply the method of stochastic parameterisations to electron diffusion equations in order to include, for the first time, the large variability in the wave-particle interactions that drive the Outer Radiation Belt.
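As a concrete illustration of the kind of diffusion model involved, the sketch below integrates a one-dimensional radial diffusion equation for the electron phase-space density with a deterministic, Kp-dependent diffusion coefficient (here the widely used Brautigam and Albert (2000) parameterisation). The grid, time step and initial condition are illustrative; a stochastic parameterisation would replace the single Kp-to-D_LL map with a distribution of coefficients.

```python
import numpy as np

# Illustrative 1-D radial diffusion solver for phase-space density f(L, t):
#   df/dt = L^2 d/dL ( D_LL / L^2 * df/dL )
# D_LL follows the Kp-dependent form of Brautigam and Albert (2000);
# all other numbers (grid, step, initial profile) are illustrative.

def d_ll(L, kp):
    """Electromagnetic radial diffusion coefficient [1/day]."""
    return 10.0 ** (0.506 * kp - 9.325) * L ** 10

L = np.linspace(3.0, 7.0, 81)          # L-shell grid
dL = L[1] - L[0]
f = np.exp(-(L - 5.0) ** 2 / 0.5)      # initial phase-space density (arbitrary units)
dt = 1e-4                              # days; explicit scheme needs a small step
kp = 3.0                               # a single deterministic activity level

for _ in range(1000):
    D = d_ll(L, kp)                    # a stochastic scheme would sample D here
    # flux form: F = (D / L^2) df/dL evaluated at cell faces
    Dface = 0.5 * (D[1:] + D[:-1]) / (0.5 * (L[1:] + L[:-1])) ** 2
    flux = Dface * np.diff(f) / dL
    f[1:-1] += dt * L[1:-1] ** 2 * np.diff(flux) / dL
    # fixed-f boundary conditions: f[0] and f[-1] are left untouched

print(f.max())   # the peak decays as the profile diffuses
```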
Dr Clare Emily Jane Watt
Reconstruction based error detection for robust approximation of partial differential equations
Partial differential equations (PDEs) play an important role in modelling natural phenomena, including atmospheric and ocean flows. As they often lack explicit solutions, numerical methods are frequently deployed to approximate PDEs.
This is a challenging task, as these equations often feature complex structures and travelling shocks that make their numerical simulation difficult. In particular, we want to know how well the numerical method approximates the underlying physical problem. This makes error estimation a crucial component of the numerical approximation of PDEs. Error estimation gives the user an indication of the performance of the numerical method, and error estimates can also be put to good use in improving that performance.
In this regard, adaptive methods are useful, particularly when it comes to efficiently deploying computational resources throughout the domain of consideration, focusing computational effort to regions of the problem where, say, increased resolution is required. Adaptivity is often implemented through the use of a posteriori error estimates. These are error statements that give us an indication of how the error behaves locally without knowledge of the true solution to the problem.
A posteriori error estimation is a field of research in its own right, and a posteriori error estimates are frequently used alongside widely-used discretisations of partial differential equations such as finite element (FE) and finite volume (FV) methods. Finite difference (FD) methods, by comparison, have received much less attention with regard to a posteriori error estimation, predominantly due to the lack of a variational structure for the problem.
In this work, we use reconstructions in order to construct efficient and reliable a posteriori error estimates for FD schemes. Reconstructions are post-processors of the computed solution which can be endowed with favourable characteristics, such as optimality – the ability to converge at the same rate as the underlying numerical scheme.
We present a framework for creating reconstructions for classes of well-used FD schemes, and we implement the a posteriori error estimates produced in this way for a range of PDE problems. In particular, we also use them as drivers for mesh adaptivity. We find that our results compare favourably to well-used estimates in the literature.
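To make the reconstruction idea concrete, the sketch below applies it to the first-order upwind scheme for linear advection: the computed solution is post-processed into a piecewise-linear space-time reconstruction whose PDE residual is fully computable and serves as the error estimator. This is an illustrative toy (the particular residual and norms are my choices), not the framework developed in the thesis; note that the estimator converges at the same first-order rate as the true error, the optimality property described above.

```python
import numpy as np

# Upwind FD for u_t + u_x = 0 with periodic BCs and a known exact solution.
# The reconstruction is the piecewise-linear space-time interpolant of the
# nodal values; its (computable) PDE residual is accumulated as an estimator.

def upwind_estimate(n):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = 1.0 / n
    dt = 0.5 * dx                      # CFL number 0.5
    u = np.sin(2 * np.pi * x)
    t, T = 0.0, 0.5
    est = 0.0
    while t < T - 1e-12:
        unew = u - dt / dx * (u - np.roll(u, 1))
        # residual of the linear-in-time reconstruction at the half step,
        # using a central spatial derivative of the time-averaged state
        umid = 0.5 * (u + unew)
        resid = (unew - u) / dt + (np.roll(umid, -1) - np.roll(umid, 1)) / (2 * dx)
        est += dt * np.sqrt(dx * np.sum(resid ** 2))   # L1-in-time, L2-in-space
        u, t = unew, t + dt
    err = np.sqrt(dx * np.sum((u - np.sin(2 * np.pi * (x - T))) ** 2))
    return err, est

e1, eta1 = upwind_estimate(64)
e2, eta2 = upwind_estimate(128)
print(e1 / e2, eta1 / eta2)   # both ratios near 2: estimator and error are first order
```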
Dr Nikos Katzourakis; Dr Tristan Pryer
Solving the optimisation problem of ensembles of weak-constraint 4DVar with fixed initial conditions
The two problems highlighted above will be addressed as follows. Firstly, we can reformulate the present scheme as an approximation to a so-called particle filter. This particle-filter interpretation allows for a reformulation of the optimisation problem to a simpler one in which the so-called background term is absent, and the model error term becomes dominant. The emphasis of the project will be on finding ways to solve this new optimisation problem that arises when the ensemble of variational problems is viewed as an ensemble of particles in a particle filter. This has two aspects, namely, the optimisation problem itself, and the fact that several similar optimisation problems need to be solved, one for each particle, allowing for exchange of information during the optimisations. We will start with implementing and analysing new methodologies in a simplified model, the 1-dimensional Lorenz 96 model. This will enable the student to become familiar with the problem and the potential solution methods. Then we will consider intermediate complexity models, specifically the shallow-water equation system. This will allow the student to explore the issues that will be encountered in high-dimensional systems, for which numerical efficiency becomes important. Finally, for the PhD project, we will explore how to solve these problems in the ECMWF system.
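The Lorenz 96 test bed mentioned above, together with the structure of a background-free weak-constraint cost function, can be sketched as follows. The integrator is standard; the identity observation operator, the noise scales, and the function names are illustrative assumptions, not the project's eventual formulation.

```python
import numpy as np

# Lorenz 96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, cyclic indices.
# F = 8 gives chaotic dynamics; dimension and step are the usual choices.

def l96_rhs(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def wc4dvar_cost(traj, obs, dt, sig_o=1.0, sig_q=0.1):
    """Weak-constraint cost with no background term (the particle-filter view):
    observation misfit plus a model-error penalty along the trajectory.
    Assumes an identity observation operator for simplicity."""
    J_obs = sum(np.sum((y - x) ** 2) for x, y in zip(traj, obs)) / (2 * sig_o ** 2)
    J_mod = sum(np.sum((traj[k] - rk4_step(traj[k - 1], dt)) ** 2)
                for k in range(1, len(traj))) / (2 * sig_q ** 2)
    return J_obs + J_mod

rng = np.random.default_rng(0)
n, dt = 40, 0.05
x = 8.0 + 0.01 * rng.standard_normal(n)   # perturb the unstable fixed point
for _ in range(2000):                     # spin up onto the attractor
    x = rk4_step(x, dt)
print(x.mean(), x.std())
```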
Prof Peter Jan van Leeuwen
Developing novel methods for early warning of high impact weather in Africa
Farmers in Africa are highly vulnerable to variability in the weather. Robust and timely information on risk can enable farmers to take action to improve yield (Boyd et al. 2013, DOI: 10.1038/nclimate1969). Ultimately, access to effective early warning improves global food security. Such information also forms the basis of financial instruments, such as drought insurance (Black et al. 2016, DOI: 10.1175/BAMS-D-16-0148.1). Monitoring weather conditions is, however, difficult in Africa because of the heterogeneity of the climate and the sparsity of the ground-observing network.
The study will use a combination of models and observations to address these questions. The observations will include a unique dense rain gauge network covering the whole of Ghana from the 1960s to the present day, and a newly developed merged gauge and satellite imagery rainfall product. The models will include the 29 CMIP5 models, for which the standard climate change scenarios are available, along with high-resolution models being developed for the Future Climate for Africa programme. A range of methods will be used to address these questions, including advanced statistical methods for analysing the geostatistics of extremes, and methods for analysing causality and the statistical significance of simulated signals of change in model ensembles. The areas of mathematical advance will include inferring the risk of adverse extreme events from relatively small ensembles using Block Maxima and Peaks Over Threshold approaches in extreme value theory. In this univariate setting, the aim is to explore maximum likelihood estimation further by working on the extreme value condition, which characterises max-domains of attraction, rather than simply assuming that the limiting extreme value distribution offers an exact fit to the sample maxima or to the sample exceedances. The work plan will include the following tasks:
– Modelling of the probability distributions of gauge observations and TAMSAT V3, with a focus on return periods for extreme rainfall and assessment of uncertainties in return periods for extreme rainfall
– Extension of analyses of extremes to modelled datasets for the historical period, including the new ultra-high resolution (~4km horizontal resolution) CP4Africa and other high resolution data as well as CMIP5
– Analysis of future climate projections for the climate models described above
– Multi-variate analyses of extremes of temperature and precipitation
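The block-maxima step in the tasks above can be sketched as follows: fit a generalised extreme value (GEV) distribution to annual maxima and read off a return level. Synthetic data stand in for the gauge and TAMSAT series; note that SciPy's shape convention is c = -ξ relative to the usual extreme value index.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "daily rainfall" (mm) for 50 years, standing in for real series.
rng = np.random.default_rng(42)
daily = rng.gamma(shape=0.8, scale=12.0, size=(50, 365))
annual_max = daily.max(axis=1)          # block maxima, one per "year"

# Maximum-likelihood GEV fit (scipy's c is minus the extreme value index xi),
# then the 20-year return level: the (1 - 1/20) quantile of the fitted GEV.
c, loc, scale = genextreme.fit(annual_max)
rl_20 = genextreme.ppf(1 - 1.0 / 20, c, loc=loc, scale=scale)
print(rl_20)
```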
The proposed project will significantly enhance the state-of-the-art by enabling a better understanding to be gained of the characteristic features of precipitation data in small ensembles. The project will also underpin significant operational advances with TAMSAT (www.tamsat.org.uk), enabling us to provide robust probabilistic information on extreme high rainfall via estimation of the extreme value index. The developments to TAMSAT climate services will lead to more accurate risk assessments for events that are strongly modulated by extremes, such as flash floods.
Dr Emily Catherine Louise Black
Climate Change and Infectious Diseases
Studying the interaction between Climate Change and Infectious Diseases is an emerging research area. Infectious diseases are caused by a variety of different agents, including bacteria, viruses and parasites. A changing climate is likely to have significant impacts on some infectious agents, and less on others. The most obvious example of an effect is in a vector-borne disease such as malaria, where the population of the vector is significantly affected by the climate. However, there are many different pathways to effects, such as increased risk of diseases such as cholera due to overcrowding in urban areas. An excellent summary of the area is provided by the World Health Organisation (http://www.who.int/globalchange/environment/en/chapter6.pdf). To quote from this document, there are three categories of research into the linkages between climatic conditions and infectious disease transmission:
– The first examines evidence from the recent past of associations between climate variability and infectious disease occurrence.
– The second looks at early indicators of already-emerging infectious disease impacts of long term climate change.
– The third uses the above evidence to create predictive models to estimate the future burden of infectious disease under projected climate change scenarios.
This project will develop statistical methodology for addressing challenges in this research area and apply this methodology to real data. The project will explore three different topics, which are linked by the title of the PhD, but also by the methods that underpin them.
Dr Andrew Meade
Manuel Santos Gutierrez
Operator Methods and Response in Climate Dynamics
Understanding how a physical system responds to external stimuli is fundamental in every area of science. The motivating element of this project is Earth’s climate, a complex dynamical system subject to external and anthropogenic forces that will cause changes in its natural evolution. While there are multiple studies in this regard, there is still work to be done to understand the dynamical mechanisms that can lead to smooth or abrupt changes in our planet’s climate. To this end, we will use the diverse theories of response in statistical physics to predict the change in the average state of a system undergoing perturbations. These theories are solidly founded in the transfer operator theory of dynamical systems, although there is still a long way to go before they are applicable to the study of the Earth system. The general goal of this thesis is to bridge theoretical results from dynamical systems with the study of physically relevant Earth-like systems.
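A toy version of "predicting the change in the average state under a perturbation" can be checked on an Ornstein-Uhlenbeck process, for which linear response theory is exact: a small constant forcing ε shifts the stationary mean by ε (the integrated autocorrelation, which equals 1 here). The setup and numbers are purely illustrative; using a common random seed for the perturbed and unperturbed runs makes the comparison noise-free.

```python
import numpy as np

# OU process dx = -(x - eps) dt + sqrt(2) dW. Linear response theory (here
# exact) predicts the long-time mean shifts from 0 to eps under the forcing.

def ou_mean(eps, n_steps=200_000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2 * dt) * rng.standard_normal(n_steps)
    x, total = 0.0, 0.0
    burn = n_steps // 10               # discard the transient
    for i in range(n_steps):
        x += (-x + eps) * dt + noise[i]
        if i >= burn:
            total += x
    return total / (n_steps - burn)

eps = 0.2
# same seed => common random numbers => the noise cancels in the difference
response = ou_mean(eps) - ou_mean(0.0)
print(response)   # predicted response: eps = 0.2
```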
Dr Valerio Lucarini
Non-linear transient adjustment of the Southern Ocean to wind changes
Despite its remote location, the Southern Ocean (SO) has a profound impact on our climate, as it absorbs nearly half of the carbon dioxide released by human activities and more than two thirds of the associated heat trapped in the atmosphere. The powerful Antarctic Circumpolar Current, which flows eastward around Antarctica, is primarily driven by the surface winds. Observations show that the Southern Hemisphere winds have strengthened and shifted toward the South Pole since the late 1970s: it is therefore natural to ask how the SO adjusts to such changes, and whether the SO will continue absorbing heat and CO2 at the same rate. Many studies since Hallberg and Gnanadesikan (2006) have investigated the SO response to wind changes: they reveal that the SO is rather insensitive to wind changes and that accounting for eddy-mean flow interaction is critical to understanding this phenomenon. Importantly, however, these studies have only dealt with the equilibrium response.
Recent studies, however, have highlighted that the SO response to wind changes is a superposition of multiple timescales ranging from one month to more than a decade. That is, the longest adjustment timescales of the SO are comparable to those of the observed wind changes. To understand past and future trends on the decadal timescales relevant to policy makers, it is therefore essential to consider the transient adjustment of the SO to wind changes, not just its equilibrium response. Only a few studies have addressed the transient problem, and they considered only simple linearised models of the adjustment. Although useful, the linear limit does not obviously apply.
The main goal of this project is to address this gap in knowledge by investigating the non-linear transient adjustment of the SO to wind changes. The approach will combine theoretical developments and analysis of eddy-resolving numerical simulations of the Southern Ocean circulation. A key aim is to develop a physically based mathematical model of the Southern Ocean adjustment to wind changes, including nonlinear eddy-mean flow interactions.
Dr David Ferreira
Causal approaches to climate variability and change
The objective of this project is to determine how errors in climate models are related to the discrepancy (or spread) in model projections of climate change, specifically for the atmospheric circulation aspects of climate change, such as jet stream dynamics, which play a crucial role in impactful climatic extremes such as droughts and persistent heat waves. This research is needed to reduce the uncertainty in these aspects of climate change, which is currently very substantial.
The essential scientific challenge in meeting this objective is that the concept of model ‘error’ only applies to observable aspects of climate, where the truth is known. For atmospheric circulation aspects of climate change, the observed record is dominated by year-to-year natural variability, which arises from the chaotic nature of the climate system, and is even manifest on multi-decadal timescales. Thus, the only model errors that are detectable are those concerning the shorter timescale behaviour of the natural variability, which is not related in any obvious way to the response to climate change. The approach used here to achieve the objective is to use large ensembles of simulations from different climate models to sample a large variety of model error, in order to determine the relationship between model error and the discrepancy of model projections of climate change.
Previous studies using this sort of approach have identified correlations, but it is well known that correlations do not necessarily reflect causality, and many published studies have subsequently been shown to be wrong. Thus, this project will use recently developed causal network methods, which were primarily developed for Artificial Intelligence. The climate applications so far address physical hypotheses, using observations. In this project, the concept will be extended for the first time to climate model error. By controlling for confounding factors in a systematic manner, causality can be inferred and it then becomes possible to relate model error to the uncertainty in climate projections.
The causal network framework allows marginal probabilities to be factorized into products of conditional probabilities. This provides a much more robust and parsimonious method of model evaluation using the limited observational record than is possible with correlations, thus facilitating the identification of model error. At the same time, the possibility of considering counter-factual outcomes allows a causal determination of the effect of model error on future climate projections.
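The role of conditioning in such a network can be illustrated with the simplest possible confounding example: a driver Z influences both X and Y, producing a strong raw correlation between them, while the partial correlation of X and Y given Z (a basic conditional-independence test of the kind causal discovery algorithms apply) correctly vanishes. The data and coefficients below are synthetic and illustrative.

```python
import numpy as np

# Z drives both X and Y (a confounder, e.g. a shared mode of variability),
# so X and Y are correlated; controlling for Z removes the spurious link.

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z (plus intercept) out of both."""
    zc = np.column_stack([np.ones_like(z), z])
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 5000
z = rng.standard_normal(n)                    # the confounder
x = 0.8 * z + 0.6 * rng.standard_normal(n)
y = -0.7 * z + 0.6 * rng.standard_normal(n)

raw = np.corrcoef(x, y)[0, 1]
pc = partial_corr(x, y, z)
print(raw, pc)   # raw correlation strongly negative; partial correlation near zero
```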
The major part of the research challenge will be developing the appropriate causal networks to uncover the aspects of model error that are relevant for particular target regions, where the uncertainty in the atmospheric circulation response to global warming is particularly large. The project will focus on the atmospheric storm track regions of the North Pacific, the North Atlantic, and the Southern Ocean. The physical knowledge needed to develop the networks is available in the literature on mechanisms of climate variability and seasonal climate prediction.
The novel physical sciences content of this project lies in the application of Bayesian statistical methods, which are designed for discrete systems, to a physical system that is continuous in both space and time. Thus, the network will need to be optimized to extract maximum information from the available data. Also, to our knowledge these methods have yet to be applied to model error. The non-stationary nature of the climate system (not least because of its seasonal cycle) is a further mathematical challenge, because causal network approaches are mainly used for stationary systems.
Prof Theodore Gordon Shepherd
Working with weather & climate variability in power systems planning
In the face of climate change, considerable efforts are being undertaken to reduce carbon emissions. One of the most promising pathways to sustainability is decarbonising electricity generation and electrifying other sources of emissions such as transport and heating. This requires a near-total decarbonisation of power systems in the next few decades. Making strategic decisions regarding future power system design (e.g. what power plant to build) is challenging for a number of reasons. The first is complexity: electricity grids can be immensely complicated, making the effect of e.g. an additional power plant difficult to estimate. The second is the considerable uncertainty about future technologies, fuel prices and grid improvements. Finally, especially as more weather-dependent renewables are added, there is climate-based uncertainty: we simply don’t know what the future weather will be, or how well times of high demand will line up with times of high renewable output.
This project aims to both understand the effect of climate-based uncertainty on power system planning problems and develop methodologies for robust decision-making under these unknowns. This will be done in the language of statistics, using techniques such as uncertainty quantification, data reduction and decision-making under uncertainty. Furthermore, this investigation will employ power system models, computer programs simulating the operation of an electricity grid.
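A toy uncertainty-quantification calculation of the kind involved might look as follows: estimate, by Monte Carlo sampling of weather-driven variability, the probability that demand exceeds renewable plus firm supply. Every number below is illustrative, not taken from any real power system or from the project.

```python
import numpy as np

# Sampled "days" of weather-driven demand and wind output; shortfall occurs
# when demand exceeds wind generation plus firm (dispatchable) capacity.
rng = np.random.default_rng(7)
n = 100_000
demand = rng.normal(35.0, 5.0, n)          # GW, temperature-driven spread
wind_cf = rng.beta(2.0, 4.0, n)            # wind capacity factor in [0, 1]
wind = 20.0 * wind_cf                      # 20 GW installed wind capacity
firm = 40.0                                # GW dispatchable capacity

shortfall_prob = np.mean(demand > wind + firm)
print(shortfall_prob)   # the loss-of-load probability for this toy system
```

In a real planning study the samples would come from climate model output or reanalysis rather than fitted distributions, which is precisely where the climate-based uncertainty discussed above enters.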
Prof Axel Gandy
Advanced numerical techniques to assess erosion/flood risk in the coastal zone
An estimated 250 million people live in regions that are less than 5 metres above sea level. Hence with sea level rise and an increase in both the frequency and severity of storms as a result of climate change, the coastal zone is becoming an ever more critical location for the application of advanced mathematical techniques. Models are currently used to assist in the design of coastal zone engineering projects including flood defences and marine renewable energy arrays. There are many challenges surrounding the development and application of appropriate coupled numerical models because they include both hydrodynamic and sedimentary processes and need to resolve spatial scales ranging from sub-metre to 100s of kilometres.
My project aims to develop and use advanced numerical modelling and statistical tools to improve the understanding of hazards and the quantification and minimisation of erosion and flood risk. Throughout this project, I will consider the hazards in the context of idealised as well as real-world scenarios. The main model I will use in my project is XBeach, which uses simple numerical techniques to compute dune erosion, scour around buildings, and overwash. XBeach is also currently used, to a limited degree, with Monte Carlo techniques to generate large numbers of storm events with different wave climate parameters. Uncertain atmospheric forcing is very important for erosion/scour processes and flood risk, which are intimately linked in many situations and cannot be considered in isolation. In my project I will explore how the newer technique of multilevel Monte Carlo (MLMC) simulation can be combined with XBeach to quantify erosion/flood risk. I am interested not only in the effects of extreme events, but also in the cumulative effect of minor storm events, for which Monte Carlo techniques are particularly appropriate. I will also explore how an adaptive mesh approach can be coupled with the statistical approach to assess the risk to coastal areas.
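The multilevel Monte Carlo idea can be sketched independently of XBeach: pair coarse and fine solves of the same random problem so that many cheap coarse samples carry most of the variance and only a few expensive fine samples are needed. Below, a cheap stochastic differential equation solve stands in for a model run at a given grid resolution; all parameters are illustrative.

```python
import numpy as np

# Minimal MLMC sketch: estimate E[X_T] for dX = 0.05 X dt + 0.2 X dW using
# Euler-Maruyama at a hierarchy of step sizes. The fine and coarse paths at
# each level share the SAME Brownian increments -- the coupling that makes
# the level corrections small.

def level_sample(l, rng, T=1.0):
    nf = 2 ** (l + 2)                     # fine steps at level l
    dt = T / nf
    dW = rng.standard_normal(nf) * np.sqrt(dt)
    xf = xc = 1.0
    for i in range(nf):
        xf += 0.05 * xf * dt + 0.2 * xf * dW[i]            # fine path
    if l > 0:
        for i in range(0, nf, 2):                          # coarse path, 2x step
            xc += 0.05 * xc * 2 * dt + 0.2 * xc * (dW[i] + dW[i + 1])
        return xf - xc                                     # level correction
    return xf                                              # coarsest level

rng = np.random.default_rng(3)
samples_per_level = [4000, 1000, 250]     # fewer samples on expensive levels
estimate = sum(np.mean([level_sample(l, rng) for _ in range(m)])
               for l, m in enumerate(samples_per_level))
print(estimate)   # exact mean for this geometric SDE: exp(0.05) ~ 1.051
```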
Prof Matthew David Piggott
Rate-induced tipping in non-autonomous random dynamical systems
The concept of a tipping point (or critical transition) describes a phenomenon where the behaviour of a physical system changes drastically, and often irreversibly, in response to a small change in its external environment. Relevant examples in climate science are the possible collapse of the Atlantic Meridional Overturning Circulation (AMOC) due to increasing freshwater input, or the sudden release of carbon in peatlands due to an external temperature increase. The aim of this project is to develop the mathematical framework for tipping points and thereby contribute to a deeper understanding of them.
A number of generic mechanisms have been identified which can cause a system to tip. One such mechanism is rate-induced tipping, where the transition is caused by a parameter changing too quickly, rather than by the parameter moving past some critical value. Traditional mathematical bifurcation theory fails to address this phenomenon. The goal of this project is to use and develop the theory of non-autonomous and random dynamical systems to understand rate-induced tipping in the presence of noise. A question of particular practical importance is whether it is possible to develop meaningful early-warning indicators for rate-induced tipping using observation data. We will investigate this question from a theoretical viewpoint and apply it to more realistic models.
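Rate-induced tipping can be seen in a standard one-dimensional prototype (in the spirit of Ashwin et al.): the stable state drifts as a parameter is ramped, and the solution tracks it for slow ramps but loses track and escapes for fast ones, even though no bifurcation ever occurs. The ramp shape, rates, and escape threshold below are illustrative choices.

```python
import numpy as np

# Prototype:  dx/dt = (x + lam(t))^2 - 1,  lam ramped linearly from 0 to 5.
# The stable equilibrium x = -lam - 1 moves as lam changes; for this linear
# ramp the tracking/tipping boundary sits near ramp rate r = 1.

def ramp_run(r, dt=1e-3, t_end=40.0):
    x, t = -1.0, 0.0               # start on the stable state for lam = 0
    while t < t_end:
        lam = min(5.0, r * t)      # linear ramp, saturating at 5
        x += dt * ((x + lam) ** 2 - 1.0)
        t += dt
        if x > 10.0:               # escaped: rate-induced tipping has occurred
            return True
    return False

tipped_slow = ramp_run(r=0.5)      # slow ramp: tracks the moving equilibrium
tipped_fast = ramp_run(r=3.0)      # fast ramp: loses track and tips
print(tipped_slow, tipped_fast)
```

The project's noisy setting would add a stochastic term to the same equation, turning the critical rate into a tipping probability.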
Dr Martin Rasmussen
Analysis of Stochastic Slow-Fast systems
The Gulf Stream can be thought of as a giant meandering ribbon-like river in the ocean which originates in the Caribbean basin and carries warm water across the Atlantic to the west coast of Europe, keeping the European climate relatively mild. In spite of its significance to weather and climate, the Gulf Stream has remained poorly understood by oceanographers and fluid dynamicists for the past seventy years. This is largely due to the fact that the large-scale flow is significantly affected by multi-scale fluctuations known as mesoscale eddies. It is hypothesised that the mesoscale eddies produce a backscatter effect which is largely responsible for maintaining the eastward jet extensions of the Gulf Stream and other western boundary currents.
The difficulty in modelling such currents lies in the high computational cost associated with running oceanic simulations with sufficient resolution to include the eddy effects. Therefore approaches to this problem have been proposed which involve introducing some form of parameterisation into the numerical model, such that the small scale eddy effects are taken into account in coarse grid simulations.
There are three main approaches we may consider for including this parameterisation: the first is stochastic advection, the second is deterministic roughening, and the third is data-driven emulation. These approaches have all been explored for relatively simple quasi-geostrophic ocean models, but we shall attempt to apply them to more comprehensive primitive-equation models, which have greater practical applications in oceanography. In particular, we shall apply our parameterisations to the MITgcm and FESOM2 models, run them on a low-resolution grid, and compare the results with high-resolution simulations.
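The stochastic-advection idea can be illustrated in one dimension: a tracer is transported by a resolved mean velocity plus a random velocity perturbation standing in for the unresolved eddies, so that the noise enters the transport itself rather than being added to the tendency. The Ornstein-Uhlenbeck noise model, amplitudes, and grid below are illustrative, far simpler than the schemes used in practice.

```python
import numpy as np

# Coarse 1-D advection of a tracer blob with velocity u = u_mean + eta(t),
# where eta is an Ornstein-Uhlenbeck process emulating eddy velocities.

rng = np.random.default_rng(5)
n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)   # periodic grid
dx = 1.0 / n
q = np.exp(-((x - 0.3) / 0.05) ** 2)           # tracer blob
u_mean = 1.0
dt = 0.25 * dx                                 # CFL-safe for |u| up to ~2
eta, tau, sigma = 0.0, 0.1, 0.5                # d(eta) = -eta/tau dt + sigma dW

for _ in range(500):
    eta += -eta / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    u = u_mean + eta
    # first-order upwind in the direction of the total (stochastic) velocity
    if u >= 0:
        q = q - u * dt / dx * (q - np.roll(q, 1))
    else:
        q = q - u * dt / dx * (np.roll(q, -1) - q)

print(q.sum() * dx)   # tracer mass: conserved exactly by the periodic scheme
```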
Professor Xue-Mei Li
Bayesian inference with application to air quality monitoring
This project aims to develop new methodology for performing statistical inference in environmental modelling applications. These applications require the use of a large number of sensors that collect data frequently and are distributed over a large region in space. This motivates the use of a space-time-varying stochastic dynamical model, defined in continuous time via a (linear or non-linear) stochastic partial differential equation, to model quantities such as air quality, pollution level, and temperature. We are naturally interested in fitting this model to real data and, in addition, in improving the statistical inference through a carefully chosen frequency for collecting observations, an optimal sensor placement, and an automatic calibration of sensor biases. From a statistical perspective, these problems can be formulated using a Bayesian framework that combines posterior inference with optimal design.
Performing Bayesian inference or optimal design for the chosen statistical model may be intractable, in which case the use of simulation-based numerical methods will be necessary. We aim to consider computational methods that are principled but computationally intensive and, given the additional challenges relating to the high dimensionality of the data and the model, we must pay close attention to the statistical model at hand when designing algorithms to be used in practice. In particular, popular methods such as (recursive) maximum likelihood, Markov chain Monte Carlo, and sequential Monte Carlo will need to be carefully adapted and extended for this purpose.
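A minimal instance of the sequential Monte Carlo machinery mentioned above is the bootstrap particle filter for a scalar AR(1) state observed in Gaussian noise, a stand-in for the far higher-dimensional SPDE setting of the project. All parameters, and the choice of multinomial resampling at every step, are illustrative.

```python
import numpy as np

# State:  x_t = a x_{t-1} + q_t,  q_t ~ N(0, sig_q^2)
# Obs:    y_t = x_t + r_t,        r_t ~ N(0, sig_r^2)
rng = np.random.default_rng(11)
T, N = 100, 1000                     # time steps, particles
a, sig_q, sig_r = 0.9, 0.5, 1.0

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sig_q * rng.standard_normal()
y = x_true + sig_r * rng.standard_normal(T)

particles = rng.standard_normal(N)   # prior ensemble
est = np.zeros(T)
for t in range(T):
    if t > 0:                        # propagate through the model (prior proposal)
        particles = a * particles + sig_q * rng.standard_normal(N)
    logw = -0.5 * ((y[t] - particles) / sig_r) ** 2   # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)                    # filtering mean
    particles = rng.choice(particles, size=N, p=w)    # multinomial resampling

rmse_filter = np.sqrt(np.mean((est - x_true) ** 2))
rmse_obs = np.sqrt(np.mean((y - x_true) ** 2))
print(rmse_filter, rmse_obs)   # the filter should beat the raw observations
```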
Dr Nikolaos Kantas
Data-Driven Reduced Modelling of Oceanic Variability
The oceanic turbulent circulation exhibits multiscale motions on very different space and time scales interacting with each other, e.g. jets, vortices, waves, and large-scale variability. In particular, mesoscale oceanic eddies populate nearly all parts of the ocean and need to be resolved in order to represent their effects on the general ocean and atmosphere circulations. However, capturing the effects of these small-scale flows is highly challenging and requires non-trivial approaches and skills, especially when it comes to representing them in non-eddy-resolving ocean circulation models. The main goal of my project is therefore to develop data-driven eddy parameterisations for use in both eddy-permitting and non-eddy-resolving ocean models. Dynamical models of reduced complexity will be developed to emulate the spatio-temporal variability of mesoscale eddies as well as their feedbacks across a large range of scales. These can serve as a low-cost oceanic component for climate models; the final aim of this project is therefore to use existing observational data to inform eddy parameterisations in comprehensive ocean circulation and climate models, such as those used in global weather forecasts or in the Coupled Model Intercomparison Project (CMIP), e.g. CMIP7.
We will employ a variety of common and novel techniques from statistical data analysis and numerical linear algebra to extract the key properties and characteristics of the space-time-correlated eddy field. The key steps in this framework are: (a) find the relevant data-adaptive basis functions, i.e. decompose the time-evolving datasets into their leading spatio-temporal modes using, for example, variance-based methods such as Principal Component Analysis (PCA); and (b) once the subspace spanned by these basis functions is obtained, derive evolution equations that emulate the spatio-temporal correlations of the system using methods such as nonlinear autoregression, artificial neural networks, Linear Inverse Modelling (LIM), etc.
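Steps (a) and (b) can be sketched in miniature: extract leading PCA modes from a synthetic space-time field, then fit a one-step linear propagator (a LIM-like model, the simplest of the emulator families listed) to the principal components. The data, truncation, and variable names are illustrative.

```python
import numpy as np

# Synthetic space-time field: two travelling "eddy" modes plus white noise.
rng = np.random.default_rng(2)
nt, nx = 2000, 64
t = np.arange(nt)[:, None]
x = np.linspace(0, 2 * np.pi, nx)[None, :]
field = (np.sin(x - 0.05 * t) + 0.5 * np.sin(2 * x + 0.11 * t)
         + 0.3 * rng.standard_normal((nt, nx)))

# Step (a): PCA via the SVD of the anomaly field; keep the leading k modes.
field = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(field, full_matrices=False)
k = 4                                            # two travelling waves = 4 modes
pcs = U[:, :k] * s[:k]                           # principal components (nt, k)

# Step (b): least-squares one-step linear propagator A, pcs[t+1] ~ pcs[t] @ A.
A, *_ = np.linalg.lstsq(pcs[:-1], pcs[1:], rcond=None)

pred = pcs[:-1] @ A
skill = 1 - np.sum((pred - pcs[1:]) ** 2) / np.sum(pcs[1:] ** 2)
var_frac = np.sum(s[:k] ** 2) / np.sum(s ** 2)
print(var_frac, skill)   # retained variance fraction; one-step forecast skill
```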
The proposed new science will help develop a state-of-the-art data-adaptive modelling framework for evaluation and application of Machine Learning and rigorous mathematical theory for dynamical and empirical reduction within the hierarchy of existing oceanic models.
Dr Pavel Berloff
Adaptive Finite Element Methods for Landslide Prediction
The goal of this project is to develop and analyse innovative, cutting edge numerical tools for landslide prediction. These will be tested and eventually used by practitioners who require fast numerical computations to reliably predict these events. We will construct efficient numerical models that are able to simulate these phenomena and ultimately quantify uncertainty associated to them.
Geographically, the applications we will examine pertaining to landslides are specifically located in Brazil. The reason for this is that we have connections to a research centre, CEMADEN, that can provide us with up-to-date data, the opportunity to conduct fieldwork, and the feedback needed to create a numerical model that can run accurate simulations within the centre’s specified operational time. In particular, the centre provides unique datasets for research on these extreme events, which could become more and more routine in other parts of the world. It is expected that the numerical model developed through this PhD will considerably outperform the software the centre currently uses and, in particular, will be critical in predicting and mitigating extreme events.
What has been done: during the MRes project we constructed an adaptive numerical model based on rigorous a posteriori techniques that is able to account for measured conductivity and moisture content in the geological layer, by simplifying the model to a Darcy flow. This allowed us to explore the numerical toolkit and test some sample datasets. We also have a field trip planned for 10th-20th August to the CEMADEN centre and some of the surrounding monitoring sites.
PhD work plan:
Year 1-2: The first goal is the extension to a potentially more realistic nonlinear, degenerate parabolic PDE. In this case the diffusion coefficients/nonlinear capillarity effects are expected to be extremely heterogeneous (Figure 3), and we expect that mesh adaptivity, allowing for accurate resolution of the problem data, will be crucial to the success of algorithms for this problem. One of the main mathematical challenges in this project is the development of efficient h-p adaptive approximation schemes [2, 3, 4] based on a posteriori error analysis. The nonlinear problem is particularly challenging to simulate and requires care in the derivation of stable and consistent numerical methods.
Year 2: We will investigate new space-time discretisations of the problem to allow for more efficient local space-time adaptivity.
Year 3: We will investigate sparse data assimilation procedures to make use of the information obtained from the pluviometers to ascertain reasonable initial conditions. The inclusion of model error in the procedure will allow for a specific tolerance to be prescribed into the numerical scheme ensuring that only the amount of work necessary is conducted. The final goal is the development and integration of the model to allow for easy use by the CEMADEN team.
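The a posteriori-driven adaptivity at the heart of this plan can be sketched in one dimension: solve -u'' = f with linear finite elements, compute a residual-type indicator on each element, and refine where it is largest, so that the mesh concentrates where the data are rough. This is an illustrative toy (interior-residual indicator only, maximum-type marking, a hypothetical peaked f), far simpler than the h-p schemes and degenerate problems targeted above.

```python
import numpy as np

# 1-D linear FEM for -u'' = f, u(0) = u(1) = 0, with adaptive refinement
# driven by the element indicator eta_K ~ h_K * ||f||_{L2(K)} (for P1
# elements the interior residual is just f; edge jumps are omitted here).

def solve_p1(nodes, f):
    n = len(nodes)
    A = np.zeros((n, n)); b = np.zeros(n)
    for k in range(n - 1):
        h = nodes[k + 1] - nodes[k]
        A[k:k + 2, k:k + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        b[k:k + 2] += 0.5 * h * f(0.5 * (nodes[k] + nodes[k + 1]))  # midpoint rule
    A[0, :] = A[-1, :] = 0.0; A[0, 0] = A[-1, -1] = 1.0             # Dirichlet BCs
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

f = lambda x: 1.0 / (1e-3 + (x - 0.5) ** 2)      # sharply peaked source at x = 0.5
nodes = np.linspace(0.0, 1.0, 11)
for _ in range(8):                               # adaptive loop: estimate, mark, refine
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    eta = h ** 1.5 * np.abs(f(mids))             # h_K * |f(mid)| * sqrt(h_K)
    refine = eta > 0.5 * eta.max()               # maximum-type marking
    nodes = np.sort(np.concatenate([nodes, mids[refine]]))

u = solve_p1(nodes, f)
print(len(nodes), np.diff(nodes).min(), np.diff(nodes).max())
# the mesh ends up far finer near the peak at x = 0.5 than near the boundaries
```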
Dr Alex Lukyanov