### Lily Greig

**Based at:** University of Reading

Supervisors: David Ferreira (Reading)
Leads are fractures in sea ice. Despite making up only 5-10% of the sea ice cover, they contribute significantly to the polar heat balance, because gradients in sea ice concentration can produce lateral gradients in surface forcing and density gradients in the mixed layer. Through baroclinic instability, these fronts can energise submesoscale eddies. Submesoscale eddies have relatively fast time scales (hours to days) and live in a parameter regime with finite Rossby and Richardson numbers. Once energised, they drive large horizontal exchange between ice-free and ice-covered ocean, and previous work has shown that such dynamics can have an order-one impact on sea ice melt. Grid scales in the current generation of climate models are larger than the scales of submesoscale eddies and sea ice leads, so these models ignore the effects of such sub-grid-scale processes on the net polar heat balance. This project aims to explore these effects. It will start by building a mathematical model to develop understanding of the time and space scales of the density fronts formed under leads. Next, it will explore under which conditions the density fronts may become unstable and spawn a submesoscale eddy field. Finally, the project will assess how submesoscale dynamics modulate air-sea exchanges and whether these processes should be included in climate models.
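A rough sense of the parameter regime mentioned above can be obtained from characteristic scale estimates. The values below (flow speed, front width, stratification, shear) are purely illustrative choices, not numbers from the project:

```python
# illustrative scale estimates for the submesoscale regime described above
f = 1.4e-4                 # s^-1, Coriolis parameter at high latitude
U, L = 0.1, 1.0e3          # m/s and m: flow speed and width of a front under a lead
N2, S2 = 1.0e-7, 1.0e-7    # s^-2: stratification N^2 and squared vertical shear

Ro = U / (f * L)           # Rossby number: inertia vs rotation
Ri = N2 / S2               # Richardson number: stratification vs shear
print(Ro, Ri)              # both O(1): the submesoscale regime
```

When both numbers are order one, neither rotation nor stratification dominates, which is precisely the regime in which submesoscale instabilities can grow.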

Supervisors: Valerio Lucarini (Reading), Jeroen Wouters (Reading), Ted Shepherd (Reading)
The investigation of the response of GFD systems to perturbations is a key area of research within the mathematics of climate. Problems as different as the prediction of climate change, the understanding of the coupling between different climatic subsystems, the generation of low-frequency atmospheric variability from high-frequency weather noise, and the construction of parametrizations can all be addressed from this point of view. Multiscale systems pose specific mathematical challenges for the construction of response operators, owing to the slow decay of correlations of some variables, such as the ocean in the case of climate.
Using techniques from ergodic theory and chaos, we will study the response of a multiscale system of geophysical relevance to perturbations in the forcing. We intend to investigate the problem from the point of view of both deterministic and stochastic dynamics; for example, we aim to use Ruelle response theory to gain insight into the deterministic case. Additionally, we will look into the general properties of multiscale systems. These are topics of great relevance in dynamical systems theory, GFD, and statistical mechanics.
We aim to use the modular MAOOAM quasi-geostrophic climate model, which allows for great flexibility in configuration and describes both atmospheric and oceanic dynamics in a simplified yet meaningful way. Understanding the geometry of the tangent space is key to the Ruelle formalism, and we hope to probe it through the use of covariant Lyapunov vectors. This will allow us to project the response along the stable and unstable directions in the tangent space and so elucidate the applicability of the fluctuation-dissipation theorem. An example of a climatic problem we aim to investigate within this quasi-geostrophic framework is the coupling between high-frequency, synoptic-scale weather noise and low-frequency, planetary-scale circulation variability, which is underpinned by a multiscale asymptotic theory.
This project has strong links with the ongoing debate on state-dependent response in the context of climate research. A further area for exploration is the construction of unstable periodic orbits for the system and analysis of how they can be used to reconstruct the invariant measure of the system and its response. A third line of investigation deals with the response of the correlations of the fields, rather than of observables, and with the behaviour of the response on the approach to tipping points.
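As a toy illustration of the fluctuation-dissipation idea underlying this project, one can check numerically, for an Ornstein-Uhlenbeck process, that the integrated autocorrelation of the unperturbed dynamics predicts the mean response to a small constant forcing. All parameters below are arbitrary choices for the sketch, not tied to MAOOAM:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma, dt, n = 1.0, 0.5, 0.01, 400_000

def ou(forcing=0.0):
    # Euler-Maruyama for dx = (-gamma*x + forcing) dt + sigma dW
    x, xs = 0.0, np.empty(n)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for i in range(n):
        x += (-gamma * x + forcing) * dt + noise[i]
        xs[i] = x
    return xs[n // 10:]                      # discard the transient

x0 = ou()                                    # unperturbed run
x0c = x0 - x0.mean()
lags = np.arange(int(5 / dt))                # integrate out to ~5 decay times
acf = np.array([np.mean(x0c[:len(x0c) - l] * x0c[l:]) for l in lags])
chi = np.sum(acf / acf[0]) * dt              # FDT prediction of the susceptibility

f = 0.1                                      # small constant forcing
measured = ou(forcing=f).mean() - x0.mean()  # actual shift of the mean
print(chi, measured / f)                     # both close to 1/gamma = 1.0
```

For this linear process the fluctuation-dissipation relation is exact; the interest of the project lies precisely in where and how such relations degrade for nonlinear, multiscale dynamics.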

Supervisors: Valerio Lucarini (Reading), Andrey Gritsun (Russian Academy of Sciences)
Unstable periodic orbits (UPOs) have proved to be a relevant mathematical tool in climate science. In a recent paper, Lucarini and Gritsun provided an alternative approach for understanding the properties of the atmosphere. The climate can be interpreted as a non-equilibrium steady-state system and, as such, statistical mechanics can provide us with tools for its study.
UPO decomposition plays an important role in the study of chaotic dynamical systems, but there is an intrinsic difficulty in sampling UPOs. Newton-like approaches have been proposed in the literature; the issue with these methods is that they are computationally expensive and do not guarantee convergence. During the PhD we would like to develop a new methodology for sampling UPOs.
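A minimal sketch of the Newton-like approach mentioned above, applied to the chaotic logistic map rather than a climate model: a period-2 orbit is found by Newton iteration on g(x) = f^p(x) - x, with the derivative of the composed map accumulated by the chain rule:

```python
def f(x):  return 4.0 * x * (1.0 - x)       # logistic map, chaotic at r = 4
def df(x): return 4.0 - 8.0 * x             # its derivative

def newton_upo(x, period, iters=50):
    # Newton iteration on g(x) = f^period(x) - x to locate a periodic point
    for _ in range(iters):
        y, dy = x, 1.0
        for _ in range(period):             # evaluate f^period and (chain rule)
            dy *= df(y)                     # the derivative of the composition
            y = f(y)
        x -= (y - x) / (dy - 1.0)           # Newton update for g(x) = 0
    return x

p = newton_upo(0.3, 2)
print(p, f(f(p)))      # a period-2 point; f(f(p)) maps back to p
```

The known period-2 cycle of the logistic map at r = 4 is {(5 - sqrt(5))/8, (5 + sqrt(5))/8}, so the result can be checked exactly. The convergence issues cited above show up here too: a poor initial guess can send Newton to a fixed point or fail to converge, which is what motivates the alternative sampling strategy of the project.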
The idea is to populate particular areas of interest of the attractor with trajectories, while keeping computational costs to a minimum. Ragone et al. developed a large-deviation algorithm to sample rare events in a climate model. An ensemble simulation is performed, where the trajectories start from independent initial conditions that sample the invariant measure of the model. After a fixed time period the simulation is stopped and a score function is assigned to each trajectory, depending on its dynamics up to that point. The trajectories that are heading towards the region of interest, according to the score function, are copied, and the ones that perform badly are killed. At this stage, the surviving trajectories are slightly perturbed, so that they can evolve differently, and the resampling is iterated. We believe a similar approach could be proposed for sampling UPOs in a climate model. Positive results of UPO search algorithms can be found in the literature; in particular, Gritsun and Lucarini first implemented this approach in a geophysical setting.
Once the UPOs are computed, we would like to use them to reconstruct the invariant measure of the system and study the response of the system to perturbations. A well-known result shows that it is possible to evaluate the expectation value of a reference function as an average over the various UPOs, where each orbit is weighted according to its instability. The least unstable orbit has the largest contribution, since it affects the dynamics of the system most heavily. Pollicott and Vytnova provided an alternative decomposition of the average of a reference function in a power series, whose coefficients can be computed in terms of periodic points. This decomposition provides a computationally more efficient means of approximation. We would like to investigate this approach further on the chosen climate model.
In the first phase of the project the student will familiarise themselves with the mathematical background, working with a simpler model such as Lorenz '96. The main aim will be to understand and apply known methodologies for finding UPOs. Those techniques will then be extended to the more complex MAOOAM model, which has been found to exhibit chaotic behaviour for some values of the parameters. The challenge will be implementing a multiscale approach on this model, since the ocean dynamics and atmosphere dynamics run on completely different time scales. In the last part of the project, the computed UPOs will be used to reconstruct the invariant measure of the system and study its response to external forcing. The innovation in the project lies in connecting the rare-event-algorithm literature with UPOs; as far as we know, this has never been done.
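The Lorenz '96 system mentioned as a first testbed is straightforward to integrate; a minimal sketch with the standard forcing F = 8, at which the model is chaotic:

```python
import numpy as np

def l96(x, F=8.0):
    # Lorenz '96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(x, dt, F=8.0):
    # classical fourth-order Runge-Kutta step
    k1 = l96(x, F); k2 = l96(x + dt / 2 * k1, F)
    k3 = l96(x + dt / 2 * k2, F); k4 = l96(x + dt * k3, F)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)       # the unstable equilibrium x_i = F ...
x[0] += 0.01                # ... slightly perturbed
for _ in range(2000):       # integrate 20 model time units
    x = rk4(x, 0.01)
print(x.std())              # the trajectory has left the equilibrium: chaos
```

The tiny initial perturbation is amplified until the state saturates onto the chaotic attractor, which is exactly the sensitivity that makes UPO searches in such models delicate.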

Supervisors: Henrik Jensen (Imperial), G. Pavliotis (Imperial)
The aim of the project is to develop forecasting techniques in order to predict the onset of ecological crises. It is well known that these critical transitions (also known as tipping points) happen in many different areas, from ecological and natural systems to social and political ones. As a parameter is changed, for example the emission of CO2 or the global average temperature, the system becomes more and more unstable, meaning that a small fluctuation can lead it to a different, and usually more disastrous, state.
The project will proceed on two parallel paths. On one hand, since we still lack a coherent mathematical framework describing these phenomena, we are going to borrow ideas from well-developed theories, such as statistical mechanics, the theory of phase transitions and the theory of fast/slow systems, to construct a solid toolbox for interpreting critical transitions.
On the other hand, we are going to investigate methodologies for detecting the onset of a crisis in time-series data. Some work has already been done in this direction; however, robust data-driven indicators of these critical transitions are still missing. We will apply the ideas developed in the more theoretical part of the project first to simulated data from models and then to real-world data sets.
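A standard pair of data-driven indicators, rising variance and rising lag-1 autocorrelation (critical slowing down), can be illustrated on a toy process whose restoring rate is slowly weakened toward a transition. This is only one possible choice of indicators, not necessarily the ones the project will settle on:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, sigma, n = 0.01, 0.2, 200_000
k = np.linspace(2.0, 0.2, n)     # restoring rate weakens toward the transition

x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    # critical slowing down: dx = -k(t) x dt + sigma dW
    x[i] = x[i-1] - k[i] * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def indicators(seg):
    # window-based early-warning indicators: variance and lag-1 autocorrelation
    return seg.var(), np.corrcoef(seg[:-1], seg[1:])[0, 1]

early = indicators(x[:n // 4])
late = indicators(x[-n // 4:])
print(early, late)               # both indicators rise as the transition nears
```

The weakening restoring force inflates the stationary variance (roughly sigma^2 / 2k) and lengthens the memory of the process, so both statistics rise before the bifurcation is actually reached.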

Supervisors: Dan Crisan (Imperial), Darryl Holm (Imperial), Matthew Piggott (Imperial)
The issue of ocean plastics has recently been much discussed by academics, policy makers, and environmental campaigners. The mathematical models which are used to describe the advection of plastics have largely ignored key factors such as sub-grid-scale dynamics and the mass of the debris. This raises the interesting question of how inertial particles move in a fluid governed by an SPDE. Using recent developments in stochastic fluid equations [Holm 2015] as a springboard, we will explore how the introduction of transport noise affects the properties (such as well-posedness and smoothness) of a fluid model. In particular, can this type of noise restore uniqueness to a model? Furthermore, we will input the velocity field of the fluid into an equation which returns the velocity of the debris [Maxey & Riley, 1983], exploring the validity of doing this and whether it accurately models reality. Such a model would have applications in predicting the motion of ocean debris (such as icebergs, plastics, or aircraft wreckage) and, considering the model as an inverse problem, in calibrating ocean models from drifter buoy data by understanding how the movement of the buoys differs from that of the fluid.
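In its simplest limit (Stokes drag only, neglecting the added-mass, Basset-history and Faxén terms of the full Maxey-Riley equation), the debris velocity relaxes to the fluid velocity over a response time tau. A sketch in an assumed steady cellular flow, standing in for an ocean velocity field:

```python
import numpy as np

def u(x, y):
    # assumed steady cellular (Taylor-Green) flow standing in for an ocean field
    return np.array([-np.sin(x) * np.cos(y), np.cos(x) * np.sin(y)])

def step(pos, vel, tau, dt):
    # Stokes-drag limit of Maxey-Riley: particle velocity relaxes to the fluid's
    acc = (u(*pos) - vel) / tau
    return pos + vel * dt, vel + acc * dt

dt, tau = 0.01, 0.5                 # tau: particle response time
pos = np.array([0.3, 0.1])
vel = u(*pos)                       # start at the local fluid velocity
tracer = pos.copy()                 # massless tracer for comparison
for _ in range(2000):
    pos, vel = step(pos, vel, tau, dt)
    tracer = tracer + u(*tracer) * dt
print(pos, tracer)                  # the inertial particle drifts off the tracer path
```

Even starting at the local fluid velocity, the inertial particle separates from the passive tracer, which is the discrepancy the inverse-problem part of the project would exploit when calibrating against drifter buoys.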

Supervisors: Ted Shepherd (Reading), Jana de Wiljes (Potsdam), Antje Weisheimer (ECMWF)
Predictions of future climate change are usually represented as best estimates with an uncertainty range. However, at the regional scale, changes in atmospheric circulation play a large role and several outcomes may be possible. Under such conditions, an aggregate approach does not provide informative statements about risk. Storylines provide a way to represent the uncertainty in climate change itself, but need to be embedded within a probabilistic framework to account for the uncertainty in the particular realization of climate that will occur.
In this PhD project we use Bayesian causal networks to combine the storyline approach with probability. We focus on atmospheric circulation regimes in the Euro-Atlantic sector, since these have a large influence on the weather over Europe, and study their link with regional changes in extreme events. To inform the derivation of the causal network, expert knowledge will be used, which can be (partially) based on dynamical relationships derived from complex simulation models. The network will incorporate memory effects present in these dynamical relationships, which can give rise to persistent circulation anomalies. This will lead to a stronger physical foundation of the derived causal networks.

Supervisor: Pavel Berloff (Imperial)
Geostrophic motions flowing over rough topography in the energetic Antarctic circumpolar current are known to be an important source of mixing, through both locally breaking lee waves and non-propagating vortical motions. This project will investigate the impact of these mechanisms on turbulent breakdown near rough bottom topography in order to better understand mixing characteristics in the Southern Ocean.

Supervisors: Simon Chandler-Wilde (Reading), Chris Westbrook (Reading), Stephen Langdon (Brunel)
Linear wave scattering problems are ubiquitous in science and engineering applications. In the atmosphere, visible and ultraviolet radiation from the sun is scattered by ice crystals in cirrus clouds in the cold upper troposphere. These same clouds also scatter and absorb infrared radiation emitted from the earth's surface and lower troposphere. Together, these effects exert an important influence on the earth's radiation balance, and must be represented correctly in numerical climate models.
There are a number of unresolved problems with computing light scattering from an ice crystal in this regime. Firstly, the size of the ice particle is typically large compared to the wavelength of light illuminating it. This means that conventional numerical methods for such problems are prohibitively expensive. As a result the state-of-the-art in ice cloud radiative transfer is the use of ray-tracing (geometric optics). However, this approach cannot capture the effects of diffraction at the corners and edges of the ice crystal. This is typically overcome via a crude correction after computing the ray-tracing solution, but it is not clear whether this is accurate.
A second unresolved problem lies in the nature of the crystal surfaces themselves. Geometric optics is valid for surfaces which are flat and smooth. Real ice particles often have imperfections (roughness), such as steps or pits, or may in some cases be rounded lumps. This roughness is in fact one of the leading-order controls on the far-field scattering pattern, and representing it properly is therefore a high priority. We would like to understand the influence of this roughness at a fundamental level, and thereby better constrain the way it is represented in radiative transfer.
The main tool for investigating these questions is the development of numerical methods for simulating light scattering that are able to capture the effects of diffraction whilst remaining computationally tractable across the frequency spectrum. One promising approach is the development of the class of methods known as "hybrid numerical-asymptotic boundary element methods" (HNABEM), which have been proven to be exceptionally efficient at solving a range of scattering problems, with various boundary conditions and geometries. To date though, the range of geometries for which these methods have been shown to be applicable is insufficient to address realistic questions in atmospheric science. This project aims to develop algorithms closely related to HNABEM yet containing key new ideas that will allow them to be applied to more general scattering problems, and to use these algorithms to investigate questions such as those posed above.
Firstly, we will continue previous work on problems of scattering by multiple screens, with a goal of considering scattering by rough needle-like ice crystals in two dimensions, and comparing qualitative results with similar results from a standard three-dimensional solver, so as to understand the potential usefulness of studying related two-dimensional problems. Secondly, we will compare results obtained by state-of-the-art approaches for high-frequency three-dimensional electromagnetic transmission problems in atmospheric physics (e.g., Physical Geometric-Optics Hybrid (PGOH) methods) with those obtained by standard numerical solvers, such as BEM++, so as to understand the significance of the diffraction missed by PGOH, and to develop ideas for approximating this difference via HNABEM.
The EPSRC research area that is of most relevance to the project is Numerical Analysis, with the project focused on the development, analysis and implementation of numerical methods for the solution of problems arising in atmospheric physics.

Supervisors: Hilary Weller (Reading Meteorology), Colin Cotter (Mathematics, Imperial), Christian Kühnlein (ECMWF)
Transport, or advection, is arguably the most important part of an atmospheric prediction model. Everything in the atmosphere is transported by the wind - temperature, pollutants, moisture, clouds and even the wind itself (non-linear advection). Operational weather and climate centres, such as the Met Office and ECMWF, are developing new atmospheric dynamical cores to run on modern computer architectures, and they need accurate, efficient and conservative advection schemes that are stable for the long time steps suitable for their new models. Their current transport schemes are accurate and stable for long time steps but are not conservative. This project will develop implicit methods to achieve long stable time steps on arbitrary grids of the sphere for linear and non-linear problems. We will start by creating a model for Rayleigh-Bénard convection, and we will develop a Newton solver to achieve long, stable time steps.
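To illustrate why implicit methods are attractive here: a backward-Euler first-order upwind scheme on a periodic grid is stable and conservative even at Courant numbers where an explicit scheme would blow up. This is a deliberately simple sketch, far from the schemes the project would actually build:

```python
import numpy as np

n, c = 100, 5.0        # 100 periodic cells, Courant number 5: explicit upwind
                       # would be wildly unstable at this time step
# backward-Euler first-order upwind: (1 + c) q_i^{n+1} - c q_{i-1}^{n+1} = q_i^n
A = (1.0 + c) * np.eye(n) - c * np.roll(np.eye(n), -1, axis=1)

cells = np.arange(n)
q = np.exp(-((cells - 20.0) ** 2) / 20.0)    # Gaussian initial profile
q0 = q.copy()
for _ in range(50):
    q = np.linalg.solve(A, q)                # one linear solve per implicit step
print(q.sum() - q0.sum(), q.max())           # mass conserved; no over/undershoots
```

On the periodic grid the row sums guarantee exact conservation of the total, and the M-matrix structure keeps the solution bounded by its initial extrema; the price, as with all first-order implicit upwinding, is heavy numerical diffusion, which is where the accuracy work of the project comes in.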

Supervisor: Alex Lukyanov (Reading)
Motivation
Deposition and dissolution geomorphic patterns are fundamental in the environment, and understanding the mechanisms underpinning their formation and evolution is central to paleo-reconstruction techniques used to probe past climatological systems. Such problems pose numerous mathematical challenges including motion of free boundaries (wall-shape as well as liquid-air interfaces), fluid-structure interactions and multiscale features in both space and time. The latter can pose particular challenges for lab experiments (for instance features can form over time scales of hundreds of years) and we propose a complete theoretical approach to quantify fundamental physical systems in order to compare with field observations.
Description, Mathematical Problems and Project Work Plan
We will begin with deposition problems and morphological instabilities, termed "crenulations", over calcite surfaces such as stalagmites, and also consider related dissolution problems. The formation mechanism is driven by a liquid film flowing down the structure and carrying slightly supersaturated calcium carbonate that deposits onto the wall. The change in wall-shape affects the flow and the flow affects the shape, albeit at different time scales. The Stokes/Navier-Stokes equations need to be addressed along with the underlying chemistry and surface growth models. We will use mathematical modelling, asymptotic analysis of partial differential equations, and numerical computations to advance the field in several crucial directions not considered previously.
In what follows we provide an overview of some of the mathematics along with an itemised work plan of the proposed thesis:
Realistic geometries: The state of the art consists of modelling the stalagmite as a flat plate with an underlying liquid film, which naturally leads to Rayleigh-Taylor instabilities and absolute-convective transitions. The correct conical geometry is quite different from the flat-plate picture.
New mathematical approaches: We will use full 3D axisymmetric geometries and analyse flows over slowly varying conical geometries using a combination of asymptotics and numerical simulations using finite-element and/or finite-volume methods. 3D axisymmetric geometries support waves that are not found in 2D and we propose to fully quantify the impact of such axial non-uniformities on the crenulation patterns. Our theoretical objectives are to analyse the flows in the appropriate geometries encountered in the field.
Novel instability mechanisms: The non-uniform 3D axisymmetric flows (that will be calculated during the project) over tapering conical geometries that are undergoing crenulation modulations are expected to support instabilities that have not been addressed previously. The spatiotemporal fluid structure interaction is expected to support new instabilities that will be analysed and compared with geomorphological features. Both linear and nonlinear analyses will be carried out.
Dissolution flows: A related parallel study of dissolution flows is also proposed. Here the flow sculpts the boundary at a rate that depends on the local fluid shear at the wall, leading to a dynamic fluid-structure interaction that is important in determining the global flow behaviour, for example laminar to turbulent transitions. Several mathematical issues remain to be studied, such as corner formation in dissolution flows past solid objects. In the vicinity of such features we propose to use matched asymptotic expansions in the spirit of triple-deck theory in order to provide the correct regularisation of the singularity and evaluate its effect on the global dynamics.

Supervisors: Colin Cotter (Imperial), David Ham (Imperial)
Building an ocean model using compatible finite element numerics. This will begin with the following steps:
1. Extend the linear incompressible nonhydrostatic Boussinesq implicit solver to include a free surface (using the combined pressure-free surface approach).
2. Plug this solver into the Gusto modelling system and use existing Gusto advection solvers to construct a semi-implicit nonlinear equation solver loop using the linear Boussinesq solver as the implicit solver step.
At this stage we will have a full numerical scheme for the nonlinear nonhydrostatic incompressible Boussinesq equations and will be able to investigate performance and accuracy through standard test problems.
Going beyond that, we could then work on a number of different topics establishing this system as a usable ocean modelling research tool: salinity and equation of state, turbulence closures (e.g. Gent-McWilliams), wind-driven baroclinic gyres, representation of bathymetry, parallel scalability, stability over large timesteps, etc. The supervisory team includes a researcher who leads Gusto code development (Shipton) and an external collaborator who has been using Firedrake to develop a coastal ocean model.

Supervisors: Paulo Ceppi (Imperial), Pavel Berloff (Imperial)
A major challenge for climate science is to predict how atmospheric circulation will change as the planet warms. Current climate models generally predict a poleward shift of the midlatitude jet streams and storm tracks with warming. However, the climate science community faces two major difficulties in predicting future jet stream changes. First, the projected jet responses to greenhouse gas forcing are not robust in climate models; second, the dynamical mechanisms responsible for future jet shifts remain uncertain. Taken together, these two obstacles lead to particularly low confidence in climate model projections of jet stream changes.
The approach of my PhD project is to use a hierarchy of general circulation models to better understand the relationship between the initial state of the jet and its response to forcing. At the lowest order of this hierarchy we will use a barotropic model which represents the interaction of Rossby waves with the mean flow. The barotropic model findings will be used to interpret results from more realistic climate models. The model results will be interpreted physically using linear Rossby wave theory.
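The barotropic starting point can be illustrated with the classical Rossby wave dispersion relation c = U - beta / (k^2 + l^2); the numbers below are typical midlatitude values, chosen only for illustration:

```python
import numpy as np

# barotropic Rossby wave zonal phase speed: c = U - beta / (k^2 + l^2)
beta = 1.6e-11                        # m^-1 s^-1, planetary vorticity gradient
U = 10.0                              # m s^-1, background zonal wind (illustrative)
wavelengths = np.array([2.0e6, 4.0e6, 8.0e6])   # zonal wavelengths in metres
k = 2.0 * np.pi / wavelengths
c = U - beta / k**2                   # meridional wavenumber l = 0 for simplicity
print(c)                              # long waves propagate westward vs the flow
```

The scale dependence of the phase speed, with short waves advected eastward and long waves retrogressing, is at the heart of the wave-mean-flow interaction mechanisms the barotropic model will be used to probe.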

Supervisors: Ali Mashayek (Imperial), Graham Hughes (Imperial)
An emerging picture is that deep-ocean turbulence exerts control over the climate system by regulating the oceanic uptake and redistribution of heat, carbon, nutrients and other tracers. Observations of such turbulence, and our ability to model it numerically, were limited, if not non-existent, until very recently. The challenge ahead is to understand the physics of this turbulence and so represent it properly in coarse-resolution climate models, which cannot resolve such processes.
The aim of this project is to better quantify the energy pathways from the mean geostrophic flow to dissipation and mixing through interactions with bottom topography, first through the generation of overturning lee waves above ridge-like topography and later through the generation of lee vortices at seamounts. Recent realistic and high resolution simulations of the Drake Passage provide an invaluable case study of flow topography interactions in an energetic and dynamically relevant area of the Southern Ocean. We will isolate these key processes from the simulations, analyse them for their contribution to energy pathways in the Southern Ocean, and create theoretical frameworks in order to enable better parametrisation of the processes in coarse resolution models.

Supervisors: Paul Williams (Reading), Nancy Nichols (Reading), Ian Poll (Poll AeroSciences Ltd)
Air travel is the subject of much current controversy. Statistics for fuel use and CO2 emissions make uncomfortable reading for both airlines and environmental groups. Today’s flight routes avoid areas of strong headwinds and make use of available tailwinds, flown at a set airspeed chosen for optimal low fuel burn. During the MRes phase of the project, however, it was shown that these trajectories do not always minimise fuel burn.
Airlines are keen to be able to provide a timetable that is unaffected by a particularly strong wind field. Delays are costly and early arrival can often result in extra fuel burn due to holding patterns. This PhD project will find optimal routes to minimise fuel burn for set departure and arrival times. Varying both airspeed and altitude, whilst considering the expected background wind field and the change in aircraft mass due to fuel burn, will provide a realistic model for the cruise phase of transatlantic flights.
Using Optimal Control theory, the dynamical system of routing equations derived in each situation can be solved numerically. The fuel burn statistics from the model can then be compared with recent actual flight data and recommendations made to the airline industry.
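The trade-off at the core of the routing problem can be sketched with a deliberately crude cruise model in which fuel flow grows like the cube of airspeed; the coefficients below are invented for illustration and are not an aircraft performance model:

```python
import numpy as np

# crude cruise model with invented coefficients: fuel flow ~ v^3,
# ground speed = airspeed + tailwind, fixed-distance leg
def fuel(v_air, wind, distance=6.0e6):
    hours = distance / (v_air + wind) / 3600.0   # flight time in hours
    flow = 8.0e-5 * v_air**3 + 2000.0            # kg per hour (hypothetical form)
    return hours * flow

v = np.linspace(180.0, 300.0, 1201)              # candidate airspeeds, m/s
best = {wind: v[np.argmin(fuel(v, wind))] for wind in (-30.0, 0.0, 30.0)}
print(best)    # optimal airspeed rises in a headwind, falls in a tailwind
```

Even this toy model shows that the fuel-optimal airspeed depends on the wind encountered, so a route flown at a single fixed airspeed cannot in general minimise fuel burn; the full project replaces the grid search with optimal control over airspeed, altitude and track.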
Lead supervisor: Paul Williams (Reading)
Co-supervisors: Dante Kalise (Imperial) and Nancy Nichols (Reading). Industrial co-supervisor: Ian Poll (Poll AeroSciences Ltd)

Supervisors: Professor Ted Shepherd (Lead Supervisor, Department of Meteorology, University of Reading), Professor Sebastian Reich (Department of Mathematics and Statistics, University of Reading), Dr Jeff Knight (Met Office)
Project summary: Although there is confidence in the thermodynamic aspects of global climate change for a given level of global warming, there is crucially still large uncertainty in the dynamical response at the regional scale. This is due to the role of atmospheric circulation, projected changes in which are poorly constrained by Global Climate Models (GCMs), which give widely divergent responses, reflecting underlying model errors.
In order to identify the physically plausible range of responses, it is first necessary to identify models’ errors in short-timescale behaviour, for instance by comparing outputs with observed seasonal variability. Secondly, the connection between such errors and the spread in future projections needs to be understood and used to rule out unphysical projections. Within climate science this method is referred to as ‘emergent constraints’, its validity being rooted in the principles behind the fluctuation-dissipation theorem (FDT) in statistical physics. Whilst promising, the application of emergent constraints in climate science has often failed, arguably due to unsuitable practical estimates both of short-term errors and of their connection with long-term responses.
In this PhD we aim to tackle the issue of constraining the circulation response to climate change by adopting time-series Bayesian causal networks (BCNs). This is a mathematical framework suited to addressing questions related to causality, and its practical implementation results in a tool for robust statistical inference. An N-variate time-evolving process can be associated with a time-series BCN by representing relations of pairwise conditional dependence in the process as lag-specific, time-oriented links in the graph. The definition translates into a practical procedure for inferring causal links from data, once a test for conditional independence is chosen.
In the PhD we will use BCNs to estimate model errors on seasonal time scales, by comparing causal mechanisms detected from reanalysis data with those extracted from model outputs. Then, we will connect short-timescale model errors to the long-term projections. The idea here is to complement the FDT-based reasoning with the insight into the data provided by the BCN representation.
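One common choice of conditional-independence test is lagged partial correlation. A sketch on a synthetic causal chain: the raw lagged correlation between X and Z is large, but vanishes once the mediating variable Y is conditioned on, which is exactly the distinction a causal network encodes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# synthetic causal chain with lags (periodic wrap at the ends is ignored):
# X(t-1) -> Y(t) and Y(t-1) -> Z(t); there is no direct X -> Z link
x = rng.standard_normal(n)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(n)
z = np.roll(y, 1) + 0.5 * rng.standard_normal(n)

def parcorr(a, b, cond):
    # partial correlation: correlate the residuals after regressing out cond
    A = np.column_stack([cond, np.ones(n)])
    ra = a - A @ np.linalg.lstsq(A, a, rcond=None)[0]
    rb = b - A @ np.linalg.lstsq(A, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

raw = np.corrcoef(np.roll(x, 2), z)[0, 1]        # X(t-2) vs Z(t): strongly linked?
cond = parcorr(np.roll(x, 2), z, np.roll(y, 1))  # ... vanishes given Y(t-1)
print(raw, cond)
```

This is the simplest linear test; applying such procedures to circulation regimes will require accounting for nonlinearity, seasonality and the memory effects mentioned above.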

Principal Supervisor: Prof. Pavel Berloff (Department of Mathematics, Imperial College London)
Co-advisor: Peter Dueben, (ECMWF)
Summary: The oceanic turbulent circulation exhibits multiscale motions on very different space and time scales interacting with each other; e.g., jets, vortices, waves, and large-scale variability. In particular, mesoscale oceanic eddies populate nearly all parts of the ocean and need to be resolved in order to represent their effects on the general ocean and atmosphere circulations. However, capturing effects of these small-scale flows is highly challenging and requires non-trivial approaches and skills, especially when it comes to representing their effects in non-eddy-resolving ocean circulation models. Therefore, the main goal of my project is to develop data-driven eddy parameterizations for use in both eddy-permitting and non-eddy-resolving ocean models. Dynamical models of reduced complexity will be developed to emulate the spatio-temporal variability of mesoscale eddies as well as their feedbacks across a large range of scales. These can serve as a low-cost oceanic component for climate models; and therefore the final aim of this project is to use the existing observational data to feed eddy parameterizations in comprehensive ocean circulation and climate models, such as the ones used in global weather forecasts or in Climate Model Intercomparison Project (CMIP) models like CMIP7.
We will employ a variety of common and novel techniques from statistical data analysis and numerical linear algebra to extract the key properties and characteristics of the space-time-correlated eddy field. The key steps in this framework are: (a) find the relevant data-adaptive basis functions, i.e. decompose the time-evolving datasets into their leading spatio-temporal modes using, for example, variance-based methods such as Principal Component Analysis (PCA); and (b) once the subspace spanned by these basis functions is obtained, derive evolution equations that emulate the spatio-temporal correlations of the system, using methods such as nonlinear autoregression, artificial neural networks, Linear Inverse Modelling (LIM), etc.
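Steps (a) and (b) can be sketched end-to-end on synthetic data: PCA recovers a two-mode basis, and a Linear Inverse Model fitted in the reduced space recovers the modes' decay rates. The synthetic field below, two AR(1)-driven spatial patterns plus noise, is of course a stand-in for real eddy data:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for an eddy field: 2000 snapshots of a 100-point field
# driven by two AR(1) modal amplitudes, plus observation noise
grid = np.linspace(0.0, np.pi, 100)
modes = np.stack([np.sin(grid), np.sin(2.0 * grid)])    # orthogonal patterns
amps = np.zeros((2000, 2))
for i in range(1, 2000):
    amps[i] = np.array([0.95, 0.90]) * amps[i - 1] + 0.1 * rng.standard_normal(2)
X = amps @ modes + 0.05 * rng.standard_normal((2000, 100))

# step (a): data-adaptive basis via PCA (EOFs) of the anomaly field
Xa = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xa, full_matrices=False)
explained = (s[:2] ** 2).sum() / (s**2).sum()
pcs = U[:, :2] * s[:2]                     # leading principal components

# step (b): Linear Inverse Model, x(t+1) = G x(t), fitted in the reduced space
G = np.linalg.lstsq(pcs[:-1], pcs[1:], rcond=None)[0].T
eigs = np.sort(np.abs(np.linalg.eigvals(G)))
print(explained, eigs)     # near 1.0; eigenvalues near the true 0.90 and 0.95
```

The eigenvalues of the fitted propagator lie inside the unit circle and match the prescribed AR coefficients, showing that the reduced model is a stable emulator of the modal dynamics; real eddy data would call for the nonlinear and stochastic extensions mentioned above.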
The proposed new science will help develop a state-of-the-art data-adaptive modelling framework for evaluation and application of Machine Learning and rigorous mathematical theory for dynamical and empirical reduction within the hierarchy of existing oceanic models.

Supervisors: Matthew Piggott (Lead supervisor, Department of Earth Science & Engineering, Imperial College London) and Colin Cotter (Department of Mathematics, Imperial College London). Industry supervisor: Dr Catherine Villaret (East Point Geo Consulting).
Summary: An estimated 250 million people live in regions that are less than 5 metres above sea level. Hence with sea level rise and an increase in both the frequency and severity of storms as a result of climate change, the coastal zone is becoming an ever more critical location for the application of advanced mathematical techniques. Models are currently used to assist in the design of coastal zone engineering projects including flood defences and marine renewable energy arrays. There are many challenges surrounding the development and application of appropriate coupled numerical models because they include both hydrodynamic and sedimentary processes and need to resolve spatial scales ranging from sub-metre to 100s of kilometres.
My project aims to develop and use advanced numerical modelling and statistical tools to improve the understanding of hazards and the quantification and minimisation of erosion and flood risk. Throughout this project, I will consider the hazards in the context of idealised as well as real world scenarios.
The main model I will use in my project is XBeach, which uses simple numerical techniques to compute dune erosion, scour around buildings and overwash. XBeach is also currently used, to a limited degree, with Monte Carlo techniques to generate large numbers of storm events with different wave-climate parameters. Uncertain atmospheric forcing is very important for erosion/scour processes and flood risk, which are intimately linked in many situations and cannot be considered in isolation. In my project I will explore how the relatively new technique of Multilevel Monte Carlo simulation can be combined with XBeach to quantify erosion and flood risk. I am interested not only in the effects of extreme events, but also in the cumulative effect of minor storm events, for which Monte Carlo techniques are particularly appropriate. I will also explore how an adaptive mesh approach can be coupled with the statistical approach to assess the risk to coastal areas.
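The multilevel Monte Carlo idea can be sketched in a few lines. Here `model` is a hypothetical stand-in for an XBeach run at a given grid level (finer levels carry smaller discretisation error), not the real code; the telescoping sum uses many cheap coarse samples and few expensive fine ones:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(storm, level):
    """Hypothetical stand-in for an XBeach run: 'erosion' from one storm
    computed at resolution `level`; the discretisation error shrinks as
    2**-level.  (Purely illustrative, not XBeach's actual behaviour.)"""
    return storm ** 2 + 2.0 ** (-level) * rng.standard_normal()

def mlmc_estimate(n_samples, max_level):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for level in range(max_level + 1):
        storms = rng.standard_normal(n_samples[level])  # wave-climate draws
        if level == 0:
            diffs = [model(s, 0) for s in storms]
        else:
            # same storm on both grids, so the resolved part cancels and
            # only the (small) inter-level correction remains
            diffs = [model(s, level) - model(s, level - 1) for s in storms]
        total += np.mean(diffs)
    return total

# many samples on the coarse level, few on the fine ones
estimate = mlmc_estimate(n_samples=[4000, 400, 40], max_level=2)
```

For this toy model the true expected value is 1.0; the multilevel estimator recovers it at a fraction of the cost of sampling the finest level alone.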

Supervisors: Martin Rasmussen (Imperial College London, Department of Mathematics), Jochen Broeker (University of Reading, Department of Mathematics and Statistics), Pavel Berloff (Imperial College London, Department of Mathematics)
Summary: The concept of a tipping point (or critical transition) describes a phenomenon in which the behaviour of a physical system changes drastically, and often irreversibly, in response to a small change in its external environment. Relevant examples in climate science are the possible collapse of the Atlantic Meridional Overturning Circulation (AMOC) due to increasing freshwater input, or the sudden release of carbon in peatlands due to an external temperature increase. The aim of this project is to develop the mathematical framework for tipping points and thereby contribute to a deeper understanding of them.
A number of generic mechanisms have been identified which can cause a system to tip. One such mechanism is rate-induced tipping, where the transition is caused by a parameter changing too quickly, rather than by it moving past some critical value. Traditional mathematical bifurcation theory fails to address this phenomenon. The goal of this project is to use and develop the theory of non-autonomous and random dynamical systems to understand rate-induced tipping in the presence of noise. A question of particular practical importance is whether it is possible to develop meaningful early-warning indicators for rate-induced tipping using observational data. We will investigate this question from a theoretical viewpoint and apply it to more realistic models.
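Rate-induced tipping can be demonstrated on a standard textbook example (a toy system, not one of the project's models): dx/dt = (x + λ(t))² − 1 with a linearly ramped parameter λ(t) = rt. In the co-moving coordinate y = x + λ one finds dy/dt = y² − 1 + r, so the tracked stable branch disappears exactly at r = 1: the system tips because the parameter moves too *fast*, not because it crosses any critical value.

```python
def simulate(r, lam_max=20.0, dt=1e-3, t_end=40.0):
    """Euler integration of dx/dt = (x + lam)**2 - 1 with the ramp
    lam(t) = min(r*t, lam_max).  The quasi-static stable state is
    x = -lam(t) - 1; tipping occurs for ramp rates r > 1."""
    x, t = -1.0, 0.0                 # start on the stable branch at lam = 0
    while t < t_end:
        lam = min(r * t, lam_max)
        x += dt * ((x + lam) ** 2 - 1.0)
        t += dt
        if x > 10.0:                 # left the stable branch: tipped
            return True
    return False

tipped_slow = simulate(r=0.5)        # below the critical rate: tracks
tipped_fast = simulate(r=1.5)        # above the critical rate: tips
```

The final value of λ is the same in both runs; only the rate differs, which is precisely the feature that classical (quasi-static) bifurcation analysis misses.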

Supervisors: Xue-Mei Li (Department of Mathematics, Imperial College London, Lead supervisor), Darryl Holm ( Department of Mathematics, Imperial College London), Dan Crisan (Department of Mathematics, Imperial College London)
Summary: The Gulf Stream can be thought of as a giant meandering ribbon-like river in the ocean which originates in the Caribbean basin and carries warm water across the Atlantic to the west coast of Europe, keeping the European climate relatively mild. In spite of its significance to weather and climate, the Gulf Stream has remained poorly understood by oceanographers and fluid dynamicists for the past seventy years. This is largely due to the fact that the large-scale flow is significantly affected by multi-scale fluctuations known as mesoscale eddies. It is hypothesised that the mesoscale eddies produce a backscatter effect which is largely responsible for maintaining the eastward jet extensions of the Gulf Stream and other western boundary currents.
The difficulty in modelling such currents lies in the high computational cost associated with running oceanic simulations with sufficient resolution to include the eddy effects. Therefore approaches to this problem have been proposed which involve introducing some form of parameterisation into the numerical model, such that the small scale eddy effects are taken into account in coarse grid simulations.
There are three main approaches we may consider in including this parameterisation: the first is stochastic advection, the second is deterministic roughening and the third is data-driven emulation.
These approaches have all been explored for relatively simple quasi-geostrophic ocean models, but we shall attempt to apply them to more comprehensive primitive equation models which have greater practical applications in oceanography. In particular, we shall apply our parameterisations to the MITgcm and FESOM2 models, run them on a low-resolution grid, and compare the results with high-resolution simulations.
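The stochastic-advection idea can be caricatured in one dimension: uncertainty perturbs the flow map itself rather than the tracer values. The sketch below (assumptions: constant mean flow, additive transport noise, a blob tracked by its centre; nothing here resembles the MITgcm/FESOM2 setup) shows how an ensemble then carries both the deterministic transport and a quantified spread:

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_advection(n_members=50, u=1.0, sigma=0.3, dt=0.01, n_steps=200):
    """Euler-Maruyama for dX = u dt + sigma dW applied to the centre of a
    tracer blob: the simplest caricature of stochastic *transport* noise."""
    centres = np.zeros(n_members)
    for _ in range(n_steps):
        centres += u * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_members)
    return centres

finals = stochastic_advection()
# ensemble mean ~ deterministic transport u*T = 2.0;
# ensemble spread ~ sigma*sqrt(T) quantifies the unresolved eddy effect
```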

Supervisors: Nikolas Kantas (Department of Mathematics, Imperial College London, Lead supervisor), Professor Alistair Forbes (NPL)
Summary: This project aims to develop new methodology for performing statistical inference in environmental modelling applications. These applications require the use of a large number of sensors that collect data frequently and are distributed over a large region in space. This motivates the use of a space-time varying stochastic dynamical model, defined in continuous time via a (linear or non-linear) stochastic partial differential equation, to model quantities such as air quality, pollution level, and temperature. We are naturally interested in fitting this model to real data and, in addition, in improving the statistical inference through a carefully chosen frequency for collecting observations, optimal sensor placement, and automatic calibration of sensor biases. From a statistical perspective, these problems can be formulated using a Bayesian framework that combines posterior inference with optimal design.
Performing Bayesian inference or optimal design for the chosen statistical model may be intractable, in which case the use of simulation based numerical methods will be necessary. We aim to consider computational methods that are principled but intensive, and given the additional challenges relating to the high dimensionality of the data and the model, must pay close attention to the statistical model at hand when designing algorithms to be used in practice. In particular, popular methods such as (Recursive) Maximum Likelihood, Markov Chain Monte Carlo, and Sequential Monte Carlo, will need to be carefully adapted and extended for this purpose.
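For concreteness, a minimal random-walk Metropolis sampler for a toy sensor problem is sketched below. The data model (a constant field observed with known Gaussian noise) and all numbers are invented for illustration; the real project involves SPDE dynamics and far higher dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy setting: noisy sensor readings of a constant field value theta,
# known noise scale 0.5, prior theta ~ N(0, 10^2)
true_theta = 1.5
data = true_theta + 0.5 * rng.standard_normal(100)

def log_posterior(theta):
    log_prior = -0.5 * theta ** 2 / 10.0 ** 2
    log_lik = -0.5 * np.sum((data - theta) ** 2) / 0.5 ** 2
    return log_prior + log_lik

def random_walk_metropolis(n_iter=5000, step=0.1):
    """Propose theta' = theta + step*xi, accept with probability
    min(1, exp(logpost(theta') - logpost(theta)))."""
    theta = 0.0
    lp = log_posterior(theta)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = random_walk_metropolis()
posterior_mean = chain[1000:].mean()     # discard burn-in
```

With a weak prior the posterior mean essentially recovers the sample mean of the data; the challenge the project addresses is making such schemes work when `theta` is a high-dimensional, time-evolving field.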

Supervisors: Prof Axel Gandy (Statistics Section, Department of Mathematics, Imperial College London), Dr David Brayshaw (Department of Meteorology, University of Reading)
In the face of climate change, considerable efforts are being undertaken to reduce carbon emissions. One of the most promising pathways to sustainability is decarbonising electricity generation and electrifying other sources of emissions such as transport and heating. This requires a near-total decarbonisation of power systems in the next few decades.
Making strategic decisions regarding future power system design (e.g. what power plant to build) is challenging for a number of reasons. The first is their complexity: electricity grids can be immensely complicated, making the effect of e.g. an additional power plant difficult to estimate. The second is the considerable uncertainty about future technologies, fuel prices and grid improvements. Finally, especially as more weather-dependent renewables are added, there is climate-based uncertainty: we simply don’t know what the future weather will be, or how well times of high demand will line up with times of high renewable output.
This project aims to both understand the effect of climate-based uncertainty on power system planning problems and develop methodologies for robust decision-making under these unknowns. This will be done in the language of statistics, using techniques such as uncertainty quantification, data reduction and decision-making under uncertainty. Furthermore, this investigation will employ power system models, computer programs simulating the operation of an electricity grid.

Supervisors: Tristan Pryer (University of Reading, Department of Mathematics and Statistics, Lead supervisor)
Summary: Hierarchical modelling is a common feature in many application areas. Indeed, most large-scale geophysical simulations are built upon the basis of modelling phenomena with systems of PDEs. Depending on the application and the scale of the features to be simulated, various levels of approximation are made, based on some underlying physical reasoning, resulting in a hierarchy of PDE models. At the top level of this hierarchy sits a PDE system that contains all information currently known about the process. For example, climate models contain a huge amount of information, including atmospheric composition, hydrology, impacts of ice sheets, human influence, vegetation, oceanographic aspects, solar inputs and so on. These extremely complicated mathematical models are far too complex to admit any analytical solution method, so, practically, reductions are made, with information being ignored so that the system has a lower complexity. Naturally, this reduction gives rise to hierarchies of models. I study how to make use of these hierarchies from the numerical perspective.

When modelling complicated physical systems such as the ocean/atmosphere system with relatively simple mathematical models based on (ordinary/partial, deterministic/stochastic) differential equations, we expect some discrepancy between the mathematical model and the actual physical system. It is by now well understood that model error plays an important role in the fidelity of the mathematical model and in its predictive capabilities. Model uncertainty, together with additional sources of randomness due, e.g., to incomplete knowledge of the current state of the system, sensitive dependence on initial conditions, parameterization of the small scales, etc., should be taken into account when making predictions about the system under investigation.
In addition, many climatological models exhibit 'tipping points': critical transitions where the output of the model changes disproportionately compared to the change in a parameter. [LHK+08] documents several; the most pertinent to British weather is the Stommel-Cessi box model for Atlantic thermohaline circulation, which suggests that the Atlantic Meridional Overturning Circulation may collapse upon small changes in freshwater input.
Weather forecasting bodies overcome these inherent difficulties using ensemble techniques (or probabilistic forecasting), running multiple simulations to account for the range of possible scenarios. A forecast should then skilfully indicate the confidence the forecaster can have in their prediction, by accurately representing uncertainty [AMP13]. Clearly, model uncertainty can have a dramatic effect on the predictive capabilities of our mathematical model when we are close to a noise-induced transition, a tipping point or a phase transition. This poses an important mathematical question: how can we systematically quantify the propagation of uncertainty through the model, from model parameters and initial conditions to model output, even in cases of 'tipping'?
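A caricature of such a tipping point can be simulated directly. The sketch below uses the double-well normal form dx/dt = −x³ + x + p (an illustrative stand-in, not the Stommel-Cessi model itself), which has two stable equilibria for |p| < 2/(3√3) ≈ 0.385; ramping the forcing p exposes the saddle-node bifurcation and the hysteresis between branches:

```python
import numpy as np

def equilibrium(p, x0, dt=0.01, n_steps=20000):
    """Relax dx/dt = -x**3 + x + p to steady state from x0.  The system
    is bistable for |p| < 2/(3*sqrt(3)) and tips at that value."""
    x = x0
    for _ in range(n_steps):
        x += dt * (-x ** 3 + x + p)
    return x

# ramp the forcing upward, always restarting from the previously attained
# state: the system clings to the lower branch until the fold, then jumps
states_up = []
x = -1.0
for p in np.linspace(-0.5, 0.5, 21):
    x = equilibrium(p, x)
    states_up.append(x)
```

The state remains negative well past p = 0 (hysteresis) and only jumps to the upper branch once p exceeds the fold value; small parameter uncertainty near the fold therefore translates into a disproportionate output uncertainty.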
[LHK+08] Timothy M. Lenton, Hermann Held, Elmar Kriegler, Jim W. Hall, Wolfgang Lucht, Stefan Rahmstorf, and Hans Joachim Schellnhuber. Tipping elements in the Earth's climate system. Proceedings of the National Academy of Sciences, 105(6):1786–1793, 2008.
[AMP13] H. M. Arnold, I. M. Moroz, and T. N. Palmer. Stochastic parametrizations and model uncertainty in the Lorenz '96 system. Philosophical Transactions of the Royal Society of London Series A, 371:20110479, April 2013.
Supervisor: G.A. Pavliotis (Imperial College London); V. Lucarini (U. Reading)

Supervisors: Clare Watt, Department of Meteorology, Reading (Main), Paul Williams, Department of Meteorology, Reading (Co-Supervisor)
Abstract: Space Weather is the name given to the natural variability of the plasma and magnetic field conditions in near-Earth Space. 21st Century technology is increasingly reliant on space-based assets and infrastructure that are vulnerable to extreme space weather events. Due to the sparse nature of in-situ measurements, and the relative infancy of numerical space plasma physics models, we lack the ability to predict the timing and severity of space weather disruptions to either mitigate their effects, or adequately plan for their consequences.
In this project, we focus on important improvements to the numerical modelling of the Earth’s Outer Radiation Belt; a highly-variable region of energetic electrons in near-Earth space. In the Outer Van Allen Radiation Belt, electrons are trapped by the Earth's magnetic field and can be accelerated to a significant fraction of the speed of light. At such high energies, they pose significant hazards to spacecraft hardware. Most importantly for mankind’s reliance on space-based systems, the Outer Radiation Belt encompasses orbital paths that are of great use to society (e.g. geosynchronous orbit, and Medium Earth Orbits that contain global positioning and navigation systems).
The student will construct idealised numerical models of simple 1D diffusion problems with Dirichlet or Neumann boundary conditions and investigate their behaviour when appropriate stochastic parameterisations of diffusion coefficients are chosen. Initial and boundary values will be chosen to mimic realistic values in near-Earth space, and the solutions from the stochastic model will be compared with solutions from a traditional deterministic model. Given the novel nature of stochastic parameterisations in the field of space plasma physics modelling, the results from the MRes project will provide an important demonstration of the differences between stochastic and deterministic modelling and offer ideas of how to shape space weather models moving forward.
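The deterministic-versus-stochastic comparison described above can be prototyped in a few lines. The sketch below solves a 1D diffusion problem with fixed Dirichlet boundary values by explicit finite differences; the lognormal distribution for the diffusion coefficient is a hypothetical choice made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def diffuse(D, n=21, dt=2e-4, n_steps=2000, bc=(0.0, 1.0)):
    """Explicit finite differences for df/dt = d/dx(D df/dx) on [0, 1]
    with Dirichlet boundary values bc -- a crude 1D stand-in for radial
    diffusion of radiation-belt electrons between fixed boundaries."""
    f = np.zeros(n)
    f[0], f[-1] = bc
    dx = 1.0 / (n - 1)
    for _ in range(n_steps):
        flux = D * np.diff(f) / dx              # flux at cell interfaces
        f[1:-1] += dt * np.diff(flux) / dx
    return f

# deterministic run with the mean diffusion coefficient
f_det = diffuse(D=1.0)
# ensemble with a (hypothetical) stochastic parameterisation: lognormal D
ensemble = np.array([diffuse(D=rng.lognormal(sigma=0.3)) for _ in range(30)])
f_mean = ensemble.mean(axis=0)
# the solution depends nonlinearly on D, so the ensemble mean need not
# coincide with the deterministic run -- the difference the project probes
```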

Supervisors: Valerio Lucarini (lead, U. Reading), Jochen Broecker (U. Reading), Tobias Kuna (U. Reading)
Climate is a complex, forced, non-equilibrium dissipative system that can be understood as a high-dimensional dynamical system. Moreover, climate is subject to different kinds of forcing that create fluctuations in the governing dynamics. In our project we shall delve into so-called transfer operator methods in dynamical systems. The transfer operator is a mathematical device that describes the evolution of distributions in phase space. As such, it captures the information related to the statistics of our system and allows one to construct a response theory based on it. In my project, we will be concerned with the validity of these methods in a geophysical context. We will study the properties of these operators in coarse-grained phase space and how they capture information about the (perturbed) dynamics.
By working in a coarse-grained phase space one can construct matrix approximations of the transfer operator. In particular, we will study the validity of response formulas based on these approximations to investigate their applicability. What is the suitable mathematical framework for these formulas to be valid? How well do they capture the effects of perturbations? Further, since real-world systems are high-dimensional, we will assess the problem of dimensionality reduction. When the dynamics are projected onto the variables of interest, some of the properties of the transfer operator are lost. What are the mechanisms that provoke the loss of these properties? Answering these questions will give evidence of the applicability of transfer operator methods in the study of climate, with an emphasis on its structural statistical properties.
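Ulam's method is the standard way to build such matrix approximations: partition phase space into bins and estimate the probability of moving from one bin to another in one step of the dynamics. The sketch below applies it to the logistic map — a simple chaotic stand-in for a geophysical system, used here only for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def ulam_matrix(f, n_bins=100, samples_per_bin=200):
    """Ulam's method: entry P[i, j] estimates the probability that a point
    drawn uniformly from bin i lands in bin j under the map f, giving a
    coarse-grained matrix approximation of the transfer operator."""
    P = np.zeros((n_bins, n_bins))
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for i in range(n_bins):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
        idx = np.minimum((f(x) * n_bins).astype(int), n_bins - 1)
        for j in idx:
            P[i, j] += 1.0 / samples_per_bin
    return P

logistic = lambda x: 4.0 * x * (1.0 - x)
P = ulam_matrix(logistic)

# the invariant density is the left eigenvector with eigenvalue 1;
# subdominant eigenvalues control the decay of correlations
vals, vecs = np.linalg.eig(P.T)
k = np.argmax(vals.real)
density = np.abs(vecs[:, k].real)
density /= density.sum()
```

The recovered density correctly concentrates near the endpoints, matching the known invariant density 1/(π√(x(1−x))) of the logistic map; how such spectral information degrades under projection and perturbation is exactly what the project investigates.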

Supervisor: Valerio Lucarini (Department of Mathematics and Statistics, University of Reading)
Co-advisor: Paul Williams (Department of Meteorology, University of Reading), Niklas Boers (Grantham Institute - Climate Change and the Environment, Imperial College London)
Description: The construction of parameterisations for multi-scale systems is a key research area for GFD, because the dynamics of the atmosphere and of the ocean cover a wide range of temporal and spatial scales of motion (Berner et al. 2017). Additionally, the variability of geophysical fluids is characterized by a spectral continuum, so that it is not possible to define unambiguously a spectral gap separating slow from fast motions. As a result, the usual mathematical methods based on homogenization techniques cannot be readily applied to perform the operation of coarse graining. As shown in recent literature (Chekroun et al. 2015, Wouters and Lucarini 2012, 2013, Demaeyer and Vannitsem 2017, Vissio and Lucarini 2017), the lack of time-scale separation unavoidably leads to non-Markovian terms when constructing the effective equations for the slower modes of variability - which are those we want to explicitly represent - that surrogate the effect of the faster scales - which are, instead, those we want to parameterise.
Two methods have been proposed to deal effectively and rigorously with this problem:
1) The direct derivation of effective evolution equations for the variables of interest, obtained through a perturbative expansion of the Mori-Zwanzig operator (Wouters & Lucarini 2012, 2013);
2) The reconstruction of the effective evolution equations for the variables of interest though an optimization procedure due to Kondrashov et al. (2015) and Chekroun et al. (2017).
Both methods (which we refer to as top-down and bottom-up, respectively) lead to parameterisations including a deterministic, a stochastic, and a non-Markovian (memory effects) component. The two methods are conceptually analogous, but have never been compared on a specific case study of interest. The MSc project proposed here builds upon the earlier results of Vissio and Lucarini (2017) and deals with constructing and comparing the two parameterisations for the 2-level Lorenz ’96 system, which provides a classic benchmark for testing new theories in GFD. The goal will be to understand the merits and limits of both parameterisations and to appreciate their differences in terms of precision, adaptivity, and flexibility.
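For reference, a minimal integration of the 2-level Lorenz '96 benchmark is sketched below, with standard parameter values; the coupling convention follows Lorenz (1996), though sign conventions vary across papers:

```python
import numpy as np

def l96_two_level(X, Y, F=10.0, h=1.0, b=10.0, c=10.0):
    """Tendencies of the 2-level Lorenz '96 system: K slow variables X,
    each coupled to J fast variables Y (both treated as cyclic vectors)."""
    K, J = len(X), len(Y) // len(X)
    dX = (np.roll(X, -1) - np.roll(X, 2)) * np.roll(X, 1) - X + F \
         - (h * c / b) * Y.reshape(K, J).sum(axis=1)
    dY = c * b * (np.roll(Y, 1) - np.roll(Y, -2)) * np.roll(Y, -1) - c * Y \
         + (h * c / b) * np.repeat(X, J)
    return dX, dY

def rk4_step(X, Y, dt):
    k1 = l96_two_level(X, Y)
    k2 = l96_two_level(X + 0.5 * dt * k1[0], Y + 0.5 * dt * k1[1])
    k3 = l96_two_level(X + 0.5 * dt * k2[0], Y + 0.5 * dt * k2[1])
    k4 = l96_two_level(X + dt * k3[0], Y + dt * k3[1])
    X = X + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    Y = Y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return X, Y

rng = np.random.default_rng(6)
K, J = 8, 8
X, Y = rng.standard_normal(K), 0.1 * rng.standard_normal(K * J)
for _ in range(5000):                     # dt = 0.001 -> 5 model time units
    X, Y = rk4_step(X, Y, 0.001)
```

A parameterisation in this context replaces the Y equations entirely by deterministic, stochastic, and memory terms acting on X alone; the full model above supplies the ground truth against which both closures are judged.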

Supervisors:
Tristan Pryer – University of Reading (Lead Supervisor)
Alex Lukyanov – University of Reading
Cassiano Bortolozo – Brazil, Centro Nacional de Monitoramento e Alertas de Desastres Naturais (CEMADEN)
Summary of the project: Landslides are extreme events that occur when the topsoil on a hill becomes weakened. The result of this can be devastating, both through loss of life and also economic damage. In 2011 a series of floods and mudslides took place in the state of Rio de Janeiro, Brazil. This catastrophe caused over 900 people to lose their lives. This was the driving force behind the creation of the National centre for natural disaster monitoring and alerts (CEMADEN).
For my MRes project, I applied a simple adaptive scheme to numerically solve a simplified PDE model of flow in a porous medium. Data was collected by CEMADEN in an area considered to be at risk from landslides and incorporated into the model to test its sensitivity to the huge variation in soil parameters that determine the flow. Mesh adaptivity was informed by rigorous error estimates involving only the problem data and the numerical solution. Deriving such estimates is known as a posteriori error analysis. The resulting mesh was found to capture the influences of the multiscale data on the solution quite well, but with some undesirable numerical artefacts.
The model used, however, was a heavy simplification. Thus, one of the first steps of my PhD research will be to investigate strategies for the numerical solution of more realistic PDE models with finite element methods. The PDE is degenerate and nonlinear, meaning that even obtaining a numerical solution is much more difficult, and standard techniques for a posteriori analysis cannot be readily applied. If error bounds can be derived, the model will then be tested with mesh adaptivity on data collected during our visit to CEMADEN in Brazil in August 2018. The aim is to create a model to efficiently simulate conditions in the soil so that the team at CEMADEN can use this to inform their work, in which they are responsible for issuing warnings if they believe a landslide is imminent.
As the research progresses, we hope to work more closely with CEMADEN to both use data that they collect and try to adapt our work towards their specific needs in landslide prediction, with the end goal being to provide an accurate and efficient model, informed by the needs of the users.
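The solve-estimate-mark-refine cycle that drives such mesh adaptivity can be illustrated on a 1D Poisson problem. The indicator below is a simplified residual-style quantity, not the sharper a posteriori bounds the project derives, and the sharp source term is a stand-in for localised soil data:

```python
import numpy as np

def solve_p1(nodes, f):
    """Linear (P1) finite elements for -u'' = f, u(0) = u(1) = 0,
    assembled on an arbitrary 1D mesh given by sorted `nodes`."""
    n = len(nodes)
    h = np.diff(nodes)
    A, b = np.zeros((n, n)), np.zeros(n)
    for k in range(n - 1):
        A[k:k + 2, k:k + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[k]
        mid = 0.5 * (nodes[k] + nodes[k + 1])
        b[k:k + 2] += 0.5 * h[k] * f(mid)        # midpoint-rule load
    A[0, :] = A[-1, :] = 0.0
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0                           # Dirichlet conditions
    return np.linalg.solve(A, b)

def adapt(nodes, f, n_cycles=5, frac=0.3):
    """Solve -> estimate -> mark -> refine: bisect the `frac` of elements
    with the largest indicator eta_K ~ h_K * ||f||_{L2(K)}."""
    for _ in range(n_cycles):
        h = np.diff(nodes)
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        eta = h * np.abs(f(mids)) * np.sqrt(h)
        marked = np.argsort(eta)[-max(1, int(frac * len(eta))):]
        nodes = np.sort(np.concatenate([nodes, mids[marked]]))
    return nodes, solve_p1(nodes, f)

f = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)    # sharp, localised source
nodes, u = adapt(np.linspace(0.0, 1.0, 11), f)
# the mesh automatically clusters where the data varies fastest (x ~ 0.5)
```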

Supervisors: Peter Jan van Leeuwen (lead supervisor, Department of Meteorology, University of Reading), Jennifer Scott (Department of Mathematics and Statistics, University of Reading), Amos Lawless (Department of Mathematics and Statistics, University of Reading).
Summary of the MRes project: Geophysical systems can be characterised as high-dimensional, nonlinear, with complex feedbacks among a multitude of scales. Understanding the working of these systems, and predicting their future behaviour is a huge challenge. Solid progress has been made through analytical analysis, but computer simulations are an essential ingredient for research and predictions. Unfortunately, these simulations tend to drift quickly and strongly from reality.
Incorporating observational information in these models via data assimilation would allow us to study the true evolution of the system in unprecedented detail, and provide accurate forecasts. Data-assimilation is used routinely for numerical weather forecasting.
The main workhorse is 4DVar, a variational method that tries to find a best trajectory over a certain assimilation window, typically of 6 to 12 hours. The main bottlenecks of this method are the difficulty to make the computations parallel, the inability to make the assimilation windows longer because of the chaotic nature of the atmosphere, and the difficulty in obtaining proper uncertainty estimates.
A solution to the first two problems is to allow for model errors in the data-assimilation framework. This will allow for parallelisation and reduce the strong dependence on initial conditions, making the problem less nonlinear. The last problem can be addressed via an ensemble of 4DVars. A natural way is to treat each 4DVar as a draw from a proposal density in a particle filter.
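For reference, a standard form of the weak-constraint 4DVar cost function, in which model error enters through the final sum (notation: background state x_b with covariance B, observation operators H_k with error covariances R_k, model operators M_k with model-error covariances Q_k):

```latex
J(x_0,\dots,x_K) =
  \tfrac12\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
+ \tfrac12 \sum_{k=0}^{K} \big(y_k - H_k(x_k)\big)^{\mathsf T} R_k^{-1} \big(y_k - H_k(x_k)\big)
+ \tfrac12 \sum_{k=1}^{K} \big(x_k - M_k(x_{k-1})\big)^{\mathsf T} Q_k^{-1} \big(x_k - M_k(x_{k-1})\big).
```

Minimising over the whole trajectory (x_0, …, x_K), rather than over x_0 alone, is what decouples the window into parallelisable subproblems and weakens the sensitivity to initial conditions.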
We propose to investigate efficient solution methods for this minimisation problem, also exploring the fact that similar problems have to be solved in parallel for the different particles in the particle filter. If successful this would not only be a significant step forward in particle filtering, but also lay a solid foundation for the present ensemble methodology used by ECMWF and the Met Office, potentially leading to large improvements in weather forecasting.
This PhD project will provide strong mathematical foundations for this new class of minimisation problems in high-dimensional systems with the aim to make them robust for practical applications. The work will consist of both mathematical explorations, such as convergence proofs and developing the methods further, and numerical experiments on medium to high-dimensional systems, with the aim to eventually reach out to operational data-assimilation practice. For the data-assimilation experiments we will make use of the Parallel Data Assimilation Framework (PDAF) software, developed at AWI and now being incorporated into NCEO plans, which allows different data-assimilation algorithms to be tested on a range of models.

Supervisors: Dr. David Ferreira (Lead supervisor) and Dr. Maarten Ambaum (University of Reading, Meteorology), Dr. Valerio Lucarini (University of Reading, Mathematics)
Summary: The Southern Ocean is remote in location, but plays an important role in the global climate system: for example, it absorbs up to 75% of heat and up to 45% of carbon produced by human activity. As observations show that winds blowing over the Southern Ocean (which drive its circulation) have strengthened and shifted poleward in the past few decades, it is natural to ask whether it will continue absorbing heat and CO2 at the same rate. Furthermore, an increase of sea ice cover in the Southern Ocean has been observed during the same period of time, in stark contrast with the decreasing trend observed in the Northern Hemisphere, and this effect is attributed to wind stress modification too. Coupled ocean-atmosphere global circulation models, however, predict that this tendency will invert in the future, but the typical time-scale of the process is still a matter of debate. For these reasons, understanding and constraining the intensity and the time scales of the response of the Southern Ocean is a crucial topic in research.
A wide range of studies have investigated the equilibrium response of the Southern Ocean to wind changes, revealing that its sensitivity is significantly damped by interactions between mesoscale eddies (i.e. turbulent motion) and mean flow. However, a few recent works have also shown that the response actually comprises a superposition of multiple timescales, ranging from one month to more than a decade. Therefore, to capture past and future decadal trends it is essential to consider the transient adjustment of the Southern Ocean, and not just its equilibrium response.
The aim of my project is to rigorously characterise the physical processes and time scales involved in the transient response. From a physical perspective, the eddy-mean flow interaction may be described as a nonlinear oscillatory dynamical system, an approach which has already been successfully employed to study storm track variability. All existing models of the Southern Ocean response are linear, but it is vital to explore the nonlinear regime too. This will be used to complement and guide the interpretation of numerical step-change experiments conducted with a high-resolution global circulation model. Finally, response theory and nonequilibrium statistical mechanics are powerful tools to investigate the response of a complex climate system to modifications of a forcing parameter, and I plan to extend this approach to the dynamics of the Southern Ocean.

MPE CDT Aligned student
Supervisors: Valerio Lucarini (lead supervisor) and Tobias Kuna
Project Abstract: In this project I am going to investigate extreme events in simplified atmospheric models of the mid-latitudes from the point of view of Extreme Value Theory (EVT; Coles 2001). The idea is to extend the work of Felici et al. (2007a, 2007b), where it was first shown that EVT can be used to study extremes generated by an atmospheric model, going beyond the diagnostic analysis and taking advantage of the theoretical framework presented in Lucarini et al. (2016). I am going to investigate the properties of extremes of observables under different levels of spatial and temporal coarse graining, so as to understand the effect of averaging on our estimates of extremes. Additionally, statistics of extremes of coarse-grained fields will be compared with those obtained by running models at coarser resolution. Finally, I will investigate the response of the extremes to both time-independent and time-dependent perturbations affecting the dynamics, using response theory and pullback attractors. Throughout this work both deterministic and stochastic perturbations will be investigated, and the results will be used for model error assessment and analysis of multiscale effects.
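The block-maxima approach at the heart of EVT can be sketched as follows; the synthetic 'daily observable' and the moment-based Gumbel fit (a GEV fit with shape fixed to zero; a real analysis would estimate the shape parameter too) are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def block_maxima(series, block_size):
    """Split a long series into blocks (e.g. years) and keep each block's
    maximum -- the first step of the block-maxima approach of EVT."""
    n_blocks = len(series) // block_size
    return series[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

def fit_gumbel(maxima):
    """Moment estimators for the Gumbel distribution, the EVT limit for
    light-tailed parent distributions."""
    scale = maxima.std(ddof=1) * np.sqrt(6.0) / np.pi
    loc = maxima.mean() - np.euler_gamma * scale
    return loc, scale

def return_level(loc, scale, period):
    """Level exceeded on average once per `period` blocks."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / period))

daily = rng.standard_normal(365 * 100)       # toy daily observable, 100 'years'
maxima = block_maxima(daily, 365)
loc, scale = fit_gumbel(maxima)
level_100 = return_level(loc, scale, 100.0)  # 100-year return level
```

Coarse graining in the project amounts to applying spatial/temporal averages to `daily` before this analysis and asking how `loc`, `scale`, and the return levels respond.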
As a practical application, this work will lead to the definition of functions describing societal and economic impact of extreme climatic events, along with financial and insurance tools able to manage time-dependent risk assessment.

Supervisors: Prof. Emily Black (Lead supervisor, Department of Meteorology, University of Reading), Dr. Claudia Neves (Department of Mathematics and Statistics, University of Reading)
Farmers in Africa are highly vulnerable to variability in the weather. Robust and timely information on risk can enable farmers to take action to improve yield. Ultimately, access to effective early warning improves global food security. Such information also forms the basis of financial instruments, such as drought insurance. Monitoring weather conditions is, however, difficult in Africa because of the heterogeneity of the climate and the sparsity of the ground-observing network. Remotely sensed data (for example satellite-based rainfall estimates) are an alternative to ground observations – but only if the algorithms have skill across the whole rainfall distribution, and if the rainfall estimates are integrated into effective decision support frameworks. Current satellite-based rainfall estimates work well for rainfall occurrence and for low- and medium-intensity rainfall, but perform poorly for heavy rainfall.
Rainfall is often assumed to follow a gamma distribution, which fits the low- and mid-intensity rainfall well but underestimates the probability of heavy rainfall. To model the tails more accurately, we apply extreme value statistics, using both the block-maxima and the peaks-over-threshold methods. With these methods, only the largest values in the data are used, which makes them suitable for modelling changes in the most adverse events due to, for example, climate change.
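The peaks-over-threshold step can be sketched as below. The toy rainfall series, the fixed 95th-percentile threshold, and the method-of-moments fit of the Generalised Pareto Distribution are all illustrative simplifications (practical work would use maximum likelihood and check threshold stability):

```python
import numpy as np

rng = np.random.default_rng(8)

def fit_gpd_moments(excesses):
    """Method-of-moments fit of the Generalised Pareto Distribution:
    shape = (1 - m^2/v)/2, scale = m*(m^2/v + 1)/2, from the excess
    mean m and variance v."""
    m, v = excesses.mean(), excesses.var(ddof=1)
    shape = 0.5 * (1.0 - m ** 2 / v)
    scale = 0.5 * m * (m ** 2 / v + 1.0)
    return shape, scale

# toy daily 'rainfall': ~30% wet days drawn from a gamma distribution
wet = rng.random(20000) < 0.3
rain = np.where(wet, rng.gamma(shape=0.8, scale=5.0, size=20000), 0.0)

threshold = np.quantile(rain[rain > 0], 0.95)   # peaks-over-threshold
excesses = rain[rain > threshold] - threshold
shape, scale = fit_gpd_moments(excesses)
```

For a gamma-distributed parent the fitted GPD shape is close to zero (an exponential-type tail); heavy-tailed rainfall regimes would show up as a clearly positive shape parameter, which is exactly what the gamma assumption misses.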
In this project we will assess the effect of climate change on the likelihood of extreme rainfall/temperature events in Africa, and subsequently of adverse agricultural outcomes. We will do so by modelling the probability distributions of gauge observations TAMSAT V3 and reanalysis data, with a focus on return periods for extreme rainfall and assessment of uncertainties in return periods for extreme rainfall.This analysis will also be extended to modelled datasets for the historical period, including the new ultra-high resolution (~4km horizontal resolution) CP4Africa and other high resolution data as well as CMIP5. In addition to evaluating the behaviour and representation of heavy rainfall in these dataset, bivariate extreme analysis of temperature and precipitation will be conducted to evaluate the effect of a warming climate on precipitation. The results will be applied to TAMSAT calibration algorithms and to improve climatologies for the TAMSAT-ALERT risk assessments.

MPE CDT Aligned student
Supervisor: Darryl Holm
Research Interests: Point Vortex Dynamics, Turbulence, Geophysical Fluid Dynamics, Stochastic Analysis, Symmetry and Reduction in Hamiltonian and Lagrangian Systems, Differential Geometry
Research Project: My current research ideas lie at the intersection of 2D point vortex dynamics and geometric mechanics. Firstly, I aim to explore a stochastic theory of the motion of point vortices, based on the recent work of Holm (2015) on deriving stochastic fluid models using techniques from geometric mechanics, in order to help understand the phenomenon of vortex crystal relaxation in the 2D turbulence of inviscid fluids. Vortex crystal formation has been observed repeatedly in experiments on magnetized electron columns, which are governed by the same equations as ideal fluids, and in numerical simulations of point vortices, but the formation process is not completely understood. Modelling the weak background vortices as noise may give insight into this formation process. Secondly, I am considering controlling the motion of point vortices on a curved surface (sphere, paraboloid, etc.) by rigid body motion. This can be seen as a generalisation of, for instance, the motion of point vortices on a rotating sphere.

Supervisors: Darryl Holm (Lead Supervisor, Department of Mathematics, Imperial College London), Dan Crisan (Department of Mathematics, Imperial College London), Colin Cotter (Department of Mathematics, Imperial College London)
Summary: weather and ocean prediction requires solving the equations of fluid dynamics. However, our incomplete understanding of turbulence and other subgrid-scale effects, the chaotic nature of these equations, and the changing climate all make solving them incredibly difficult. By introducing stochastic transport noise [Holm 2015] into the equations of geophysical fluid dynamics, we will try to improve weather forecasting and, more importantly, provide a proper estimate of the uncertainty in the forecasts.

MPE CDT Aligned student
Research project: Vortices are near-ubiquitous geophysical and astrophysical phenomena. The study of vortices which occur in Earth’s oceans is crucial to our understanding of oceanic currents and climate, in part because the majority of kinetic energy of the ocean is contained within mesoscale vortices, but our fluid dynamical understanding of them is constrained both experimentally and practically. This research project aims to model turbulence in quasigeostrophy and describe, kinetically and statistically, the vortex population which naturally arises. An understanding of vortices modelled in this way should provide insight into the population of vortices which arises due to similar flow regimes in the ocean.

MPE CDT Aligned student
Research project: My research will focus on variational data assimilation schemes, where we aim to approximately minimize a function of the residuals of a nonlinear least-squares problem using newly developed, advanced numerical optimization methods. As the function usually depends on millions of variables, solving such problems can be time-consuming and computationally expensive. A possible application of the method is estimating the initial conditions for a weather forecast. Weather forecasting has a short time window (the forecast is no longer useful after the weather event occurs), so it is important to choose a method which gives the best solution within the time available. This is why the analysis of the efficiency of new techniques is of interest. In summary, the aim of my PhD research is to apply the latest mathematical advances in optimization to improve the forecasts made by environmental models whilst keeping computational cost and calculation time to a minimum.
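As a toy illustration of the nonlinear least-squares structure behind such schemes (a hand-rolled Gauss-Newton iteration on a two-parameter curve fit; operational variational assimilation works with millions of variables and adjoint models, so this is only a sketch of the idea):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Minimise 0.5*||r(x)||^2 by repeatedly solving the linearised
    least-squares problem J dx = -r, the core idea behind incremental
    variational assimilation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy problem: fit y = a*exp(b*t) to noiseless data (exact answer a=2, b=0.5).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p_hat = gauss_newton(res, jac, x0=[1.0, 0.0])
print(p_hat)  # close to [2.0, 0.5]
```

The expensive steps in practice are evaluating the residual (a model run) and the Jacobian (an adjoint model), which is why the efficiency of the outer optimization loop matters so much.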

MPE CDT Aligned student
Research project: In Neuroscience, mean-field models are nonlinear dynamical systems that are used to describe the evolution of mean neural population activity, within a given brain region such as the cortex. Mean-field models typically contain 10-100 unknown parameters, and receive high-dimensional noisy input from other brain regions. The goal of my PhD is to develop statistical methodology for inferring mechanistic parameters in this type of differential equation model.

MPE CDT Aligned student
Research project: Fast, approximate methods for electromagnetic wave scattering by complex ice crystals and snowflakes. The goal of my PhD is to develop a method to approximate the scattering properties of ice particles in clouds. This could be used to improve scattering models that are available, and therefore allow more precise retrievals of ice cloud properties. These retrievals could be used to evaluate model-simulated clouds and identify problems that exist in the model, thus enabling improvements to be made to the parameterization of ice processes.

MPE CDT Aligned student
Research project: Next-generation atmospheric models are designed to be more flexible than previous models, so that the choice of mesh and choices of numerical schemes can be deferred or changed during operation (Ford et al., 2013; Theurich et al., 2015). My PhD project seeks to make numerical weather and climate predictions more accurate by developing new meshes and numerical schemes that are suitable for next-generation models. In particular, the project addresses the modelling of orographic flows on arbitrary meshes, focusing on three aspects: first, how orography is best represented by a mesh; second, how to accurately advect quantities over orography and, third, how to avoid unphysical solutions in the vertical balance between pressure and temperature.

MPE CDT Aligned student
Research project: Atmospheric convection occurs on length scales far smaller than the grid scales of numerical weather prediction and climate models. However, as the resolution of modern models continues to increase, the local convective effects become evermore significant and, thus, there is a demand for new convection schemes which can produce accurate results at these new scales. One such candidate is through conditional averaging, an approach in which grid boxes are split into convectively stable and unstable regions where separate differential equations are solved for each. The scheme incorporates mass transport by convection and memory which are features often ignored in current models. There is thus a possibility to better represent convection using this new approach.

MPE CDT Aligned student
Research project: my research is divided into two closely related parts. In the first part, I consider two-dimensional models of stochastically driven limit cycles, which are used to describe oceanic weather fluctuations, and study the effect of the interaction between the noise excitation and a phase amplitude coupling, also called shear. I can show that for a certain class of such models there is a transition from noise-induced synchronisation to noise-induced chaos depending on the level of shear.
Synchronisation here means convergence to a single trajectory for all initial conditions, given a fixed noise realisation. Chaos is measured by a positive stability exponent (the Lyapunov exponent), positive entropy and certain properties of the invariant random measure. In models without shear, we can show that noise destroys the bifurcation from equilibria to limit cycles (Hopf bifurcations) with respect to the attracting objects, but that the bifurcation still manifests itself as a loss of hyperbolicity.
The second part consists in comparing such bifurcation phenomena for unbounded noise with scenarios for bounded noise. The first approach is to look at killed processes, i.e. to study only trajectories that survive on a bounded domain for potentially unbounded noise. The second approach is to model the bounded noise as a function of a fast chaotic variable perturbing the slow variable where the noise converges to Brownian motion in the scaling limit. I can use recent important results concerning stochastic limits of fast-slow systems.

MPE CDT Aligned student
Supervisor: Pavel Berloff (Imperial College London, Department of Mathematics)
Research project: Alternating jets are well observed in Earth's oceans and in planetary atmospheres such as Jupiter's. These elongated structures in the zonal velocity field can be a few hundred to thousands of kilometres long and can persist for weeks to months. Many mechanisms have been proposed to explain the phenomenon, but not all characteristics of these jets are accounted for, and the reality is rather more complex. The aim of my project is to understand the phenomenon using more complex mathematical models which incorporate real-world parameters. In particular, I am studying what effects bottom topography has on the stability of the jets.

MPE CDT Aligned student
Research project: Multilevel ensemble data assimilation
PhD project summary: Uncertainty appears in a variety of forms in climate and weather models. Many climate and weather models operate on low grid resolutions (which leaves a margin for error from sub-grid processes) or have a severe lack of data to guide them to accurate parameter estimates or initial conditions. Ensemble methods aim to incorporate this uncertainty into models to produce high-accuracy estimates by averaging over an ensemble of random (or pseudo-random) model realisations - Monte Carlo estimation. However, to reach the accuracy that most climate and weather models require, an extraordinary number of realisations of the model has to be computed, which is very computationally expensive. There is now an alternative: the multilevel Monte Carlo method (Giles [2006]). The multilevel Monte Carlo method produces an optimal combination of many low-accuracy samples and few high-accuracy samples to reduce the computational cost and the variance of the estimator. Advanced weather and climate ensemble prediction requires high-dimensional spatiotemporal models to be solved accurately, at a low computational cost, and to incorporate a large range of data assimilation techniques - this PhD attempts to achieve this using an adaptation of the multilevel Monte Carlo method.
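A minimal two-level sketch of the idea, for a scalar SDE solved with Euler-Maruyama (the model, parameters and level choices here are illustrative, not those of any weather or climate system):

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_paths(n_paths, n_steps, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Euler-Maruyama for dX = mu*X dt + sigma*X dW; returns X_T and the
    Brownian increments used, so a coarse path can reuse the same noise."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.full(n_paths, x0)
    for k in range(n_steps):
        X = X + mu * X * dt + sigma * X * dW[:, k]
    return X, dW

# Level 0: many cheap coarse samples (4 time steps each).
X0, _ = euler_paths(n_paths=20000, n_steps=4)
level0 = X0.mean()

# Level 1 correction: few samples of (fine - coarse) driven by the SAME
# noise, so the difference has small variance and needs far fewer samples.
n_corr, n_fine = 2000, 8
Xf, dW = euler_paths(n_paths=n_corr, n_steps=n_fine)
dW_coarse = dW[:, 0::2] + dW[:, 1::2]     # sum pairs of fine increments
dt_c = 1.0 / (n_fine // 2)
Xc = np.full(n_corr, 1.0)
for k in range(n_fine // 2):
    Xc = Xc + 0.05 * Xc * dt_c + 0.2 * Xc * dW_coarse[:, k]
correction = (Xf - Xc).mean()

mlmc_estimate = level0 + correction        # telescoping sum of two levels
print(mlmc_estimate)   # approximates E[X_1] = exp(0.05)
```

The telescoping structure E[fine] = E[coarse] + E[fine - coarse] is what allows most of the sampling effort to be spent on the cheap level.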

Supervisors: Richard Everitt (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Richard Sibly (School of Biological Sciences, University of Reading) and Robert Thorpe (Centre for Environment, Fisheries & Aquaculture Science (Cefas))
Summary: individual-based models (IBMs) are used to simulate the actions of individual animals as they interact with one another and the landscape in which they live. IBMs are also known as Agent Based Models, and have a history dating back to the 1940s. The first models were types of cellular automata, which dynamically model the states (e.g. black or white) of a grid of cells through the application of rules that act locally (e.g. the state of a cell at a particular time depends on the state of its neighbours at the previous time). Conway’s “Game of Life” is the most widely known example of such a system. Now, the term IBM is used for any model in which a global system is modelled through local interactions; these systems play a big role in “Complexity Science”. The study of such systems often focuses on whether useful global-scale emergent behaviour arises through modelling on the local-scale. However, this project is less focussed on the characteristics of the forward model, and more on solving the inverse problem of parameter estimation.
When used in spatially-explicit landscapes, IBMs can show how populations change in response to climate change or management actions. For instance, IBMs are being used to design strategies for conservation and for the exploitation of fisheries, given the likely effects of climate change on fish species. Stochastic computer simulation models are often the only practical way of answering such questions relating to climate change and ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate: there is an urgent need for improved methodology for performing these tasks, since existing methods are too slow and not always accurate. This project aims to improve the best existing method: Approximate Bayesian Computation (ABC). ABC is currently being used at Reading for statistical inference in a diverse range of applications in ecology, evolution and more widely. Examples include models of mackerel in the North East Atlantic, elephants in Amboseli National Park in Kenya, and local butterfly populations. These projects are investigating the likely impact of climate change. Other projects are studying cockles in the Burry Inlet; the evolution of pathogens; social network analysis; and statistical physics. In most of these cases the challenges of parameter estimation and model comparison are both of importance, but implementation can prove computationally expensive. This project aims to improve ABC methods and to collaborate with model builders to help them fit models to data. The initial focus will be on IBMs developed for fisheries management by Cefas, part of the UK government, where predicting the likely effects of climate change is of central importance. ABC works by comparing model outputs with data and is particularly useful for statistical inference where the model is only available in the form of a computer simulator, such as an IBM. Ideal Bayesian methods produce a multivariate posterior distribution over the parameters. This posterior distribution specifies the degree of support for different parameter vectors given the model, data and prior knowledge about the values the parameters are likely to take. Identifying the exact posterior is not always feasible, which has led to the development of approximate Bayesian methods, such as ABC. ABC outputs a sample of parameter vectors, which constitutes an approximation to the posterior distribution. This sample is found by intelligently searching through the space of possible parameters, finding parameters that yield simulation outputs similar to the real data.
Originally developed within population genetics, ABC is now widely used, with recent applications to, for example, range expansions, emerging infectious diseases, and forest dynamics. Despite its successes, there are situations in which most currently available ABC methodologies are not computationally feasible: in cases when either the parameter space is high dimensional, or when the simulator is computationally expensive. In previous work on IBMs we have also observed limitations of ABC techniques on models that do not provide an ideal fit to data. The overarching aim of this project is to improve ABC methods to make them sufficiently fast and accurate that they can be widely used to evaluate the likely effects of extreme weather and climate change.
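The basic ABC rejection scheme described above can be sketched as follows, using a stand-in simulator (a Poisson model with a single rate parameter, chosen purely for illustration; a real IBM would replace `simulator`, and practical ABC uses far more sophisticated search than plain rejection):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=100):
    """Stand-in for an IBM: we can sample from it, but we pretend the
    likelihood is unavailable. Here: Poisson counts with rate theta."""
    return rng.poisson(theta, size=n)

# "Observed" data generated with a true rate of 4.0.
observed = rng.poisson(4.0, size=100)
s_obs = observed.mean()                 # summary statistic of the data

# ABC rejection: draw theta from the prior, simulate, and keep theta
# whenever the simulated summary lands within eps of the observed summary.
prior_draws = rng.uniform(0.0, 10.0, size=20000)
eps = 0.1
accepted = [th for th in prior_draws
            if abs(simulator(th).mean() - s_obs) < eps]
posterior_sample = np.array(accepted)
print(posterior_sample.mean(), posterior_sample.size)
```

The accepted draws approximate the posterior; the computational pain point is visible even here, since every prior draw costs a full simulator run, which motivates the more efficient ABC algorithms the project targets.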

Supervisors: Colin Cotter (Lead Supervisor, Department of Mathematics, Imperial College London), Tommaso Benacchio (Met Office), Werner Bauer (Department of Mathematics, Imperial College London), Jemma Shipton (Imperial College London, Department of Mathematics)
Summary: compatible finite element methods have recently been proposed as a flexible discretisation for the dynamical core of weather and climate models. This flexibility allows the use of more general types of grids that avoid the parallel-computing scalability bottlenecks associated with the latitude-longitude grid. They are currently being developed for a new dynamical core at the Met Office, within the Gung Ho project. In this PhD project we investigate conservation properties of these discretisations, as well as approximate systems of equations that filter out sound waves, for example.

Supervisors: Sheehan Olver (Lead Supervisor, Department of Mathematics, Imperial College London) and Colin Cotter (Department of Mathematics, Imperial College London)
Summary of PhD project: The goal of this project is to investigate sparse spectral methods on triangulations of the sphere as an alternative to the spherical harmonics approach currently in use at the European Centre for Medium-range Weather Forecasts in their weather and climate model. The aim would be to preserve the predictive skill of the current approach whilst avoiding the parallel scalability bottleneck from the global spectral transform, which is expected to inhibit the future performance of the ECMWF model.

Supervisors: Danica Vukadinovic Greetham (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Claudia Neves (Department of Mathematics and Statistics, University of Reading)
Industrial Partner: Dr. Maciej Fila (Scottish and Southern Electricity Networks, Reading)
Summary: electric load forecasts are extremely important for both industry and society, as they inform various decision-making processes. For example, in the utility industry, electric load forecasts are used to make decisions about energy trading, pricing and generation. As electricity distribution from renewable sources increases and heat and transport move toward electrification, individual forecasts become more important, as they can inform, at both the national and community level, if and by how much the infrastructure should be upgraded. Most studies in electric load forecasting in the past century have focused on point load forecasting. However, in the most recent decade, more researchers have delved into providing probabilistic load forecasts (PLF) as business needs and electricity demand and generation evolve. Hong and Fan (2016) provided an in-depth literature review outlining various multiple linear regression, machine learning and graph theory based forecasts, and various ways to validate a forecast, such as the p-norm error. While most of these forecasts perform reasonably well in terms of averages, they tend to smooth out much of the peakiness of the actual electric load. To the best of our knowledge, no studies quantifying the extremal behaviour of the underlying probability distribution of the load have been conducted. For most electricity providers it is advantageous, and indeed valuable, to be able to forecast and prepare for spikes in demand, particularly when they occur simultaneously in multiple households. Engaging in this kind of study, which enables electricity distributors to generate better electric load forecasts, will enhance capacity for innovation in the management of the electricity grid, making it possible not only to implement timely infrastructure development but also to minimise resource consumption and thus reduce greenhouse gas emissions.
Reciprocally, in energy trading, load determines the instantaneous unit price of electricity. Thus, being able to forecast peaks accurately, both in time and magnitude, makes it possible to maximise profits in the energy market.
This project will take a two-pronged approach. One of the two prongs will look to improving individual electric load forecasts in general by adding more features such as renewable energy integration, temperature, demographics, clustering. The other will adapt existing methodology in extreme value analysis to the data and application at hand. The project will bring the two together by using concepts of heteroscedasticity, error analysis and forecast validation. It will also include parametric estimators which describe the dependency between meteorological weather telltale signs such as rainfall and temperature and thus describe extreme behaviour of electricity loads under various conditions.

Supervisors: Ted Shepherd (Lead Supervisor, Department of Meteorology, University of Reading), Sebastian Reich (Potsdam University and Department of Mathematics and Statistics at the University of Reading) and Martin Leutbecher (ECMWF)
Summary: it is a well-accepted fact in dynamic meteorology that the chaotic nature of atmospheric dynamics imposes an inherent finite limit of predictability (the celebrated ‘butterfly effect’). The quantification of forecast uncertainty is thus an important question. A mathematical theory behind this fact was first treated rigorously by Lorenz (1969), who used a perfect model to estimate the growth of initial errors.
However, atmospheric models cannot be perfect. Phenomena beyond the truncation scale have to be parameterised, leading to unavoidable model errors in the forecasts. An important class of such small-scale phenomena is convection. Increasing the horizontal resolution to allow explicit convection may therefore change the role of model error in the uncertainty of forecasts. The ultimate goal of the project is to study error growth properties of such a convection-permitting version of the ECMWF’s Integrated Forecasting System and compare them with the lower-resolution, convection-parameterised runs. Implications for ensemble forecasting will also be studied.
As an intermediate step, I have already been involved in a numerical investigation of the dependence of error-growth behaviour on initial and model errors in the context of the idealised α-turbulence model during my MRes work. This helps develop understanding and provides insight into diagnostic quantities of error growth in the full model. The PhD project will then expand on this by studying the relevant mathematical aspects (Lyapunov exponents, closure theory and model attractors) and extending the conclusions to the ECMWF model.
The project is expected to help provide guidance on the importance of representing initial and model errors of current numerical weather prediction (NWP) ensembles at scales in the dissipation range (i.e. less than ~100 km), and on how important it is to correctly model the background energy spectrum in the atmosphere in order to realistically simulate error growth in NWP ensembles.

Supervisors: David Ferreira (Lead Supervisor, Department of Meteorology, University of Reading) and Tobias Kuna (Department of Mathematics and Statistics, University of Reading).
Summary: for the last 3 million years, Earth's climate has been oscillating between interglacial states (like today's climate) and glacial states (when ice sheets covered North America and Scandinavia). Various observations establish a statistical link between the Glacial-Interglacial Cycles (GIC) and the Milankovitch cycles, the slow oscillations of Earth's orbital parameters - eccentricity (100 kyr), obliquity (41 kyr) and precession (23 kyr) - that perturb the incoming solar radiation on Earth. However, we do not have a well-established theory for this link, revealing a critical gap in our understanding of the climate system and raising questions about our ability to predict its future evolution. Paillard (2015) recently reviewed the existing theories for the GIC found in the literature, revealing the lack of consensus and even the absence of a leading hypothesis.
In the 80s, Nicolis, Benzi and collaborators proposed a novel hypothesis: Earth’s climate is in stochastic resonance (SR) with the Milankovitch cycles (e.g. Benzi, 2010). This requires that Earth’s climate is endowed with multiple stable states. Under this condition, matching between the characteristic timescale of unforced transitions (Kramers rate, controlled by the noise) and the external forcing timescale (Milankovitch) could generate regular transitions between the two states, even for a very weak forcing (i.e. too weak to force transitions deterministically). Though very appealing, the application of this idea ran into two obstacles:
1) complex ocean-atmosphere General Circulation Models (GCMs) (and so, one concluded, the real Earth) did not exhibit multiple stable states, and 2) the stable states of classical low-order models were not representative of the observed GIC. Ferreira et al. (2011), Rose et al. (2013) and Ferreira et al. (2017, in revision) made significant progress in this direction: firstly, they found multiple equilibria in a coupled GCM; secondly, they showed that the climate shift between states is comparable to that of the GIC; and finally, they developed a 1D Energy Balance Model (EBM) that mimics the behaviour of the coupled GCM. There are a few applications of stochastic resonance to climate problems, but most of these are concerned with Dansgaard-Oeschger events during glacial times rather than the GIC themselves. For GIC applications, one has to go back to the early work of Nicolis and Nicolis (1981) and Benzi (1982). However, they used 0D models which were physically unrealistic or employed ad-hoc formulations (i.e. these models did not exhibit stable states similar to the GIC, nor did they include feedbacks of the climate system).
The goal of this PhD is to revisit a stochastic resonance theory for the GIC in the light of the new results mentioned above. This will be the first attempt to apply SR to a two-layer 1D EBM which exhibits multiple stable states similar to the GIC. The latter point is crucial, as it addresses the caveats of earlier studies and will allow us to make connections with the observations and engage with the paleoclimate community, raising the potential impact of this work.
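For intuition only, the stochastic resonance mechanism can be demonstrated in the textbook double-well model of Benzi and collaborators (this is emphatically not the project's two-layer EBM; all parameter values below are illustrative):

```python
import numpy as np

# Overdamped particle in a double well V(x) = x^4/4 - x^2/2 with a weak
# periodic forcing A*cos(omega*t) and additive noise: the classic
# stochastic resonance setup. The forcing alone is too weak to push the
# state over the barrier; noise-assisted transitions can still occur and
# tend to align with the forcing when the Kramers rate matches its period.
rng = np.random.default_rng(7)
dt, n_steps = 0.01, 200_000
A, omega, noise = 0.12, 2 * np.pi / 500.0, 0.45

x = np.empty(n_steps)
x[0] = -1.0                                    # start in the left well
dW = noise * np.sqrt(dt) * rng.normal(size=n_steps - 1)
for k in range(n_steps - 1):
    t = k * dt
    drift = x[k] - x[k] ** 3 + A * np.cos(omega * t)
    x[k + 1] = x[k] + drift * dt + dW[k]

# Count well-to-well transitions via sign changes of the trajectory.
transitions = np.count_nonzero(np.diff(np.sign(x)) != 0)
print(f"transitions: {transitions}")
```

Sweeping the noise amplitude and measuring how strongly the transitions synchronise with the forcing reproduces the resonance curve; the project's question is whether an analogous matching operates between Milankovitch forcing and the multi-state EBM.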

Supervisors: John Methven (Lead Supervisor, Department of Meteorology, University of Reading), Hilary Weller (Department of Meteorology, University of Reading) and Tristan Pryer (Department of Mathematics and Statistics, University of Reading)
Supervisors: Prof. Sir Brian Hoskins FRS (Lead Supervisor, Department of Meteorology, University of Reading), Dr. Apostolos Voulgarakis (Department of Physics, Imperial College London) and Dr. Richard Everitt (Department of Mathematics and Statistics, University of Reading)
Summary: the overall global response of the atmosphere to different forcings has been researched substantially, but the global and regional responses to region-specific emissions have not been studied in as much depth. Long-lived pollutants (e.g. greenhouse gases (GHGs) such as CO2) become evenly spread over the atmosphere and cause homogeneous forcing worldwide. In contrast, short-lived pollutants, such as aerosols (e.g. sulphate, black carbon), and hence their associated forcing, are distributed inhomogeneously. Short-lived pollutants are important for decadal climate prediction as they have a strong impact on climate on such time-scales, while sustained changes in their emissions have implications for future peak temperatures under GHG mitigation scenarios, and for how those peaks scale across different regions of interest. The focus of the majority of previous studies has been on the mid-latitudes, where most of the developed countries lie (e.g. Europe, the U.S., China and East Asia). However, anthropogenic emissions from the tropics are also likely to change substantially in the near future, owing to rapidly developing countries, e.g. in Africa, so more research in this area is needed to predict the future global and regional climate response to those changing forcings. Additionally, emissions from fire activity in the tropics can simultaneously impact large-scale climate in ways that are largely unexplored. The primary aim of the project is to reduce a complex, computationally demanding climate model down to a statistical model which describes regional patterns in the climate response to regional forcings. Secondly, investigations of the physical mechanisms that link regional forcings with the global and regional responses will be pursued, such as, for example, the interactions of forcings with internal large-scale climate oscillations.

Supervisors: Jochen Broecker (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Martin Rasmussen (Department of Mathematics, Imperial College London), Tobias Kuna (Department of Mathematics and Statistics, University of Reading), Valerio Lucarini (Department of Mathematics and Statistics, University of Reading)
Summary: in the climate sciences, there is enormous interest in understanding the long-term average behaviour of the climate system. In the context of climate models, this behaviour is expressed intrinsically through concepts such as invariant measures, attractors, Lyapunov vectors and Lyapunov exponents, or, more generally, pullback attractors in the presence of time-dependent forcing (deterministic or stochastic). Further, the response of these objects to changes in external parameters is important, as this provides a mathematical framework in which climate change triggered by a change in external parameters or forcings can be investigated. Therefore, by studying the ergodic properties of, and response theory for, models in geophysical fluid dynamics, we expect not only a better understanding of the dynamical effects of climate change in general, but also of whether statistical properties derived under current climate conditions, such as downscaling approaches or forecast quality assessments, will remain valid under future climates. As opposed to conceptual (low-dimensional) climate models, these questions will be addressed in the context of more realistic atmospheric and ocean models such as the two-dimensional Navier-Stokes equations, classical two-layer quasigeostrophic (QG) models, or three-layer QG models in which the third layer represents an ocean.
The main goal of this project is to investigate the ergodic properties of medium-complexity QG models, in particular attractors, Lyapunov exponents, and pullback attractors (in the nonautonomous case). Further, the response of such models to changes in parameters will be considered (with and without stochastic forcing). In particular, we aim to establish results regarding the response to time-dependent perturbations, thereby putting important techniques in the climate sciences on a rigorous footing. These questions will be addressed in the context of the 2D Navier-Stokes equations as well as two-layer and three-layer QG models. As part of this, we expect that some of the following results, available for 2D Navier-Stokes, can be established for these more general models (potentially in modified form):
1. Finite dimensional attractors under constant forcing
2. Local Lyapunov exponents under constant forcing (accompanied by numerical experiments)
3. Pullback attractors under time dependent deterministic and random forcing
4. Ergodic invariant measures under random forcing
Invariant measures for nonautonomous dynamical systems have not yet been analysed in any depth. The project will contribute to the theory of measures on pullback attractors (with the response to time-dependent forcing as part of this). Depending on time and progress, the project will also look into bifurcations of such models, since bifurcations essentially imply the breakdown of linear response theory.

Supervisors: Maarten van Reeuwijk (Lead Supervisor, Department of Civil and Environmental Engineering, Imperial College London), Colin Cotter ( Department of Mathematics, Imperial College London) and Gabriel Rooney (Met Office)
Summary: one of the complexities of modelling urban climate comes from the heterogeneity of urban areas. Densely mixed building units of various sizes and shapes, complex road networks, paved surfaces, water and urban vegetation all interact with each other and with the atmospheric boundary layer. The present study investigates the role of urban surface heterogeneity in mean wind flow and vertical momentum transport over idealised urban areas. Large-Eddy Simulations (LES) with the DALES-Urban model are used to compare the mean flow and momentum transport of several simulation setups with similar building density and frontal aspect ratio but varying levels of complexity of building and street geometry. We compare simulations with different numbers of buildings, uniform and heterogeneous building heights, and varying building plan area and street geometry.

Supervisors: Xuesong Wu (Lead Supervisor, Department of Mathematics, Imperial College London) and John Methven (Department of Meteorology, University of Reading)
Summary: weather and climate are fundamentally underpinned by complex processes taking place in the troposphere and stratosphere. The troposphere is extremely active, and a variety of atmospheric waves are generated there through different mechanisms, including topographic forcing, heat sources and shear instabilities. These waves propagate upward to influence the stratosphere: primarily, the momenta carried by the waves are deposited there, changing the zonal mean flow and causing interannual variability. However, rather than being passive, the stratosphere also affects tropospheric dynamics; that is, there is a two-way coupling between the stratosphere and troposphere. It is now recognized that understanding and accounting for this coupling is crucial for improving weather and climate predictions. Among the numerous mechanisms proposed, an interesting and important one is mutual interaction through Rossby waves: the troposphere radiates Rossby waves, which are reflected by the stratosphere back to the troposphere, simultaneously influencing the radiation itself. Strong evidence for this has been provided by careful analysis of observational data.
The aim of the present project is (a) to investigate first the key fundamental aspects of troposphere-stratosphere interactions, namely the generation, reflection and breakdown of Rossby waves as well as their back action on the troposphere, (b) to integrate these constituent fundamental processes into a unified framework, thereby constructing a self-consistent, physics-based model for troposphere-stratosphere coupling, and (c) to diagnose how processes such as Rossby wave reflection and its interference with upward radiation influence the predictability of the flows in the stratosphere and troposphere. Troposphere-stratosphere coupling involves fluid physics on different scales, and it will be tackled with the mathematical tools of matched asymptotic expansions and multiple-scale techniques.

Supervisors: Matthew Piggott (Lead Supervisor, Department of Earth Science & Engineering, Imperial College London), David Ham (Department of Mathematics, Imperial College London) and Hilary Weller (Department of Meteorology, University of Reading)
Summary: Mesh adaptivity centres around the manipulation of the computational meshes used in solving differential equations, in such a way that refinement is made where the error in the solution is estimated to be relatively large and coarsening is made where it is small. As such, it is possible to optimise the quality of the solution obtained, within bounds on mesh size enforced by computational constraints, thereby obtaining both an accurate and efficiently computed solution to the problem at hand. Efficiently achieving an accurate solution is an important factor in various fields related to the Mathematics of Planet Earth, including tracking the dispersal of oil spills, storm surge modelling and the simulation of ring formation from currents such as the Agulhas Current in the South Atlantic.
Mesh adaptation is traditionally achieved by either mesh refinement (h-adaptivity) or mesh relocation (r-adaptivity), two approaches that have not yet been successfully coupled in such a way that they do not work in conflict with one another. This PhD project will seek to combine the two to form a hybrid (hr) approach, allowing refinement and relocation to be implemented together. Through exploring the avenues opened by mesh adaptivity, there is plenty of scope for the advancement of computational models in the Planet Earth related application areas listed above, amongst many more.
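To illustrate the basic h-adaptivity idea (refine where an error indicator is large, coarsen where it is small), here is a deliberately simple one-dimensional sketch; the gradient-jump indicator, the tolerances and the bisection rule are illustrative assumptions, not the project's actual estimator:

```python
import numpy as np

def adapt_mesh_1d(x, f, refine_tol=0.05, coarsen_tol=0.005):
    """One pass of h-adaptivity on a 1D mesh.

    Uses the jump in the cell-averaged gradient of f as a crude
    per-cell error indicator: cells with a large indicator are
    bisected, neighbouring pairs with small indicators are merged.
    (Illustrative sketch only; real estimators are norm-based.)
    """
    grad = np.abs(np.diff(f) / np.diff(x))           # gradient per cell
    indicator = np.abs(np.diff(grad, prepend=grad[0]))

    new_x = [x[0]]
    i = 0
    ncells = len(x) - 1
    while i < ncells:
        if indicator[i] > refine_tol:
            # h-refinement: bisect this cell.
            new_x.append(0.5 * (x[i] + x[i + 1]))
            new_x.append(x[i + 1])
            i += 1
        elif (i + 1 < ncells and indicator[i] < coarsen_tol
              and indicator[i + 1] < coarsen_tol):
            # Coarsening: merge two quiet neighbouring cells.
            new_x.append(x[i + 2])
            i += 2
        else:
            new_x.append(x[i + 1])
            i += 1
    return np.array(new_x)

# Example: a field with a sharp front at x = 0.5 attracts refinement there,
# while the flat regions away from the front are coarsened.
x = np.linspace(0.0, 1.0, 21)
f = np.tanh(40 * (x - 0.5))
x_new = adapt_mesh_1d(x, f)
```

In an hr-hybrid scheme of the kind the project targets, a relocation (r-adaptivity) step would additionally move the surviving nodes towards the front before or after each refinement pass.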

Supervisors: Arnaud Czaja (Lead Supervisor, Department of Physics, Imperial College London), David Ferreira (Department of Meteorology, University of Reading) and Greg Pavliotis (Department of Mathematics, Imperial College London)
Summary: A compilation of hydrographic surveys over the 20th century and the recent development of the ARGO float program have revealed significant variability of the oceanic heat content in all basins. Some of these changes must be related to the heat-content increase in response to the accumulation of CO2 and other long-lived greenhouse gases in the atmosphere, but some must also reflect fluctuations intrinsic to the coupled ocean-atmosphere-cryosphere
system. Due to limited observations and the costly computing requirements to study deep ocean heat uptake, our understanding of these fluctuations is poor.
The goal of this PhD project is to step back and conduct a process study of heat content change, or heat uptake, by different elements of the global ocean circulation: wind-driven gyres (the global ocean), buoyancy-forced overturning circulation in a narrow basin (North Atlantic), and wind- and buoyancy-forced circulation in a channel geometry (Southern Ocean). The aim is to develop simple (linear) models of heat uptake, i.e., models in which knowledge of the pre-industrial oceanic state allows for a prediction of heat uptake in response to anthropogenic forcing. This prediction will then be applied and compared to
observations.

Supervisors:
Pavel Berloff (Imperial College London, Department of Mathematics)
David Ferreira (University of Reading, Department of Meteorology)
PhD project summary: This Project is about understanding the fluid dynamics of the Gulf Stream, which is a powerful and important ocean current. The novelty lies in extending existing theories from the multi-layer quasigeostrophic to the multi-layer shallow-water approximation. The challenge is (a) in computing eddy-resolving multi-layer shallow-water solutions of the midlatitude ocean gyres with the western boundary currents and their eastward jet extensions, and (b) in interpreting these solutions with a theory of the nonlinear eddy backscatter. The latter requires mathematical analyses of the ocean circulation responses to transient forcing functions that represent mesoscale eddy stresses.
The mighty Gulf Stream current originates in the Caribbean basin, then follows the eastern coast of the United States up to Cape Hatteras, where it separates from the coast. After separation, the Gulf Stream continues as a north-eastward jet extension that carries relatively warm water across the North Atlantic towards Europe. The Gulf Stream can be viewed as a giant, ribbon-like ``river in the ocean'' that meanders, sheds enormous vortices called ``rings'', radiates complicated planetary waves and stirs the North Atlantic water masses. Properties of the Gulf Stream flow system are heavily controlled by the Earth's rotation and sphericity, by the water density distribution across the ocean, and by the shapes of the coasts and bottom topography. Because of all these factors, dynamical understanding of the Gulf Stream's structure, spatio-temporal variability, transport and other properties remains vague, despite more than 50 years of vigorous research on this challenging topic.
The Gulf Stream problem involves important flow interactions across an enormous range of scales, from hundreds of meters (internal gravity waves) to thousands of kilometers (ocean basin scale), and from hours to tens of years (meridional overturning and organized decadal variability). Such a problem can be most efficiently dealt with by scientific computations guided by real observations. Nowadays, theoretical research on Gulf Stream dynamics clusters around two main directions. On the one hand, primitive-equation (i.e., with the minimal number of approximations made) oceanic general circulation models are configured and run in realistic set-ups and eddy-resolving regimes to provide some pioneering solutions. Unfortunately, these solutions are so expensive that they cannot be used for systematic studies of the underlying dynamical processes. On the other hand, theoreticians studying the underlying processes tend to work with overly simplified dynamical models (e.g., the classical quasigeostrophic (QG) model) that are good at representing only some physical processes. Thus, the time is ripe for upgrading the existing theories of the Gulf Stream with more advanced but still fairly idealized models.
This Project will focus on an isopycnal (i.e., multi-layer shallow-water) primitive-equation model with just a few layers of constant density; this is a perfect intermediate set-up between ``theoretical'' QG and ``comprehensive'' primitive-equation models. The proposed analyses will be organized into two interconnected work programs.
The first work program will focus on high-Reynolds-number, eddy-resolving solutions of the wind- and buoyancy-driven ocean gyres, which include the idealized Gulf Stream and its eastward jet extension with adjacent recirculation zones. The corresponding solutions will be analyzed from the following angles: flow decomposition into large-scale and eddy components; dynamical balances and nonlinear stresses; statistical description of the flow fluctuations (PDFs, spatio-temporal correlations, Empirical Orthogonal Functions, wavenumber-frequency spectra); nonlinear spectral interactions across scales; energy, enstrophy and potential vorticity analyses (production, propagation, conversion, dissipation); and linear-dynamics analysis with local and nonlocal linearizations, stability, and impulse responses.
The second work program will focus on the fundamentals of the underlying eddy backscatter mechanism that supports the eastward jet and its recirculation zones. The central piece will involve dynamical analysis of locally forced multi-layer shallow-water solutions represented by linear and nonlinear Green's functions, as well as extensions to various background flows and distributed stochastic forcings.
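The flow decomposition into large-scale and eddy components, and the eddy stresses whose divergence drives the backscatter, can be sketched as follows (a toy illustration on synthetic data, not the project's shallow-water diagnostics):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic velocity snapshots u(t, y, x): a steady zonal jet plus
# transient "eddy" noise (a stand-in for eddy-resolving model output).
nt, ny, nx = 200, 32, 32
y = np.linspace(-1, 1, ny)[:, None]
jet = 1.0 / np.cosh(5 * y) ** 2 * np.ones((1, nx))
u = jet[None, :, :] + 0.1 * rng.standard_normal((nt, ny, nx))
v = 0.1 * rng.standard_normal((nt, ny, nx))

# Reynolds decomposition: time mean plus eddy (fluctuating) part.
u_mean = u.mean(axis=0)
u_eddy = u - u_mean
v_eddy = v - v.mean(axis=0)

# Eddy momentum stress (the Reynolds stress component <u'v'>),
# the quantity acting back on the mean jet in the backscatter picture.
uv_stress = (u_eddy * v_eddy).mean(axis=0)
```

For uncorrelated synthetic noise the stress is near zero; in the actual gyre solutions its spatial structure around the eastward jet is precisely what the backscatter analysis targets.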

Supervisors:
Almut Veraart (Imperial College London, Department of Mathematics)
Olivier Feron (EDF R&D OSIRIS, Generation & Market Risk Management Department)
PhD project summary: Weather variables have a huge impact on energy generation based on renewables such as wind and solar, which are becoming increasingly important in the pursuit of sustainable economic growth. This project aims at constructing stochastic models for renewable energy production data based on appropriate weather variables, tailored to operational planning and risk management purposes. In addition, efficient methods for statistical inference will be developed to ensure the applicability of the new models. The scientific contributions of this project will encompass new stochastic models for wind and solar, methodological developments in statistical inference for such models, as well as detailed empirical studies.
Climate change threatens the economic prosperity of future generations, making it urgent to strive for sustainable economic growth. This is in fact one of the key priorities within the UK Sustainable Development Strategy, which was drawn up in response to Agenda 21 of the United Nations. Mathematics and Statistics play a key role in tackling this challenge and can deliver the reliable tools for risk assessment that are urgently needed.
The ultimate objective of this PhD project is to develop new technologies in stochastic modelling and statistical inference to reliably quantify risk and uncertainty related to renewable sources of energy production such as wind and solar. While there is a plethora of weather modelling and forecasting methodologies around, see e.g. for a recent survey, such models are typically not tailored to applications in operational decision making which limits their practical appeal in this context.
This project aims to tackle this challenge through a collaborative effort with EDF who will provide expert advice from the perspective of the world-leading power company.
The project will focus on developing statistical models for renewable sources of energy, with a particular focus on wind and solar energy production data. Renewable sources of energy are often regarded as notoriously unreliable due to their highly random behaviour, which is difficult to predict. However, in order to achieve sustainable economic growth, growing investments in renewable sources of energy are needed, and the question big energy suppliers are facing is which investment decision will result in a reliable energy supply for the population while minimising risk at the same time. Given the current situation in Europe, it is particularly interesting to find the ideal mix between wind and solar energy production. In order to help with such investment decisions, energy providers need to know the corresponding forward prices of electricity and how they depend on the supply and the variations of renewable sources of energy. In this project we will hence focus on deriving suitable stochastic models for wind and solar (and potentially other renewables) which are characterised by two criteria. First, they will need to be flexible enough to accurately describe the random evolution of renewables over time, while at the same time allowing for efficient calibration. Second, they will need to be analytically tractable, such that forward prices of electricity generated from renewable sources of energy can be computed in explicit form, which will allow operational decision making based on such models.
Motivated by recent work, see e.g. [3], we will develop suitable continuous-time stochastic process models which can account for non-stationarities in terms of strong seasonal behaviour and trend, stochastic volatility and the existence of jumps. A first starting point will be the class of so-called multivariate volatility-modulated Volterra processes, see e.g. [2] for the univariate case. The particular modelling challenge will be to allow for strong (negative) dependencies between wind and solar and also to find a suitable mechanism, likely via a regime-switching approach, to model the impact of renewables on the corresponding electricity price.
When it comes to calibration, maximum likelihood estimation will be difficult in the rather complex models mentioned above. Hence we will instead focus on quasi-maximum-likelihood as well as EM-type estimation methods to deal with latent stochastic volatility and stochastic intensities in the underlying jump processes.
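As a toy illustration of the continuous-time modelling idea, the sketch below simulates a seasonal mean plus a mean-reverting Ornstein-Uhlenbeck deviation with an Euler-Maruyama scheme; all parameter values are illustrative assumptions, and the project's volatility-modulated Volterra models are considerably richer:

```python
import numpy as np

def simulate_seasonal_ou(n_days=365, dt=1/24, kappa=2.0, sigma=0.5, seed=0):
    """Euler-Maruyama simulation of a toy wind-speed index:
    X_t = s(t) + Y_t, with s(t) a deterministic seasonal mean and
    dY_t = -kappa * Y_t dt + sigma dW_t a mean-reverting OU deviation.
    (A deliberately simplified stand-in for the models discussed above.)
    """
    rng = np.random.default_rng(seed)
    n = int(round(n_days / dt))           # hourly steps over n_days
    t = np.arange(n) * dt
    seasonal = 8.0 + 2.0 * np.sin(2 * np.pi * t / 365.25)  # annual cycle
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = (y[i-1] - kappa * y[i-1] * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return t, seasonal + y

t, x = simulate_seasonal_ou()
```

Because the OU part is Gaussian and Markovian, conditional forecasts and forward-price expressions stay in closed form, which is the tractability criterion named above.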

Supervisors:
Nick Jones (Imperial College London, Department of Mathematics)
Robert Ewers (Imperial College London, Department of Life Sciences)
PhD project summary: Public sentiment regarding climate change is an important factor in enhancing or slowing progress towards climate goals. Recent work suggests that in the US, climate change has actually had a beneficial effect with regard to the weather characteristics Americans perceive as important. Climate sentiment itself is a stochastic process running on a social network: large social media studies have observed climate sentiment, but the characteristics of the participants (GPS, socio-economic characteristics) are only partly observed, as is the social network on which the sentiment dynamics unfolds. The student will investigate the control and forecasting of climate sentiment on spatially embedded networks with partly observed node locations and unobserved links. Models where individuals are influenced not only by their neighbours but also by the climate field associated with their (partially observed) node location will be considered. The work will help create understanding of how climate sentiment unfolds and provide limits on how it can be influenced.

Supervisors:
Darryl Holm (Imperial College London, Department of Mathematics)
Colin Cotter (Imperial College London, Department of Mathematics)
Dan Crisan (Imperial College London Mathematics Department)
Valerio Lucarini (University of Reading, School of Mathematical and Physical Sciences)
Ted Shepherd (University of Reading, Department of Meteorology)
PhD project summary: In next-generation weather and climate models, stochastic parameterisation should be an important element in providing reliable estimates of model uncertainty. A fundamental conclusion of Berner, Jung & Palmer [2012] is that "a posteriori addition of stochasticity to an already tuned model is simply not satisfactory. This in turn suggests that stochasticity must be incorporated at a very basic level within the design of physical process parameterisations and improvements to the dynamical core." This project will answer the challenge of a priori introduction of stochasticity for GFD models at various levels of approximation, by using the systematic approach of Holm [2015].

Supervisors:
Dan Crisan (Imperial College London, Department of Mathematics)
Peter Jan van Leeuwen (University of Reading, Department of Meteorology)
Roland Potthast (DWD)
PhD project summary: Climate change is one of the most challenging problems that we are currently facing, and fundamental research in this area is therefore of crucial importance. Large-scale circulatory motions in the oceans and atmosphere involve complex geophysical phenomena that have a determinant influence on climate dynamics. Although it seems impossible to model reality in its entire complexity, the introduction of stochasticity into ideal fluid dynamics appears to be a realistic tool for modelling the sub-grid scale processes that cannot otherwise be resolved. We investigate a data assimilation problem for an infinite-dimensional model reflecting the motion of an incompressible fluid below a free surface when the vertical length scale is much smaller than the horizontal one. From a mathematical point of view this approach involves a stochastic filtering problem with an infinite-dimensional signal modelled by a stochastic partial differential equation and a large, but finite, observation process. The signal describes the evolution of a rotating two-dimensional shallow-water system governed by the primitive equations. Even though the model is highly simplified, in the sense that it does not have the full stratification of the real atmosphere and involves only a single layer of incompressible fluid, the motions it supports have close analogues in the real atmosphere and ocean: it allows for processes such as gravity and Rossby waves, eddy formation and geophysical turbulence. We will study the influence of missing physics via model noise, while at the same time restricting the evolution by conditioning it on observations of an underlying true system. The observations will be either Eulerian or Lagrangian. The objective of the research is to produce a quantitative and qualitative analysis of the posterior distribution of the state, given the data. The relevance of the results will be investigated by exploring first steps towards their realization in an operational framework.

Supervisors:
Nick Bingham (Imperial College London, Department of Mathematics)
Nick Klingaman (University of Reading, Department of Meteorology)
Emma McCoy (Imperial College London, Department of Mathematics)
PhD project summary: Recent years have seen the (contested) emergence of a new flavour of El Nino, in the central Pacific. This has led to the suggestion that the atmosphere-ocean coupling in the tropical Pacific has changed: how can we tell? This project will utilise the sophisticated mathematical techniques of Functional Data Analysis to answer questions about the El Nino Southern Oscillation (ENSO). In this set-up, data points are seen as observations of a curve. Although this may appear to be a more complicated setting in which to work, it gives us access to powerful mathematical tools that improve the analysis. The aim of this project is to develop this theory further, and to apply it to answer questions about ENSO and some related climate processes.

Supervisors:
Almut Veraart (Imperial College London, Department of Mathematics)
Ted Shepherd (University of Reading, Department of Meteorology)
PhD project summary: Multivariate spatio-temporal stochastic processes are of key importance in various applications, including in particular the atmospheric and environmental sciences. The aim of this project is to develop the new class of so-called multivariate ambit fields (MAFs), which constitute a flexible yet analytically tractable class of multivariate random fields, and to derive suitable stochastic simulation and statistical inference tools tailored to applications in the environmental sciences.
The first part of the project consists of deriving the theoretical properties of MAFs, which are defined via kernel smoothing of a multivariate volatility-modulated Levy basis; see [BNBV2015, CK2015]. As such, MAFs are generally non-Gaussian and allow for stochastic variability in time and potentially also in space. We will in particular study different approaches to parameterising the cross-sectional dependence structure in a parsimonious way.
The second part of the project will focus on developing efficient simulation schemes for MAFs. We will extend methods based on Fast Fourier techniques which are powerful in the univariate set-up, see [NV2016], and quantify the corresponding numerical approximation error.
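The FFT-based simulation idea can be sketched in the univariate case as a kernel-smoothed, volatility-modulated noise; here a Gaussian stand-in replaces the Levy basis, and the gamma kernel and all parameters are illustrative assumptions:

```python
import numpy as np
from numpy.fft import rfft, irfft

# A univariate ambit-type process on a time grid:
#   X(t) = sum_s g(t - s) * sigma(s) * dL(s),
# realised as an FFT-based (zero-padded, hence linear) convolution of a
# gamma kernel with volatility-modulated Gaussian increments.
rng = np.random.default_rng(5)
n, dt = 4096, 0.01
t = np.arange(n) * dt
g = t ** 0.4 * np.exp(-2.0 * t)        # gamma kernel t^{nu-1} e^{-lambda t}
# Stochastic volatility: exponential of a Brownian path (toy choice).
sigma = np.exp(0.3 * np.cumsum(rng.standard_normal(n)) * np.sqrt(dt))
dL = rng.standard_normal(n) * np.sqrt(dt)  # Gaussian stand-in for Levy noise
# Zero-pad to length 2n so the circular FFT convolution equals the
# linear convolution on the first n points.
X = irfft(rfft(g, 2 * n) * rfft(sigma * dL, 2 * n))[:n]
```

The FFT route costs O(n log n) versus O(n^2) for direct convolution, which is the efficiency gain the univariate schemes of [NV2016] exploit and whose approximation error the project would quantify.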
The third part of the project consists of developing statistical inference tools for MAFs. We will compare composite-, quasi- and simulated likelihood methods in detailed simulation studies.
Finally, we will carry out various data examples. For instance, we will focus on modelling temperature and pressure over a wider region in a bivariate model based on MAFs, see e.g. [GKS2010] for a study of the corresponding forecast errors. Also, we will tailor MAFs to obtain a joint model for solar and wind power production in the European energy market.

Supervisors:
Sarah Dance (University of Reading, Department of Meteorology)
Joanne Waller (University of Reading, Department of Mathematics and Statistics)
Amos Lawless (University of Reading, Department of Mathematics and Statistics)
Nancy Nichols (University of Reading, Department of Mathematics and Statistics)
PhD project summary: Numerical weather forecasts are obtained by evolving forward the current atmospheric state using computational techniques that solve the equations describing atmospheric motions and other physical processes. The current atmospheric state is estimated by a mathematical technique known as data assimilation. Data assimilation blends previous forecasts with new atmospheric observations, weighted by their respective uncertainties. The uncertainty in the observations is not well understood, and currently up to 80% of observations are not used in the assimilation because these uncertainties cannot be properly quantified and accounted for. This project will investigate mathematical methods for approximating observation uncertainty that preserve observation information content while being sufficiently efficient for practical use in operational weather prediction.
In Numerical Weather Prediction (NWP), large computational models simulate the complex nonlinear equations of motion of the atmosphere. Forecast accuracy is still constrained by uncertainties in the initial conditions, known as the analysis. Variational data assimilation techniques are often used to compute the analysis by minimizing a nonlinear cost function. This is essentially a measure of the distance of the solution from the forecast state (the background) and from the available observations over a fixed time window, weighted by the uncertainties in the data. Thus, for good results, accurate specification of the forecast and observation error distributions is vital.
It is becoming increasingly important to use observation data from remote-sensing instruments (e.g., satellites and ground-based radar) providing detailed information about the current state of the atmosphere on fine scales. Although it is well known that these data have spatially correlated errors, data assimilation algorithms have typically treated the errors as white noise. This approximation is made because the details of the correlation structure are often unknown; it also allows a simplification of the calculations and a reduction in computational cost. Unfortunately, these measures do not fully exploit the observations, and significant information may be lost in the assimilation.
More recently, we have shown that it is possible to estimate observation error correlations. A proper treatment of observation error correlations results in more accurate analyses and improvements in forecast skill. However, estimates of observation error correlations are often noisy, and it is unclear how best to regularize them so as to ensure speedy convergence of the iterative scheme used to minimize the cost function while preserving the maximum amount of observation information content.
This PhD project will investigate methods of covariance regularization that preserve observation information content. The first stage of the project will be to consider existing covariance regularization schemes applied to some typical noisy estimated observation error covariance matrices and compare their effects. Metrics to consider in the comparison include observation information content, analysis accuracy and minimization convergence speed. This initial work is expected to provide a basis for the development of a new regularization method or variational assimilation pre-conditioning technique. The project lies within numerical linear algebra and optimization and will consist of theoretical work supported by numerical computations in an idealized framework. There will also be the opportunity to work with real observation and model data from the Met Office and gain an understanding of practical operational issues.
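One simple regularization scheme of the kind the project would compare is to "recondition" a noisy covariance estimate by flooring its eigenvalues so the condition number is capped. A hedged sketch on synthetic data (the exponential correlation form, noise level and condition-number target are all illustrative assumptions):

```python
import numpy as np

def recondition_min_eig(R, kappa_max=100.0):
    """Regularise a noisy symmetric covariance estimate by flooring its
    eigenvalues so the condition number does not exceed kappa_max
    (a 'minimum eigenvalue' reconditioning; one of several candidate
    schemes to compare against, e.g., ridge-type regularization).
    """
    vals, vecs = np.linalg.eigh(R)
    floor = vals[-1] / kappa_max          # smallest admissible eigenvalue
    vals_new = np.maximum(vals, floor)
    return (vecs * vals_new) @ vecs.T

# Example: exponentially correlated observation errors contaminated by
# symmetric sampling noise, which degrades the conditioning.
rng = np.random.default_rng(2)
n = 40
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
R_true = np.exp(-dist / 5.0)              # correlated observation errors
noise = 0.05 * rng.standard_normal((n, n))
R_est = R_true + 0.5 * (noise + noise.T)  # symmetric noisy estimate
R_reg = recondition_min_eig(R_est, kappa_max=100.0)
```

The trade-off the project would quantify is that flooring eigenvalues improves minimization convergence (conditioning) while discarding some of the small-eigenvalue structure that carries observation information content.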

Supervisors:
Pavel Berloff (Imperial College London, Department of Mathematics)
Igor Shevchenko (Imperial College London, Department of Mathematics)
Erik van Sebille (Imperial College London, Grantham Institute/Dept. Physics)
PhD project summary: Ocean currents and eddies constantly transport and stir huge amounts of water masses and their properties. An efficient way of detecting these processes is by releasing neutral floats and tracking their trajectories. An alternative diagnostic is provided by tracking passive tracer concentrations. On the one hand, the observations show the enormous complexity of the eddy-induced transport, which turns out to be not only spatially inhomogeneous and anisotropic but also significantly non-diffusive. On the other hand, general circulation models routinely approximate this transport as homogeneous and isotropic diffusion. This dire situation ensures great potential not only for upgrading the diffusion approach, but also for developing new, physically consistent and much more accurate, simple models of material transport. The goal of this Project is to investigate material transport and stirring properties in idealized but dynamically consistent and structurally rich eddying-flow simulations, and to use these analyses for developing a new generation of simple transport models based on other principles.
The Project will involve simulations of several types of idealized geostrophic turbulence.
Subsequent kinematic analyses of their transport and stirring properties will be used for developing simple stochastic and deterministic models of the transport and stirring for practical applications. The following research agenda is planned:
– Statistical analyses of ensembles of Lagrangian particles: single- and two-particle dispersion laws, Lagrangian autocorrelations, Lyapunov exponents, etc.;
– Estimates of effective diffusivities, quantification of non-diffusive behaviours, transport barriers and surf zones;
– Inhomogeneous and anisotropic diffusive modelling of the transport;
– Direct stochastic modelling of transport (high-order Markov models, Levy flights, fractional Brownian motion, etc.);
– Indirect stochastic modelling of transport by random kinematics;
– Dynamical underpinning of kinematic transport and stirring properties.
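The single-particle dispersion and Lagrangian autocorrelation diagnostics listed above can be illustrated on synthetic float trajectories (an exponentially correlated random velocity as a stand-in for eddying flow; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic float trajectories: 500 particles with exponentially
# correlated (AR(1)) velocities of decorrelation time tau.
n_particles, n_steps, dt, tau = 500, 400, 1.0, 10.0
alpha = np.exp(-dt / tau)
u = np.zeros((n_particles, n_steps))
u[:, 0] = rng.standard_normal(n_particles)       # stationary start
for k in range(1, n_steps):
    u[:, k] = (alpha * u[:, k-1]
               + np.sqrt(1 - alpha**2) * rng.standard_normal(n_particles))
x = np.cumsum(u * dt, axis=1)

# Single-particle (absolute) dispersion: <(x(t) - x(0))^2> over the ensemble,
# ballistic (~t^2) at short times, diffusive (~t) at long times.
dispersion = ((x - x[:, :1]) ** 2).mean(axis=0)

# Lagrangian velocity autocorrelation; its time integral gives the
# effective diffusivity in Taylor's (1921) single-particle theory.
lags = np.arange(50)
autocorr = np.array([
    (u[:, :n_steps - l] * u[:, l:]).mean() for l in lags
]) / (u * u).mean()
```

Departures of the measured dispersion from these diffusive scalings (sub- or super-diffusion) are exactly the non-diffusive behaviours the project would quantify.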

Supervisors:
Jochen Brocker (University of Reading, Department of Mathematics and Statistics)
Tobias Kuna (University of Reading, Department of Mathematics and Statistics)
PhD project summary: The term data assimilation refers to methods whereby noisy observations of a physical system are incorporated into a dynamical model of that system, for instance a PDE, in order to reconstruct the current state of the system or even entire past orbits. Filters are a subclass of data assimilation algorithms which provide an estimate of the current state based on present and past (but not future) observations. Further, filters are often designed to work recursively, using new observations to update previous estimates.
Although optimal filters exist in theory, they are essentially infinite dimensional in nature which renders application of these hypothetical algorithms infeasible in practice. A very large number of suboptimal but practically feasible filtering approaches have been proposed, most of them being in one way or another reminiscent of the Kalman Filter, which is the optimal filter in the case of linear systems with Gaussian perturbations. A better understanding of these filters is of utmost importance in all fields of applications such as weather forecasting, but it also leads to interesting mathematical questions related to stochastic dynamical systems and probability theory.
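For reference, the Kalman filter recursion mentioned above can be sketched in a few lines (a generic textbook implementation on an arbitrary toy system, not tied to any particular geophysical model):

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One recursive update of the Kalman filter, the optimal filter for
    linear dynamics with Gaussian noise:
      Forecast:  m_f = A m,  P_f = A P A^T + Q
      Analysis:  K = P_f H^T (H P_f H^T + R)^{-1}
                 m_a = m_f + K (y - H m_f),  P_a = (I - K H) P_f
    """
    m_f = A @ m
    P_f = A @ P @ A.T + Q
    S = H @ P_f @ H.T + R                  # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)       # Kalman gain
    m_a = m_f + K @ (y - H @ m_f)
    P_a = (np.eye(len(m)) - K @ H) @ P_f
    return m_a, P_a

# Twin experiment on a 2-state linear system: the filter forgets a bad
# initial guess and tracks the (partially observed) truth.
rng = np.random.default_rng(4)
A = np.array([[0.9, 0.1], [0.0, 0.95]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                 # only the first state is observed
R = np.array([[0.1]])
truth = np.zeros(2)
m, P = np.array([5.0, -5.0]), 10.0 * np.eye(2)
for _ in range(100):
    truth = A @ truth + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ truth + rng.multivariate_normal(np.zeros(1), R)
    m, P = kalman_step(m, P, y, A, Q, H, R)
```

The three questions below, stability of the internal parameters, forgetting of the initialisation, and accuracy relative to the optimal filter, are precisely what becomes nontrivial when such a recursion is approximated for nonlinear, infinite-dimensional systems.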
The analysis of filtering algorithms (optimal or suboptimal) essentially evolves around the following three core questions:
1. Is the filter stable in the sense that the internal parameters of the filter stay within the relevant range, or do they diverge to machine infinity in finite time with large probability (see [4])?
2. Is the filter stable in the sense that the initial values of certain parameters, such as the initial estimate of the state, become irrelevant as time progresses (see [2,3])?
3. What is the accuracy of the filter, either in absolute terms or relative to other algorithms, for instance the optimal filter (see [2,3])?
In the context of geophysically relevant models such as the Navier-Stokes equations and their various relatives, all three questions have been studied in the literature.
The accuracy of certain filtering algorithms (Question 3) has also been the subject of the MPECDT MRes project ``Data assimilation in the 2-dim incompressible Navier-Stokes Equation'', which is a predecessor to this PhD project. That project (and various publications) exploits a remarkable fact shared by many dissipative PDEs relevant in geophysical fluid dynamics, namely that the projection of the solution onto a suitable finite-dimensional space will eventually determine the whole (infinite-dimensional) solution.
As far as we can see, however, performance analysis has mainly focussed on the filters' ability to estimate the current state. The potential to provide useful a posteriori error information has often been mentioned, and was in fact a major driver behind the development of various filter variants, but there is not much in terms of rigorous analysis of whether this information is in fact reliable. Moreover, it seems that there is no generally accepted methodology whereby such a question could be addressed (see however [1]). The aim of this project is to further develop such a methodology and contribute to filling this gap.

Supervisors:
Matthew Piggott (Imperial College London, Earth Science & Engineering)
Colin Cotter (Imperial College London, Department of Mathematics)
Simon Funke (Simula Research Laboratory)
PhD project summary: The regularity of the tides makes them attractive for renewable power production, with the UK at the forefront of a fledgling industry seeking to realise this potential. Advanced numerical tools are required to optimise array designs to maximise power yield and minimise costs and impacts. Substantial progress on the development of such tools has been made in recent years; this project seeks to address some of the outstanding requirements for their real-world application. This includes the implementation of turbulence parameterisation methods, the inclusion of sophisticated financial cost models, and the development of individual turbine control capabilities.
The regularity of the tides makes them very attractive for renewable power production. At multiple locations around the UK the local acceleration of tidal currents provides suitable sites for the installation of large arrays of tidal turbines. However, these arrays will only be developed if they can be shown to be economically viable, feasible from an engineering perspective, and to have acceptable environmental impacts. Optimisation of the array size, location and precise configuration is essential to achieve this (Vennell et al., 2015), and sophisticated numerical models are required to assist developers in these tasks (Funke et al., 2014; Culley et al., 2016).
To date our work in this area has focussed on the use of automatic code-generation techniques (Rathgeber et al., 2016) for the numerical solution of the depth-averaged shallow water equations and, uniquely, the development of associated adjoint-based optimisation algorithms (Farrell et al., 2013; Funke et al., 2014). A current PhD project is working towards the inclusion of uncertainty quantification techniques within both power yield estimates and ‘robust’ array optimisation algorithms, and another is considering the trade-off between maximising power or profit within an array design and potential environmental impacts.
The focus of this project will be to take the next steps towards the development of a so-called ‘second-generation’ design tool of value to industrial users. In particular, we will address the fact that little effort to date has gone into the parameterisation of subgrid-scale processes, including turbulence. This is of vital importance for reliable estimates of how turbines interact with one another within an array and respond to and alter the ambient environment. This effort will build upon an MRes project considering ‘horizontal large eddy simulation (HLES)’ methods, and extend to three-dimensional RANS and LES methods using the numerical capabilities developed in a parallel ongoing EPSRC project, “A new simulation and optimisation platform for marine technology”.
Secondly, we will seek to incorporate improved economic models in order to investigate what impact optimising different economic properties could have on the array, and how these change based upon the priorities of the stakeholder concerned. For example, the public may simply want more power generated cheaply, but investors may want to maximise return on investment, or prefer a solution that minimises capital cost and therefore risk at the expense of higher generation.
Thirdly, while design of the array is important, there is still a key role for a computational design tool in the optimisation of the ‘operation’ of the array once installed. Here control theory will be utilised to investigate the control of individual turbines (e.g. dynamic control of blade pitch to alter the relationship between flow speed and power production) in order to operate them in such a way that mitigates the effect of nearby device downtime, and potentially minimises the need for maintenance through reducing structural loads. Estimates of, and reductions to, the significant costs associated with device maintenance and downtime in turn feed back into the second topic.
Each development will be tested on an existing suite of idealised coastal ocean configurations, as well as realistic sites with arrays currently under development.
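As a toy illustration of gradient-based micro-siting, the sketch below maximises the power of a small line of turbines under a crude exponential wake-deficit model. The power and wake models, and all parameter values, are illustrative assumptions rather than those used in the actual design tools, which compute gradients with the adjoint method rather than by finite differences.

```python
import numpy as np

def array_power(x, u0=2.5, wake_strength=0.4, wake_decay=0.1):
    """Toy power model: each turbine sees the free stream u0 reduced by
    exponentially decaying wakes of all turbines upstream of it."""
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(len(x)):
        deficit = sum(wake_strength * np.exp(-wake_decay * (x[i] - xj))
                      for xj in x if x[i] > xj)
        u = max(u0 * (1.0 - deficit), 0.0)
        total += u**3                # power scales with the cube of speed
    return total

def optimise(x0, lr=5.0, iters=200, h=1e-5, lo=0.0, hi=100.0):
    """Projected gradient ascent with finite-difference gradients (a
    stand-in for the adjoint gradients used by the real design tools)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = np.zeros_like(x)
        for k in range(len(x)):
            xp, xm = x.copy(), x.copy()
            xp[k] += h
            xm[k] -= h
            g[k] = (array_power(xp) - array_power(xm)) / (2 * h)
        x = np.clip(x + lr * g, lo, hi)   # keep turbines inside the site
    return x

x0 = np.array([40.0, 45.0, 50.0, 55.0])   # tightly packed initial layout
x_opt = optimise(x0)                       # optimiser spreads the turbines out
```

The optimiser increases total power by separating the turbines so that wake deficits decay before reaching the next device, the same qualitative behaviour the adjoint-based tools exploit at much larger scale.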

Supervisors:
Joanna Haigh (Imperial College London, Grantham Institute/Dept. Physics)
Valerio Lucarini (University of Reading, Department of Mathematics and Statistics)
PhD project summary: In his landmark book “The Theory of Heat Radiation”, Planck (1913) showed that associated with any flux of electromagnetic radiation there is a flux of entropy and that, by inverting the black body function to express temperature as a function of intensity, the spectrum of radiant entropy can be deduced from the spectrum of radiant energy.
In a climatic steady state the global annual average solar energy absorbed by the Earth is exactly balanced by an equal emission to space of heat energy. It is well established, however, that the entropy flux associated with the emitted radiation is far larger than that of the absorbed solar radiation, with the balance being created by non-equilibrium processes within the climate system.
Different states of the climate are thus reflected in different fluxes of entropy and in this project we investigate the impact on the entropy budget of factors that produce radiative perturbations to the climate, for example changes to the concentration of atmospheric carbon dioxide or to solar irradiance.
To date there has been very little work published on spectral entropy fluxes, and some of what has been published is misleading or of low accuracy. In this project we will use state-of-the-art computer models of atmospheric radiative transfer to calculate the radiation and entropy spectra for various climate change scenarios and thus deduce the entropy production required of the Earth.
We will complement the radiative point of view on entropy production with that based on the analysis of irreversible processes inside the geophysical fluids.
We shall look at the impact of performing spatial and temporal coarse graining on the climatic fields resulting from general circulation models, as well as on the spectral resolution of the radiative models. This will inform us about the degree of convergence of the simulated properties with resolution.
We expect to use the output of GCMs available through the PCMDI as well as high resolution runs available in Reading. We also expect to use data resulting from radiative transfer models.
An additional approach will be to calculate the entropy fluxes associated with radiation spectra measured from satellites, or calculated within global climate models.
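The radiative side of this calculation can be illustrated in a few lines: given any spectral radiance, Planck's inversion yields the photon occupation number and hence the spectral entropy radiance. The sketch below is illustrative, using a simple blackbody spectrum at the Earth's effective emission temperature rather than a full radiative transfer calculation; it recovers the classical blackbody results, sigma*T^4 for the energy flux and (4/3)*sigma*T^3 for the entropy flux.

```python
import numpy as np

h, kB, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8
sigma = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def planck_radiance(nu, T):
    """Blackbody spectral radiance B_nu [W m^-2 sr^-1 Hz^-1]."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def entropy_radiance(nu, L):
    """Spectral entropy radiance for an arbitrary radiance spectrum L,
    obtained by inverting the Planck function for the photon occupation
    number n and using the Bose-Einstein entropy per mode."""
    n = c**2 * L / (2 * h * nu**3)
    return (2 * kB * nu**2 / c**2) * ((1 + n) * np.log1p(n) - n * np.log(n))

T = 255.0                                # effective emission temperature [K]
nu = np.linspace(1e11, 1e14, 200001)     # frequency grid [Hz]
dnu = nu[1] - nu[0]
L = planck_radiance(nu, T)
s = entropy_radiance(nu, L)

energy_flux = np.pi * np.sum(L) * dnu    # blackbody value: sigma * T^4
entropy_flux = np.pi * np.sum(s) * dnu   # blackbody value: (4/3) * sigma * T^3
```

For a non-blackbody spectrum (e.g. one with greenhouse-gas absorption bands) the same `entropy_radiance` inversion applies line by line, which is exactly what makes the spectral entropy flux computable from modelled or measured radiance spectra.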

Supervisors:
Colin Cotter (Imperial College London, Department of Mathematics)
Ben Shipway (UK Met Office)
Jemma Shipton (Imperial College London, Department of Mathematics)
PhD project summary: In numerical weather forecasts and climate models, the resolved fluid processes are called the 'dynamics', and are separated from the unresolved processes (known as the 'physics'). To reflect the behaviour of the Earth System, these processes communicate with one another through a process called 'coupling'. Currently the UK Met Office is developing a new numerical discretisation for its dynamical core, known as Gung Ho!. This will use a finite element method, which presents a host of issues for the coupling of the new dynamical core with the physics processes. For my project, I will be exploring some of these issues and attempting to find solutions to them.
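One concrete example of such an issue is the transfer of increments between the finite element 'dynamics' fields and the columnar 'physics' fields. The toy one-dimensional sketch below uses hypothetical spaces and a naive remapping, not the actual Gung Ho! design: it shows that a round trip from cell means to nodal values and back does not preserve the cell means, i.e. naive coupling can break conservation.

```python
import numpy as np

n = 8
x_nodes = np.linspace(0.0, 1.0, n + 1)
T_dyn = np.sin(np.pi * x_nodes)              # nodal (P1) 'dynamics' field

# The physics typically wants one value per cell/column: the cell average
# of the piecewise-linear field is the mean of the two cell-end values.
T_phys = 0.5 * (T_dyn[:-1] + T_dyn[1:])      # cell-mean (P0) 'physics' field

# Physics computes an increment (e.g. heating) on cell means ...
dT_phys = 0.1 * T_phys

# ... which a naive coupling maps back to nodes by averaging neighbours
dT_dyn = np.zeros(n + 1)
dT_dyn[1:-1] = 0.5 * (dT_phys[:-1] + dT_phys[1:])
dT_dyn[0], dT_dyn[-1] = dT_phys[0], dT_phys[-1]

# Round trip back to cell means: the naive map does NOT preserve them,
# so this coupling loses conservation unless designed more carefully.
roundtrip = 0.5 * (dT_dyn[:-1] + dT_dyn[1:])
err = np.max(np.abs(roundtrip - dT_phys))
```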

Supervisors:
David Ham (Imperial College London, Department of Mathematics)
Colin Cotter (Imperial College London, Department of Mathematics)
Thomas Melvin (MET Office)
PhD project summary: Recently developed numerical approximations of the equations of motion for the atmosphere, based on the framework of compatible finite element methods, have been shown to maintain the exact force balances essential for generating accurate solutions. These approximations allow efficient use of massively parallel supercomputers, and are therefore being used in the Gung Ho project to build a new dynamical core for the Met Office and the UK climate science and meteorology community.
This project focuses on the design and analysis of new discretization methods within the compatible finite element framework, and their implementation in software through the Firedrake project. Current investigations include, but are not limited to:
• Multigrid algorithms for the implicit Helmholtz-type operator for compressible flows, and their analysis and implementation;
• Design, analysis of stability and accuracy, and implementation of transport schemes for density, temperature and velocity;
• Development of higher order discretizations and their efficient implementation via sum-factorization or Bernstein polynomial techniques;
• High performance Bernstein embedded DG transport schemes;
• Design, analysis and implementation of efficient solvers for the Helmholtz-type operator in the case of spectral element or Bernstein bases;
• High performance implementation of higher order finite element spaces using automated code generation.
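As a minimal illustration of the first item above, the sketch below implements a geometric multigrid V-cycle for a one-dimensional Helmholtz-type operator (I - alpha d^2/dx^2), the kind of implicit operator arising from semi-implicit time stepping. It is a toy stand-in for the compatible-finite-element setting, with illustrative parameter values.

```python
import numpy as np

def residual(u, f, alpha, h):
    """Residual of (I - alpha d2/dx2) u = f on a uniform grid (Dirichlet)."""
    r = f.copy()
    r[1:-1] -= u[1:-1] - alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    return r

def jacobi(u, f, alpha, h, sweeps=3, omega=0.8):
    """Weighted Jacobi smoother (the operator has a constant diagonal)."""
    diag = 1.0 + 2.0 * alpha / h**2
    for _ in range(sweeps):
        r = residual(u, f, alpha, h)
        u[1:-1] += omega * r[1:-1] / diag
    return u

def v_cycle(u, f, alpha, h):
    n = len(u) - 1
    if n <= 4:                                  # coarsest grid: solve directly
        d = 1.0 + 2.0 * alpha / h**2
        o = -alpha / h**2
        A = d * np.eye(n - 1) + o * (np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))
        u[1:-1] = np.linalg.solve(A, f[1:-1])
        return u
    u = jacobi(u, f, alpha, h)                                 # pre-smooth
    r = residual(u, f, alpha, h)
    rc = np.zeros(n // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])  # full weighting
    ec = v_cycle(np.zeros_like(rc), rc, alpha, 2.0 * h)        # coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                                # prolong
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])                         # (linear interp)
    u += e
    return jacobi(u, f, alpha, h)                              # post-smooth

n = 64
h0 = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
alpha = 0.1                              # plays the role of (c * dt)^2
u_exact = np.sin(np.pi * x)
f = (1.0 + alpha * np.pi**2) * u_exact   # so (I - alpha d2/dx2) u = f
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, alpha, h0)
err = np.max(np.abs(u - u_exact))        # left with O(h^2) discretisation error
```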

Supervisors:
Ari Laptev (Imperial College London, Department of Mathematics)
Michael Levitin (University of Reading, Department of Mathematics and Statistics)
In the formulation of the two-dimensional shallow water equations in a rotating frame (RSW), the geometry of the problem changes substantially compared to the same problem posed without rotation. As recognised in the literature, the rotation term breaks the symmetry of the system, so that the resulting equations form a hyperbolic, non-symmetric system. From a spectral point of view, the broken symmetry gives rise to a non-self-adjoint operator for the linearised problem, whose spectrum now contains a genuinely complex component; in the time-dependent setting this generates phenomena such as blow-up or the onset of turbulence. The latter is a feature of many different weather- and climate-related scenarios: turbulence is essential to the study of cyclone pattern formation, as well as in the context of unpredictability and chaotic systems, with particular relevance to numerical simulations. This wide collection of problems, arising in different contexts, can be gathered under the same very challenging mathematical problem, namely the study of resonances for operators, which in the case of complex perturbations turns out to be genuinely related to the eigenvalues of a non-self-adjoint operator. Recently, new results for two-dimensional complex non-self-adjoint Dirac-like operators have been obtained, giving a partial description of the complex part of the spectrum and, specifically, of the arrangement of the complex eigenvalues. This PhD project aims to gain understanding of the solutions of the RSW equations from the study of the linearised system. To do so, I will exploit mathematical techniques that have been used in the spectral analysis of non-self-adjoint operators with features similar to those arising in the RSW framework.
In particular I will be interested in the characterisation of the spectrum and the study of its stability under small perturbations, as well as the formation of resonance effects. On the mathematical side, the latter phenomenon is very challenging and of high interest, as pointed out in KAM theory, since it introduces small divisors in the approximating series for the solution of the linearised system, hugely increasing the difficulty of deriving estimates for the series. Furthermore, on the physical side, the linearised shallow water system has been recognised as a valid approximation for modelling the propagation of a tsunami wave far from the shore, so the study of resonances of the linearised operator is itself an interesting topic, one that can improve our understanding of the behaviour of such waves.
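The broken symmetry can be seen already in a single Fourier mode of the linearised f-plane RSW equations: the rotation entries make the 3x3 evolution matrix non-self-adjoint. For the unsheared f-plane problem sketched below the spectrum is still purely imaginary, reproducing the geostrophic mode and the two Poincaré waves; genuinely complex (unstable) eigenvalues appear only once, for example, mean shear is added. The parameter values are illustrative.

```python
import numpy as np

f0 = 1e-4               # Coriolis parameter [1/s]
g = 9.81                # gravity [m/s^2]
H = 1000.0              # mean layer depth [m]
k = 2.0 * np.pi / 1e6   # wavenumber of a 1000 km wave [1/m]

# Linearised f-plane RSW for one Fourier mode exp(ikx):
#   d/dt (u, v, h) = A (u, v, h); the rotation entries +/- f0 break
#   the symmetry of A, so A is not self-adjoint.
A = np.array([[0.0,          f0,  -1j * g * k],
              [-f0,          0.0, 0.0],
              [-1j * k * H,  0.0, 0.0]], dtype=complex)

eigs = np.linalg.eigvals(A)

# Analytic dispersion relation: lambda = 0 (geostrophic mode) and
# lambda = +/- i * sqrt(f0^2 + g H k^2) (the two Poincare waves)
omega = np.sqrt(f0**2 + g * H * k**2)
```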

Supervisors:
Axel Gandy (Imperial College London, Department of Mathematics)
Ralf Toumi (Imperial College London, Department of Physics)
Catastrophe models are essential tools for insurance and reinsurance companies. These models provide support to (re)insurers calculating an adequate level of capital to cover extreme meteorological or seismic events. Catastrophe models are invaluable tools in actuarial science when conducting risk analysis for insurance purposes and are crucial to managing catastrophe risk exposure. The output from these models is used by insurers and re-insurers to make premium and reserve calculations. The reserves set aside are imperative for the reconstruction of housing and infrastructure damaged and destroyed by such extreme events. Due to the power and sophisticated nature of these models, they are very expensive for insurers to lease. One of the inputs to such models is the historical record of the event in question, for example historical tropical cyclones. However, since we only have one observable Earth, we only have a finite set of reliable observations. This directly influences the level of confidence we have in decisions made based on this data. It is therefore necessary to supplement the historical observations with synthetic data to improve the level of confidence in the output of catastrophe models. There are models in operation that can achieve this task already, namely Global Climate Models (GCMs). However, depending on their temporal and spatial resolution, GCMs are computationally expensive and time consuming to run. There is also no guarantee that a GCM run will produce the desired event.
The primary focus of this project is to develop a stochastic model for the genesis, propagation and termination of tropical cyclones. Stochastic models can be run with basic computing power and can produce significantly more events on a much shorter time scale. Stochastic models are good at replicating summary statistics of historical tropical cyclones [Leahy, 2015]; however, in order to quantify possible future scenarios, the stochastic model will have to depend on physical covariates, for example the sea surface temperature. Reanalysis data on many physical variables are readily available through the ECMWF. One of the aims of this project is to predict the genesis (frequency and location) of tropical cyclones in the future.
To analyse the important factors in the genesis of tropical cyclones, exploratory data analysis and computational statistical methods will be used. Geostatistics and spatial point processes will play an important role in the simulation of synthetic genesis locations. For propagation and termination, research into the appropriate statistical techniques is required.
In past literature, the development of stochastic models for the generation of synthetic tropical cyclones has relied upon the assumption that simulated cyclones will have the same behaviour as historical tropical cyclones [Vickery et al., 2000; Emanuel et al., 2006; Hall and Jewson, 2007]. This bootstrapping-like method is justified if one requires tropical cyclones with the same behaviour; however, if one wishes to determine what will happen in the future, this assumption is clearly inadequate. In a changing climate with several possible future scenarios, one requires a model flexible enough to account for possible future climatic scenarios.
The initial basin of interest is South East Asia and the western North Pacific Ocean. An objective of this project is to develop a model that can be applied in different basins, with the intention of ultimately developing a global model.
For model validation, we will compare the model against the state of the art and against observational data.
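A minimal sketch of the genesis-propagation-termination structure is given below: Poisson genesis counts, Gaussian genesis locations, an AR(1) steering process, and a crude poleward-decay termination rule. All distributions and parameter values are illustrative placeholders, not the fitted, covariate-dependent models the project will develop.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_season(rate=25.0):
    """One synthetic season: Poisson genesis count, Gaussian genesis
    locations, AR(1) steering for propagation, and termination once the
    intensity drops below tropical-storm strength. All parameter values
    are illustrative, not fitted to observations."""
    drift = np.array([-1.0, 0.5])             # mean WNW steering [deg / 6 h]
    storms = []
    for _ in range(rng.poisson(rate)):        # genesis count
        lon, lat = rng.normal(140.0, 10.0), rng.normal(15.0, 5.0)
        v = rng.uniform(20.0, 40.0)           # initial intensity [m/s]
        step = drift.copy()
        track = [(lon, lat, v)]
        while v > 17.0 and len(track) < 60:   # dissipate below ~17 m/s
            step = 0.7 * step + 0.3 * drift + rng.normal(0.0, 0.3, 2)
            lon, lat = lon + step[0], lat + step[1]
            # crude decay once the storm moves poleward of 25N
            v += rng.normal(0.0, 2.0) - 0.05 * max(lat - 25.0, 0.0) * v
            track.append((lon, lat, v))
        storms.append(track)
    return storms

season = simulate_season()
```

In the project itself, the genesis step would be replaced by a spatial point process whose intensity depends on covariates such as sea surface temperature, which is what allows the model to extrapolate to future climate scenarios.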

Supervisor:
Tristan Pryer (University of Reading, Department of Mathematics & Statistics)
The goal of this project is to design numerical schemes that preserve certain quantities of interest. A natural question to ask is how many such quantities can be embedded in the scheme at once, and what effect this has on the accuracy of the dynamics of the problem.
Three main difficulties arise in the design of such schemes and will be tackled in this project:
1. Design of appropriate timestepping schemes.
2. Design of appropriate spatial discretisations on planar domains.
3. Extension of these schemes to physically relevant models.
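A classical example under point 1 is the implicit midpoint rule, which preserves all quadratic invariants exactly. The sketch below is a standard textbook illustration rather than anything specific to this project: on the harmonic oscillator, the midpoint rule's energy drift stays at round-off level while explicit Euler's energy grows without bound.

```python
import numpy as np

def implicit_midpoint(z0, steps, dt):
    """Implicit midpoint rule for the linear oscillator z' = J z. For a
    linear system the midpoint stage is solved exactly by a Cayley map,
    which is orthogonal for skew-symmetric J: the quadratic energy
    |z|^2 / 2 is conserved to round-off."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    I = np.eye(2)
    step = np.linalg.solve(I - 0.5 * dt * J, I + 0.5 * dt * J)
    traj = [np.array(z0, dtype=float)]
    for _ in range(steps):
        traj.append(step @ traj[-1])
    return np.array(traj)

def explicit_euler(z0, steps, dt):
    """Explicit Euler: each step multiplies the energy by (1 + dt^2)."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    z = np.array(z0, dtype=float)
    traj = [z.copy()]
    for _ in range(steps):
        z = z + dt * (J @ z)
        traj.append(z.copy())
    return np.array(traj)

dt, steps = 0.1, 1000
energy = lambda tr: 0.5 * np.sum(tr**2, axis=1)
mid = implicit_midpoint([1.0, 0.0], steps, dt)
eul = explicit_euler([1.0, 0.0], steps, dt)
drift_mid = abs(energy(mid)[-1] - energy(mid)[0])   # ~ round-off
drift_eul = abs(energy(eul)[-1] - energy(eul)[0])   # grows exponentially
```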

Supervisors:
Dan Crisan (Imperial College London, Department of Mathematics),
Mirabelle Muûls (Imperial College London, Business School and Grantham Institute),
Jean-Francois Chassagneux (Université Paris Diderot)
As a response to the risk of Climate Change, carbon markets are currently being implemented in several regions worldwide. Since 2005, the European Union (EU) has had its own Emissions Trading System (ETS), which today is the largest such market. In September 2015, it was announced that China (whose carbon emissions make up approximately one quarter of the global total) will introduce a national emissions trading scheme in 2017. When it comes into effect, China’s market will be the largest of its kind, overtaking the EU ETS. At that point, some 40% of global emissions will be covered by cap-and-trade schemes. According to the World Bank, the world’s emissions trading schemes are currently valued at about $30 billion. However, scientific, and particularly mathematical, studies of these carbon markets are needed in order to expose their advantages and shortcomings, as well as allow their most efficient implementation.
In this project, we will consider a mathematical model for the pricing of emissions permits. The model has particular applicability to the trading of European Union Allowances (EUAs) in the EU ETS but could also be used to model other cap-and-trade schemes. We will investigate mathematical properties such as the existence and uniqueness of solutions for the pricing problem, stability of solutions (e.g. the sensitivity of prices under small perturbations to the market) and their behaviour, as well as computational algorithms for solving pricing problems. The model can be extended to account for multiple trading periods.
The pricing problem that arises from the model fits into the theory of fully coupled forward-backward stochastic differential equations (FBSDEs). The pricing FBSDE is non-standard in several aspects. Firstly, the terminal condition of the backward equation is given by a discontinuous function of the terminal value of the state driven by the forward equation. This is in contrast to the majority of FBSDE literature in which the terminal condition is assumed to be Lipschitz continuous, but appears naturally when considering a market for emissions allowances. Secondly, the forward dynamics may not be strongly elliptic, not even in the neighbourhood of the singularities of the terminal condition.
The project will involve three work packages. Firstly, the project will consider a qualitative analysis of the class of FBSDEs introduced above. As demonstrated in the literature already, this may require one to relax the notion of solution to a FBSDE. Secondly, numerical schemes applicable to the pricing problem will be investigated. The pricing problem is high-dimensional (4-8 dimensions) and a robust numerical scheme will be required to produce satisfactory numerical results, particularly if the multi-period model is to be considered. Thirdly, a case study of the UK energy market will be considered. This will involve calibrating the model’s parameters and processes to real data.
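The discontinuous terminal condition can be illustrated in a deliberately simplified setting: if the forward emissions process is decoupled from the price (dropping the full FBSDE coupling) and follows a geometric Brownian motion, the permit price is just the discounted expectation of a penalty paid when cumulative emissions exceed the cap. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def permit_price_mc(e0, cap, penalty=100.0, r=0.05, T=1.0,
                    mu=0.02, sigma=0.2, n_paths=200000):
    """Monte Carlo price of a one-period permit whose terminal value is
    the penalty if cumulative emissions exceed the cap, else zero. The
    indicator payoff is the discontinuous terminal condition of the
    pricing FBSDE, decoupled from the forward dynamics for illustration."""
    z = rng.standard_normal(n_paths)
    eT = e0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = penalty * (eT > cap)
    return np.exp(-r * T) * payoff.mean()

p_low = permit_price_mc(e0=0.8, cap=1.0)   # emissions well below the cap
p_high = permit_price_mc(e0=1.2, cap=1.0)  # emissions already above the cap
```

In the fully coupled problem the permit price itself feeds back into abatement decisions and hence into the emissions dynamics, which is precisely what turns this expectation into a forward-backward system.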
With action on climate change being strongly needed, and the implementation of carbon markets a promising policy option, this research will not only innovate on the theoretical side, but also has a high policy and economic impact potential.
Click on the link below to view Hinesh delivering his pitch at Imperial’s 2016 3MT competition:
https://www.youtube.com/watch?v=0HHWqI_rQ3I

Supervisors:
Thomas Frame (University of Reading, Department of Meteorology)
John Methven (University of Reading, Department of Meteorology)
Jochen Broecker (University of Reading, Department of Mathematics & Statistics)
Nigel Roberts (UK Met Office)
Probabilistic forecasting is common in many fields such as financial trading, bookmaking and Numerical Weather Prediction (NWP). Amongst these applications NWP is unique, as it is the only one in which the future events in question are governed by physical laws. Probabilistic forecasting is necessary in NWP because: i) the physical laws are chaotic, with the result that initial uncertainties grow rapidly with time, and ii) the physical laws are approximated using numerical methods in computer models. Sensitivity to initial conditions means that many different outcomes are possible given our current state of knowledge, and ensembles of forecasts are used to capture the range of different outcomes. Truncation of spatial and temporal scales in numerical models, essential for implementation, introduces uncertainty associated with “sub-grid scale” motions. Very recently, probabilistic forecasts generated using numerical models at high (1-2 km) resolution have started to be produced. Such ensembles can only feasibly be run on limited-area domains, using a coarser-resolution global model to provide the necessary lateral boundary conditions. In this set-up, information about scales resolved by the high-resolution limited-area model but below the truncation of the coarser-resolution global model is constantly being generated by the high-resolution model dynamics as the lateral boundary conditions update. For this information to be time specific and linked to the observed initial state of the atmosphere, dynamical processes at the fine scale must be strongly controlled by the larger, observationally constrained scales. This project aims to identify the predominant dynamical causes of such information transfer and to quantify the resultant information gain obtained from running high-resolution ensemble forecasts.
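One simple way to quantify such information gain is the relative entropy (Kullback-Leibler divergence) between the distribution implied by the high-resolution ensemble and that of the coarser one. The sketch below uses Gaussian fits to two toy ensembles; the ensembles and their parameters are illustrative, not drawn from real forecast systems.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kl_bits(mu_p, var_p, mu_q, var_q):
    """KL(P || Q) for one-dimensional Gaussians, in bits: the information
    gained by replacing the broad forecast Q with the sharper forecast P."""
    nats = 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q)**2) / var_q - 1.0)
    return nats / np.log(2.0)

# Toy ensembles: a coarse global ensemble (broad spread) and a nested
# high-resolution ensemble (sharper, slightly shifted)
coarse = rng.normal(0.0, 2.0, size=50)
fine = rng.normal(0.5, 0.8, size=50)

gain_bits = gaussian_kl_bits(fine.mean(), fine.var(ddof=1),
                             coarse.mean(), coarse.var(ddof=1))
```

Applied to real forecast pairs, the same measure (or a non-parametric analogue) would quantify how much time-specific information the fine scales actually carry beyond what the global model already provides.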

Supervisor:
Ben Calderhead (Imperial College London, Department of Mathematics)
The evidence base for anthropogenic climate change is now unequivocal, and efforts in a multitude of fields are needed to address this urgent challenge. Researchers investigating technologies such as CO2 capture and sequestration, wind power farms or biofuel produced from algae all have to deal with the problem of uncertainty quantification which is inherent to applied sciences.
The large-scale, near-term deployment of CO2 capture and sequestration (CCS) technology is crucial when the production of CO2 cannot be avoided. One of the key challenges associated with this technology is its cost – this will need to be appreciably reduced in order to make it more attractive to the traditionally low-margin power generation industry. In the area of process engineering, reliable data describing the thermophysical properties of materials are of paramount importance in every aspect of process design, operation and control, both in academia and industry. To this end, there is vast activity regarding the measurement, modelling and correlation of these data. There are a number of different measurement techniques in common use, which results in these important data having significant uncertainty.
Wind power farms are a well-established source of green energy, but remain a thriving field of research when it comes to efficient design and installation. To determine possible locations for wind farms, as well as for the installation itself, borehole measurements are used to investigate the soil on which the turbines are built. Conclusions regarding the condition of the soil in between these boreholes come with significant uncertainty.
The aim of this project is to extract useful information from these noisy or intermittent data sets that can be used to dramatically improve efficiency and reduce the overall cost of deployment of these technologies.
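A standard tool for extracting information from noisy, spatially sparse measurements of this kind is Gaussian process regression, which returns both an interpolant and its uncertainty. The sketch below applies it to hypothetical one-dimensional 'borehole' data; the kernel, hyperparameters and data are illustrative assumptions.

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, ell=1.0, sf=1.0, noise=0.1):
    """Gaussian process regression with a squared-exponential kernel:
    posterior mean and variance at test points, given noisy observations."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    L = np.linalg.cholesky(K)                       # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(k(x_test, x_test)) - np.sum(v**2, axis=0)
    return mean, var

rng = np.random.default_rng(3)
x_obs = np.array([0.0, 1.0, 2.5, 4.0, 6.0])         # 'borehole' positions
y_obs = np.sin(x_obs) + 0.1 * rng.standard_normal(len(x_obs))
x_new = np.linspace(0.0, 6.0, 61)
mean, var = gp_posterior(x_obs, y_obs, x_new)       # var grows away from data
```

The posterior variance is small near the boreholes and grows in the gaps between them, which is exactly the uncertainty map a site assessment would need.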

Supervisors:
Nikolas Kantas (Imperial College London, Department of Mathematics),
Helen Brindley (Imperial College London, Department of Physics),
Dan Crisan (Imperial College London, Department of Mathematics)
Modelling climate dynamics such as ocean circulation or wind turbulence poses very difficult research questions with large economic and societal impacts. Quantifying uncertainty in climate prediction and estimating the potential strength of extreme meteorological events is of great importance, all the more so in the face of global warming. The objective is to provide estimates of spatially and temporally varying phenomena based on a sequence of observations. This task is commonly referred to as Data Assimilation and is of great importance for numerical weather prediction. So far most methods used for this task are based on heuristics, and although they can be tuned to provide fairly accurate estimates, it has not been possible to provide any characterisation of the underlying uncertainty of these estimates. Within the scientific communities working in Probability and Statistics, problems like Data Assimilation are usually formulated as high-dimensional non-linear filtering problems. Thereby, one aims to compute the probability distribution of a latent Markov process conditional on the collected observations. These so-called filtering distributions can provide detailed information about the uncertainty of estimates of the latent process, but are typically intractable, and one needs to rely on numerical approximations.
The current state of the art in numerical methods is particle filtering, also known as Sequential Monte Carlo: principled, simulation-based methods built on successive selection and sampling steps. They rest on solid theoretical foundations, are provably consistent, and can achieve an arbitrary degree of accuracy given enough computational effort. Particle filtering has been extremely successful in many application domains in engineering, economics, systems biology etc., but its success for problems related to Data Assimilation has been rather limited due to the curse of dimensionality. As a result, standard particle methods require an enormous or unrealistic amount of computation to achieve reasonable results. This project aims to assess the potential applicability of particle approximations to challenging problems in Data Assimilation. Early results for filtering problems with the Navier-Stokes equations indicate this is possible when using more sophisticated versions of particle filters that include tempering or carefully crafted Markov Chain Monte Carlo steps.
The aim is to develop advanced particle filters which also make efficient use of information from the observations and the model structure. In addition, emphasis will be given to highly parallelised implementations on appropriate hardware platforms, such as Graphics Processor Units (GPUs) and computing clusters, which could provide massive computational speed-ups. For the modelling, there are many choices available, and one could include various spatially and temporally varying processes from Geophysical Fluid Dynamics. The data sets containing observations of these phenomena can either be synthesised from more sophisticated models or consist of real observations provided by weather forecasting centres.
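The bootstrap particle filter at the core of these methods fits in a few lines. The sketch below applies it to a linear-Gaussian state-space model, chosen purely because the exact (Kalman) answer is then known; it omits the tempering and MCMC refinements mentioned above, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_filter(ys, n_particles=2000, phi=0.9, sq=0.5, sr=0.5):
    """Bootstrap particle filter for the linear-Gaussian model
    x_t = phi x_{t-1} + N(0, sq^2),  y_t = x_t + N(0, sr^2)."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        x = phi * x + sq * rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((y - x) / sr) ** 2                      # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                            # estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]       # resample
    return np.array(means)

# Simulate synthetic data, then filter it
T, phi, sq, sr = 50, 0.9, 0.5, 0.5
x_true = np.zeros(T)
y = np.zeros(T)
xt = 0.0
for t in range(T):
    xt = phi * xt + sq * rng.standard_normal()
    x_true[t] = xt
    y[t] = xt + sr * rng.standard_normal()

est = bootstrap_filter(y)
rmse = np.sqrt(np.mean((est - x_true) ** 2))   # beats the raw observations
```

In high-dimensional Data Assimilation problems the importance weights of this plain scheme degenerate rapidly, which is precisely the curse of dimensionality that the tempering and MCMC-move variants studied in this project are designed to mitigate.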
The methodology developed in the project will be applied to tracking dust storms in the Saharan desert. Saharan dust storms have a strong impact on the Earth's energy budget. Infrared imagery can be used to highlight their presence and track their movement through space and time. However, the methods used thus far are either subjective, relying on an expert observer, or rather simplistic, using fixed thresholds and taking no account of the temporal element of storm behaviour. In this project the aim will be to develop an analytical approach to identifying and tracking dust storms as identified by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI). If successful it is envisaged that the method will be used to pinpoint dust source locations and their frequency of activation.

Supervisors:
Tobias Kuna (University of Reading, Department of Mathematics and Statistics)
Valerie Livina (NPL)
Understanding the components of a dynamical system, such as fluctuations and natural/anthropogenic changes in the Earth system, is a challenging scientific task. The scale of geophysical monitoring required, the changes due to the development of instrumentation, and the most appropriate analysis methods are all currently under debate. The challenges in studying such time series include being able to handle a large dynamic range, non-Gaussian distributions at various frequency bands, the presence of high-amplitude transients, and the processing of large data sets. The uncertainties associated with the trend estimates must account for the nature of the fluctuations as well as the choice of variables, the model, or metrics. In order to understand natural- and anthropogenic-driven climate change, we plan to study Essential Climate Variables (ECVs), in particular land surface humidity and sea-surface temperature (SST). The goals of this project are to: 1) develop a methodology for studying gradual and abrupt changes in geophysical data, by testing and refining previously developed techniques; 2) compare these techniques with classical parametric inference approaches for stochastic processes and evaluate the model error by considering higher-dimensional models; and 3) determine whether such abrupt changes in the variables under consideration are underway.
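As an example of the kind of technique to be tested, the sketch below computes a standard early-warning indicator of an approaching abrupt change: the lag-1 autocorrelation in a sliding window, which rises as a system loses stability ('critical slowing down'). The AR(1) test signal and all its parameters are synthetic illustrations, not ECV data.

```python
import numpy as np

rng = np.random.default_rng(11)

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a standard 'critical slowing down' indicator."""
    x = x - x.mean()
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

# Synthetic AR(1) signal whose restoring force slowly weakens: as the
# coefficient a -> 1 the system approaches a loss of stability
n = 4000
a = np.linspace(0.3, 0.97, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a[t] * x[t - 1] + rng.standard_normal()

# Indicator evaluated on early and late sliding windows: it trends upward
window = 500
early = lag1_autocorr(x[:window])
late = lag1_autocorr(x[-window:])
```

The project would test such indicators, and compare them with parametric inference for an assumed stochastic model, on real ECV records where the underlying dynamics are unknown.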

Supervisors:
Ben Calderhead (Imperial College London, Department of Mathematics)
Matthew Piggott (Imperial College London, Department of Earth Science & Engineering)
David Ham (Imperial College London, Department of Mathematics)
Simon Funke (Simula Research Laboratory, Oslo)
The regularity of the tides makes them an attractive prospect for clean and secure renewable power production. At multiple locations around the UK the local acceleration of tidal currents makes them potentially suitable sites for the installation of large arrays of tidal turbines. However, these arrays will only be developed if they can be shown to be economically viable. Optimisation of the array size, location and its precise configuration is essential to achieve this, and sophisticated numerical models are required to assist developers. Work in this area to date has focussed primarily on the solution of the shallow water equations, and relatively simple optimisation algorithms. This PhD project will seek to extend the state-of-the-art in optimal turbine array design by using advanced gradient-based optimisation methods, with key novelties coming from the presence of three-dimensional (turbulent) dynamics, the rigorous treatment of system uncertainties, and subgrid-scale turbulent closures.
Three-dimensional effects will require the use of scalable, efficient solution procedures for the Navier-Stokes equations; here these will be based upon code generation techniques developed at Imperial, which are designed to take advantage of the high aspect ratio of our target application to deliver highly efficient discretisation techniques. Part of the project will involve investigation of parameterisation methods to represent tidal turbines (e.g. as momentum sinks and turbulence sources) within the model. Further to this, both RANS and LES based turbulence closures can and will be considered. Given the scales of the problem, and the fact that an inverse energy cascade applies to the depth-averaged dynamics, the use of LES techniques in the horizontal supplemented with RANS models for fully-three-dimensional turbulent dynamics (e.g. providing back-scattering effects) is an attractive and viable large-scale approach that could be considered. The adjoint capability which is key to the optimisation component of this work can also be used to help calibrate these turbulence models to best represent turbine wakes, and thus accurately represent the interactions between turbines within an array. It is clearly crucial to get this right when attempting to optimise the relative locations of turbines.
Uncertainties in the system include imperfect knowledge of the bathymetry, the impact of atmospheric conditions on tidal currents, background turbulence characteristics in the water column, and potentially, over longer time scales, the impact of turbine degradation on array performance. A range of mathematical techniques to deal with these uncertainties will be considered, including Monte Carlo based or sampling methods, adjoint sensitivity based methods, and both non-intrusive and intrusive versions of polynomial chaos expansions. The latter requires the construction of new solvers for different equation sets governing the coefficients of the polynomial chaos expansion. Historically the time taken to develop new numerical models has been an impediment to this approach. However, here the use of code generation techniques to rapidly develop new models will be considered.
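As a toy contrast between sampling and non-intrusive spectral approaches, the sketch below propagates a Gaussian uncertainty in inflow speed through an illustrative turbine power curve, comparing plain Monte Carlo with a non-intrusive polynomial chaos (Gauss-Hermite quadrature) estimate of the mean power. The power curve and all parameter values are assumptions for illustration only.

```python
import numpy as np

def power(u):
    """Illustrative turbine power curve: cubic in speed, capped at rated."""
    return np.minimum(0.5 * 1025.0 * 100.0 * u**3, 2.0e6)   # [W]

# Uncertain inflow speed u ~ N(mu, sd^2), e.g. from imperfect bathymetry
mu, sd = 2.5, 0.3

# Non-intrusive polynomial chaos: Gauss-Hermite quadrature in the standard
# normal germ (probabilists' convention, weight exp(-z^2 / 2))
nodes, weights = np.polynomial.hermite_e.hermegauss(12)
weights = weights / weights.sum()        # normalise to a probability measure
pc_mean = np.sum(weights * power(mu + sd * nodes))

# Plain Monte Carlo reference
rng = np.random.default_rng(5)
mc_mean = power(mu + sd * rng.standard_normal(400000)).mean()
```

The quadrature estimate uses 12 model evaluations against 400,000 samples, which is the basic appeal of non-intrusive spectral methods when each evaluation is an expensive flow solve; the intrusive variant instead derives new governing equations for the expansion coefficients, hence the need for the code generation techniques mentioned above.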
The emphasis of this project will lie on the treatment of uncertainty.
Click on the link below to view Tobias delivering his pitch at Imperial’s 2016 3MT competition:
https://www.youtube.com/watch?v=V2xskpw7iDg