[0:08] I’m Friederike Otto, and I’m currently the Deputy Director of the Environmental Change Institute here at Oxford University. The Environmental Change Institute works on understanding environmental change, and on finding solutions to the problems arising from it. I’m a physicist by training: I did my master’s in physics, and then a PhD in philosophy. Now I’m essentially a climate scientist working mainly on extreme weather events. That is also why we need very large data sets: extreme weather events are rare by definition, so we need big data to be able to analyse the statistics of rare events. To really explore what the extreme events are, and what the possible extreme events could be, we need climate models.
[0:59] We only see one type of weather every day, so we basically see just one realisation of what is possible under the current conditions. But we want to know: what does a one-in-a-hundred-year event look like? What could a one-in-a-thousand-year event be? To do that, we need to run our climate models thousands of times. If you wanted to do that on a supercomputer, it would be prohibitively expensive; it would not be possible. But with climateprediction.net, we don’t run the climate model on a supercomputer; we run it on volunteers’ private computers.
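To make the "thousands of runs" point concrete, here is a minimal sketch of how a large ensemble turns rare-event counts into return-period estimates. The rainfall distribution and the 160 mm threshold are invented for illustration; in the real project each "run" is a full climate model integration on a volunteer's computer, not a one-line random draw.

```python
import random

random.seed(42)

# Hypothetical stand-in for one climate model run: each "simulation"
# returns a peak seasonal rainfall value (mm). Distribution is invented:
# a Gaussian bulk with an occasional heavy-tailed excursion.
def simulate_peak_rainfall():
    value = random.gauss(100, 20)
    if random.random() < 0.05:
        value += random.expovariate(0.2)  # rare heavy-tail contribution
    return value

# The observed weather gives us only one realisation per year; a large
# ensemble lets us estimate the probability of rare extremes directly.
n_runs = 10_000
ensemble = [simulate_peak_rainfall() for _ in range(n_runs)]

threshold = 160  # mm, hypothetical "extreme" rainfall
exceedances = sum(1 for x in ensemble if x > threshold)
prob = exceedances / n_runs
print(f"P(rainfall > {threshold} mm) ≈ {prob:.4f}")
if prob > 0:
    # A yearly exceedance probability p corresponds to a 1/p-year event.
    print(f"Estimated return period ≈ {1 / prob:.0f} years")
```

With only a handful of runs the exceedance count would usually be zero, which is exactly why thousands of ensemble members are needed to say anything about one-in-a-hundred-year or one-in-a-thousand-year events.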
[1:34] And we have 30,000 nodes on our global network, which together act as a supercomputer and allow us to run large ensembles of climate and weather simulations very quickly. The way science works, or the way we work, is that we usually start with a science question we want to address. For example: there were massive floods and record-breaking rainfall in the southern UK in 2014, so we wanted to know what the likelihood of this kind of rainfall is, and whether it has changed due to anthropogenic climate change. So we had to design an experiment. Once the experimental design is done, we test it, and when it runs, we send it out to our volunteers.
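The attribution question described here, whether the likelihood of an event has changed due to anthropogenic climate change, is typically answered by comparing two ensembles: one representing the climate as it is, and one representing a counterfactual world without human emissions. A minimal sketch, with invented distributions standing in for the two sets of model runs:

```python
import random

random.seed(1)

# Illustrative only: Gaussian rainfall distributions stand in for the
# "factual" (actual climate) and "counterfactual" (no anthropogenic
# forcing) model ensembles. The 5 mm shift and 150 mm threshold are
# assumptions made up for this sketch.
def run_ensemble(mean_shift, n=5_000):
    return [random.gauss(100 + mean_shift, 20) for _ in range(n)]

factual = run_ensemble(mean_shift=5)         # world with climate change (assumed shift)
counterfactual = run_ensemble(mean_shift=0)  # world without it (assumed baseline)

threshold = 150  # mm, hypothetical record-breaking rainfall

def prob_exceed(ensemble, thr):
    return sum(x > thr for x in ensemble) / len(ensemble)

p1 = prob_exceed(factual, threshold)
p0 = prob_exceed(counterfactual, threshold)

# The risk ratio p1/p0 says how much more likely the event has become.
# In a finite ensemble p0 can be zero, hence the guard.
if p0 > 0:
    print(f"Risk ratio ≈ {p1 / p0:.2f}")
else:
    print(f"Event never seen in counterfactual ensemble (p1 ≈ {p1:.4f})")
```

Because both probabilities are small, each ensemble needs thousands of members before the ratio is meaningful, which is the experimental design that gets sent out to the volunteers.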
[2:22] When we run an experiment like this, we get 1.5 terabytes of data back per day. Thanks to our volunteers, we have essentially unlimited computing power to run the simulations. But then we have to store the data somewhere. For the experiment analysing the floods in the southern UK, we had 50 terabytes of data that we analysed and now need to keep, because we have written scientific papers with it, and other people need to be able to go back to the data and redo our experiments and our analysis. That’s where big data sometimes becomes a big problem. Currently, we are running one of the Met Office models.
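The two figures quoted, 1.5 terabytes returned per day and 50 terabytes archived for the southern-UK floods study, imply a simple back-of-envelope calculation:

```python
# Back-of-envelope storage arithmetic using the figures from the video.
tb_per_day = 1.5          # data returned by volunteers per day
experiment_total_tb = 50  # total archived for the southern-UK floods study

days = experiment_total_tb / tb_per_day
print(f"{experiment_total_tb} TB at {tb_per_day} TB/day ≈ {days:.0f} days of returns")
```

Roughly a month of volunteer returns per experiment, all of which must be kept so that the published analyses remain reproducible.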
[3:05] It is a fairly old model, developed in the 1990s, but we have improved the physics and the resolution so that it is comparable to state-of-the-art climate models in terms of how it simulates the climate and the weather. But it would be very nice to be able to run a different climate model, developed by a completely different team using different assumptions about all the things we cannot resolve in a climate model, so that we have an even better ability to explore the space of possible weather.
[3:39] I think, in terms of the science as a whole, and what my team and I are particularly interested in, it is going from the purely meteorological event, which is what we currently study, to the real impacts. So not just looking at rainfall, but at how this rainfall actually translates into flooding. Then questions such as the catchment size, and how the catchment is managed, play a much more important role than, perhaps, the role of anthropogenic climate change. So really, the question is how the meteorological event, and the role of climate change in it, translate into something on the ground that actually affects people.
[4:18] We have a small group of very active volunteers who are connected to our development site, where we test new experiments and new model setups. They tell us, for example, that a run lasted four or five minutes and then crashed, or that it completed a whole year and then crashed, and that helps us find out what went wrong and why. So there is a real element of citizen science in helping us diagnose errors. But the vast majority of our volunteers simply donate their computing time and let the model run while they are not doing much with their computer.
Get involved with climate science
Computer models can simulate the climate, producing predictions about temperature, rainfall and the probability of extreme weather events.
The more models that are run, the more evidence we gather on climate change, but, as Jon mentioned in Week 1, this requires a huge amount of costly computational power. Watch Dr Friederike Otto, Deputy Director of the Environmental Change Institute, outline a solution to this challenge: climateprediction.net, a volunteer-computing climate-modelling project. climateprediction.net runs climate models on people’s home computers to help answer questions about how climate change is affecting our world, now and in the future.
In the video, Fredi notes that, thanks to the volunteers, there is plenty of computing power available to run the climate simulations, but the output data also need to be stored. Fredi highlights the need to keep these data because they have been used to produce scientific papers and are therefore part of the scientific record. This need to keep data was also mentioned by Victoria Bennett in Steps 1.8 and 1.9, who made the same point about CEDA data not being deleted.
If you’re interested in the scientific community’s ability to analyse, predict and communicate the possible influence of climate change on extreme weather events, take a look at World Weather Attribution, a project from the Environmental Change Institute.
© University of Reading and Institute for Environmental Analytics