This content is taken from Coventry University's online course, Researching Risk, Disasters and Emergencies.

[0:12] Hi, my name is Matthew Blackett and I am a Reader in Geography and Natural Hazards here at Coventry University. My background is mainly in volcanoes, but one research idea I had, after reading a lot of the papers on geology and its associated hazards, was that people were claiming they could predict earthquakes. A lot of work had been done, and a lot of investment had been made, but nobody could categorically prove that they were able to predict earthquakes.

[0:39] So one of my ideas was to go out there, look at the data people had used, and undertake a much more robust analysis of it, using complete datasets rather than the shorter datasets people had previously been using. Sticking with the satellite theme, which is my area of interest, I was able to download about 10 years' worth of land surface temperature data for different locations around the world. And it's land surface temperature that people had claimed increased prior to an earthquake. So my aim was to have a full, comprehensive set of data and to evaluate the claims that people had made.

[1:16] So when you're looking at quantitative data (numbers, basically) in research, it really is important to obtain your data from a reputable source. My data came largely from NASA, which you would argue is quite a reputable source. And this matters because, if you don't source your data from a reputable source, all of the subsequent processing and analysis is worth little: what you start with perhaps isn't very good quality, and that low quality will feed through into your research. Another thing researchers really need to understand and be aware of is quantities of data.

[1:57] If you have 10 years' worth of data, and it takes you longer to process than seven, it's probably worth doing, because the more data you have, the more credible your results are going to be. In the past, lots of people had looked at this land surface temperature data with the aim of determining whether anomalies could be found in it prior to an earthquake, and a lot of these methods were rather unsatisfactory. They used satellite imagery that still had cloud in it, for example, and clouds are quite cold compared to the temperature of the land surface.

[2:33] So the new techniques that I applied actually removed the cloud cover, so that we had an accurate representation of the land surface temperature, as opposed to the contamination from the cloud cover. And I think that sort of thing is pretty important in any sort of research: you have to understand the data that you have, understand that it's not perfect, and try to remove as many of those imperfections from the data as you can, to ensure that what you're left with is accurate data, as opposed to data that might well lead to spurious findings later on.

[3:06] So as a result of my findings, I was able to show that lots of the claims people had put forward in the past, that you could predict earthquakes by looking at land surface temperature, do not rest on a reliable technique. They'd used too few years of data, or they hadn't been comprehensive enough in the analysis they'd undertaken.

[3:29] But using my 10 years' worth of data, I was able to say: "Well, perhaps there is a heating up of the ground surface before an earthquake, but the land surface heats up equally when there's not an earthquake, so really we can't use that as a basis for earthquake prediction." Hopefully, as a result of this, funding can now be targeted at what I would call more credible geological exploration methods, methods that perhaps will lead to the prediction of earthquakes, as opposed to these less credible, less justifiable techniques.

Case study one: quantitative research

Watch the video in which Dr Matthew Blackett describes his approach to research.

Matthew's work is focused in the area of physical geography and natural hazards. Specifically, he researches approaches to monitoring geohazards such as volcanoes and earthquakes, using remote sensing technologies and making use of NASA and ESA satellites.

He applies statistical techniques to land surface temperature (LST) data to find anomalies that appear prior to large earthquakes, in an effort to develop predictive methods from such approaches.

In his research he examined the LST of a region centred on the Gujarat earthquake of 2001. Searching for anomalies first requires calculating a baseline against which any potential anomalies can be compared. This is not a simple matter, and he chose three different approaches:

  • LST difference over two years — this approach creates a spatially averaged mean of the chosen region over a number of weeks and compares it with the equivalent weeks in a different year.
  • Extended LST difference — the same approach, but with the baseline average taken from multiple years.
  • Robust Satellite Technique (RST) — this more complex technique compares each pixel in a region with both its temporal mean over a number of years and the spatial mean for that region (Filizzola et al., 2004).
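The differences between these baselines can be made concrete in code. The following is a minimal sketch on synthetic data, not Matthew's actual pipeline: the array shapes, the chosen year and week, and the simplified form of the RST index are all assumptions made for illustration.

```python
import numpy as np

# Synthetic LST stack: (year, week, row, col), values in kelvin.
# Real analyses would use satellite LST products; this is illustrative only.
rng = np.random.default_rng(1)
lst = 290.0 + rng.normal(0.0, 2.0, size=(10, 52, 20, 20))

target_year, target_week = 9, 30
current = lst[target_year, target_week]  # scene under test

# 1. LST difference over two years: the scene's spatial mean minus the
#    spatial mean of the equivalent week in a single other year.
diff_two_years = current.mean() - lst[8, target_week].mean()

# 2. Extended LST difference: the same comparison, but the baseline is
#    averaged over the equivalent week in several earlier years.
diff_extended = current.mean() - lst[:9, target_week].mean()

# 3. Simplified RST-style index: each pixel's excess over the scene's
#    spatial mean is compared with that pixel's multi-year mean excess,
#    normalised by its standard deviation (after Filizzola et al., 2004).
excess = lst[:, target_week] - lst[:, target_week].mean(axis=(1, 2), keepdims=True)
mu = excess[:9].mean(axis=0)      # per-pixel baseline
sigma = excess[:9].std(axis=0)
rst_index = (excess[target_year] - mu) / sigma  # large |value| = anomalous
```

Because the first two methods reduce each scene to a single spatial average, they can miss localised anomalies that the per-pixel RST index retains; conversely, a per-pixel index is more sensitive to data gaps and cloud contamination.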

Using these three methods he obtained potentially different outcomes. The first two indicated that variations were within the range of normal variability. The third method, however, did appear to show significant variation. Further investigation attributed these anomalies to gaps within the LST dataset, cloud cover, and mosaic effects from neighbouring pixels.
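The cloud-contamination problem can also be sketched briefly. This toy example assumes a uniform 300 K land surface and a random boolean cloud mask; in practice the mask would come from the satellite product's quality flags, and this is not the actual processing chain used in the research.

```python
import numpy as np

# Toy stack of 10 observations of a 4x4 region at a true land surface
# temperature of 300 K; the cloud mask here is random, purely for illustration.
rng = np.random.default_rng(0)
cloudy = rng.random(size=(10, 4, 4)) < 0.2
lst = np.where(cloudy, 260.0, 300.0)  # cloudy pixels read ~40 K colder

# Replace cloud-contaminated readings with NaN so they are excluded
# from any subsequent averaging.
lst_clean = np.where(cloudy, np.nan, lst)

# Per-pixel means: the raw mean is dragged down by cold cloudy readings,
# while the cloud-free mean recovers the true surface temperature.
mean_raw = lst.mean(axis=0)
mean_clean = np.nanmean(lst_clean, axis=0)
```

Without the masking step, cold cloud tops depress the apparent land surface temperature, which is exactly the kind of contamination that can produce spurious "anomalies" of the sort discussed in the video.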

Your task

Matthew's work used three different techniques for creating averages for comparison.

Can you identify the differences between these approaches and why they may come up with different results?

Do you think it is important to publish detailed descriptions of your methodology alongside any results that you obtain?


Filizzola, C., Pergola, N., Pietrapertosa, C., and Tramutoli, V. (2004), Robust satellite techniques for seismically active areas monitoring: a sensitivity analysis on September 7, 1999 Athens's earthquake, Phys. Chem. Earth, 29, 517–527.
