We’ve talked about the sources of big data, and in Step 1.4 we touched on the minute-by-minute big data demands of weather forecasting: gathering observations and producing forecasts. A technique that is fundamental to weather forecasting, and is becoming widespread in many other areas of environmental modelling, is data assimilation.
Data assimilation improves predictions from large and complex forecasting models by combining uncertain model predictions with a diverse set of observational data in a dynamic feedback loop. It is a formidable big data problem: massive datasets must be integrated efficiently and accurately, and this is limited by available computing power. As the animation describes, the aim of data assimilation is to combine real-world observations with a numerical model; this can’t be done in one step but requires iterations of small steps. The more powerful the computer, the faster the steps can be completed and the more complete the convergence of observations and model data. For routine weather forecasts, produced several times per day, supercomputers are required both to assimilate around 10 million observations and to run a new forecast for the coming days; the data assimilation and forecast run take just a couple of hours.
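The cycle described above can be sketched in miniature. This is a toy illustration only, not the method used in operational forecasting: a simple scalar “model” produces a forecast, and each cycle blends that forecast with a noisy observation, weighting each by its uncertainty (a basic Kalman-style update). All numbers and the stand-in model are invented for illustration.

```python
def assimilate(forecast, forecast_var, obs, obs_var):
    """Blend a model forecast with an observation.

    The gain weights the observation more heavily when the
    forecast is the more uncertain of the two.
    """
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1 - gain) * forecast_var
    return analysis, analysis_var


def model_step(state):
    """A stand-in forecast model: relax the state towards 15.0."""
    return state + 0.5 * (15.0 - state)


# Run a few assimilation cycles against fixed 'observations'.
state, var = 10.0, 4.0            # initial model state and its variance
observations = [14.2, 14.8, 15.1, 15.0]
obs_var = 1.0                     # assumed observation error variance

for obs in observations:
    state = model_step(state)     # forecast step
    var += 0.5                    # model error inflates the uncertainty
    state, var = assimilate(state, var, obs, obs_var)  # analysis step

print(round(state, 2))            # the analysis converges towards the observations
```

With each cycle the analysis moves closer to the observed values and its uncertainty shrinks; operational systems do the same thing, but over millions of observations and model variables, which is why supercomputers are needed.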
Find out more about a University of Reading project, DARE, which will produce a step-change in the skill of forecasts of urban natural hazards such as floods, snow, ice and heat stress. For a broader perspective on data assimilation research and training opportunities, visit the Data Assimilation Research Centre (DARC) pages.