Climate and Weather – Can we now detect climate change in the daily weather?
The average temperature on Earth is rising. There is no longer any serious dispute about this; satellite measurements have shown it all too clearly for many years. The steady increase in atmospheric CO2 concentration, currently about half a percent per year, is equally undisputed; it corresponds to the roughly 36 gigatons of CO2 that global industrial society emits annually. A connection between the two is obvious: it is consistent both with paleontological time series and with the most common (and simplest) climate models, popularly summarized under the term "greenhouse effect". The latter describes the warming of the Earth that arises because sunlight (electromagnetic radiation between approx. 400 and 750 nanometres in wavelength) passes through the atmosphere, while the infrared radiation re-emitted by the Earth is absorbed by CO2 (and, in smaller quantities, by other gases such as methane, CH4, nitrous oxide, N2O, and ozone, O3) instead of escaping into space. The "natural greenhouse effect", i.e. the effect before the industrial emission of CO2 into the atmosphere began, ensures that the average temperature on Earth, about 15 degrees Celsius, is 33 degrees higher than it would be from a simple physical radiation equilibrium without the atmosphere (Stefan-Boltzmann law: every body emits radiation, and the warmer it is, the more; the Earth therefore settles at exactly the temperature at which the incident solar radiation and the thermal radiation emitted by the Earth balance each other out). The man-made greenhouse effect caused by CO2 emissions makes up "only" about 1.2% of the total terrestrial greenhouse effect; other emitted greenhouse gases add a further 1%. This sounds like little, but it corresponds almost exactly to the measured 0.7-degree increase compared to the pre-industrial level (2.2% of 33 degrees).
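The radiation-equilibrium argument can be checked with a few lines of arithmetic. The following sketch uses standard textbook values for the solar constant and the Earth's albedo (these figures are assumptions, not taken from the article) and reproduces the numbers quoted above:

```python
# Standard textbook values (assumptions, not from the article)
S = 1361.0          # solar constant in W/m^2
ALBEDO = 0.3        # fraction of sunlight reflected straight back to space
SIGMA = 5.670e-8    # Stefan-Boltzmann constant in W/(m^2 K^4)

# Radiation equilibrium: absorbed sunlight = emitted thermal radiation,
#   S/4 * (1 - albedo) = sigma * T^4
# (the factor 4 is the ratio of the Earth's surface area to the
# cross-section it presents to the sun)
T_eq = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"Equilibrium temperature without atmosphere: {T_eq - 273.15:.1f} C")

# The observed mean surface temperature is about 15 C; the difference is
# the natural greenhouse effect of roughly 33 degrees quoted in the text.
greenhouse = 15.0 - (T_eq - 273.15)
print(f"Natural greenhouse effect: {greenhouse:.1f} degrees")

# 2.2% of 33 degrees, the anthropogenic share mentioned in the text
print(f"Anthropogenic share: {0.022 * 33:.2f} degrees")
```

The equilibrium temperature comes out at roughly minus 18 degrees Celsius, and 2.2% of 33 degrees is indeed close to the 0.7-degree rise cited in the text.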
So far, so good. Unfortunately, this simple climate model is just that: (too) simple. In fact, there are many other components in the dynamics of the global climate that could counteract the "obvious" explanation:
- the variations in solar activity and solar wind strength,
- clouds (water vapour is the most important greenhouse gas, far more powerful than CO2, but unlike CO2 its quantity is hardly influenced directly by humans),
- the gas exchange between ocean and atmosphere,
- ocean currents (North Atlantic Oscillation, El Niño-Southern Oscillation),
- the proportion of ice on the Earth's surface (the ice-albedo effect: as glaciers thaw and the polar caps melt, the Earth's ability to reflect radiation back into space, its so-called albedo, decreases, which warms the Earth even more),
- volcanic eruptions (with strong gas releases).
To make matters worse, the climate system contains important feedback mechanisms and self-reinforcing processes, i.e. there are usually no simple (linear) relationships in it. One example is the ice-albedo feedback above; another is that a global temperature increase both raises the water vapour content of the atmosphere and releases methane frozen in Arctic permafrost, and both of these intensify the greenhouse effect and drive temperatures up even further. It is effects like these that give rise to the fear of so-called "tipping points". Significant parts of these relationships are not yet sufficiently understood by climate researchers. In our planet's past, for example, there were repeated climate changes in which CO2 was only one of many influencing factors and in some cases had hardly any causal effect at all. A simple (linear) relationship between atmospheric greenhouse gas content and temperature does not seem to exist (although the correlation has held up fairly well empirically over the last 150 years). We therefore need even better models to understand the global climate and the politically so hotly debated climate change.
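A toy model illustrates how such a feedback produces non-linear behaviour. The following zero-dimensional energy-balance sketch (all parameter values are illustrative assumptions, not figures from the article) couples the albedo to the temperature and finds multiple equilibrium states, which is the germ of a tipping point:

```python
# Toy zero-dimensional energy-balance model with an ice-albedo feedback.
# All parameter values are illustrative assumptions, not from the article.
import math

S = 1361.0        # solar constant, W/m^2
SIGMA = 5.670e-8  # Stefan-Boltzmann constant
EPS = 0.61        # effective emissivity (crudely stands in for greenhouse gases)

def albedo(T):
    """Albedo rises smoothly from 0.3 (ice-free) to 0.62 (ice-covered)
    as the temperature T (in kelvin) drops through ~268 K."""
    return 0.46 - 0.16 * math.tanh((T - 268.0) / 5.0)

def net_flux(T):
    """Absorbed sunlight minus emitted thermal radiation (W/m^2)."""
    return S / 4.0 * (1.0 - albedo(T)) - EPS * SIGMA * T ** 4

def equilibria(lo=220.0, hi=310.0, step=1.0):
    """Find all temperatures where net_flux crosses zero, by bisection."""
    roots = []
    T = lo
    while T < hi:
        if net_flux(T) * net_flux(T + step) < 0:
            a, b = T, T + step
            for _ in range(60):
                mid = (a + b) / 2.0
                if net_flux(a) * net_flux(mid) <= 0:
                    b = mid
                else:
                    a = mid
            roots.append((a + b) / 2.0)
        T += step
    return roots

# Three equilibria: a cold "snowball" state, an unstable middle state,
# and a warm state near the observed ~288 K.
print([round(T, 1) for T in equilibria()])
```

Because of the feedback, the same external conditions admit more than one stable climate, and a modest push can tip the system from one state to another; that is exactly the kind of behaviour a purely linear model cannot show.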
Let's take a closer look at these models. There are three basic types:
- Simple "conceptual climate models" for basic studies of the climate system (like the greenhouse model).
- Earth System Models of Intermediate Complexity (EMIC) for the (rather heuristic) study of climate change over longer periods in the past or the prediction of long-term climate change in the future.
- The "complex models" for global circulation (General Circulation Models, GCM, or "Earth System Models", ESM). These are further developments of numerical weather forecasting and correspond to mathematical models of the circulation in the planetary atmosphere and/or the oceans. They use the Navier-Stokes equations of general fluid dynamics on a rotating sphere, with corresponding thermodynamic variables for different energy sources (such as radiation and latent heat). These very complex equations are then solved by numerical algorithms on a grid covering the Earth, which requires the most powerful supercomputers available. An important characteristic of GCM and ESM models is the size of their grid, i.e. their spatial and temporal resolution. In today's best models the spatial resolution is around 50-100 km.
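To get a feeling for what the grid size means computationally, a back-of-the-envelope calculation helps (the resolutions and the cost-scaling rule below are generic estimates, not figures from the article):

```python
import math

R_EARTH_KM = 6371.0
surface_km2 = 4 * math.pi * R_EARTH_KM ** 2  # ~5.1e8 km^2

# Number of surface grid cells at a few horizontal resolutions
for res_km in (100, 50, 25):
    n_cells = surface_km2 / res_km ** 2
    print(f"{res_km:>3} km grid: ~{n_cells:,.0f} surface cells")

# Halving the horizontal grid spacing quadruples the number of surface
# cells; since the numerical time step must usually shrink along with
# the cell size (CFL condition), the total cost grows roughly eightfold
# per halving -- one reason model resolution improves so slowly.
```

A 100 km grid already means on the order of 50,000 surface cells, each stacked with many vertical atmosphere and ocean layers and stepped forward in time for simulated centuries, which is why supercomputers are indispensable here.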
The climate models and their parameters are calibrated against historical measurement data from about 1850 to the present. However, calibration alone does not guarantee that these models correctly predict future temperatures or rainfall. They have to prove their robustness and quality "out of sample", i.e. on data that they did not get to see during their calibration.
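The out-of-sample idea can be sketched in a few lines. In the toy example below (purely synthetic data, for illustration only), a simple trend model is calibrated on the early part of a temperature series and then judged on years it has never seen:

```python
import random

random.seed(42)

# Synthetic annual temperature anomalies 1850-2019: a slow warming trend
# plus weather noise (purely illustrative, not real data).
years = list(range(1850, 2020))
temps = [0.005 * (y - 1850) + random.gauss(0.0, 0.15) for y in years]

# Calibrate a least-squares linear trend on 1850-1969 only.
train = [(y, t) for y, t in zip(years, temps) if y < 1970]
n = len(train)
mean_y = sum(y for y, _ in train) / n
mean_t = sum(t for _, t in train) / n
slope = (sum((y - mean_y) * (t - mean_t) for y, t in train)
         / sum((y - mean_y) ** 2 for y, _ in train))
intercept = mean_t - slope * mean_y

# Judge the model on the held-out years 1970-2019 ("out of sample").
held_out = [(y, t) for y, t in zip(years, temps) if y >= 1970]
rmse = (sum((t - (slope * y + intercept)) ** 2 for y, t in held_out)
        / len(held_out)) ** 0.5
print(f"trend: {slope * 100:.2f} deg/century, out-of-sample RMSE: {rmse:.2f}")
```

Real climate-model evaluation is of course far more involved, but the principle is the same: skill only counts on data the model was never tuned to.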
The most powerful and meaningful climate models today are GCM or ESM models, which couple oceanic and atmospheric dynamics. In order to standardize their many different versions and to make their results easier to compare, the "Coupled Model Intercomparison Project" (CMIP) was founded about 25 years ago, an international collaboration established by the Working Group on Coupled Modelling (WGCM) of the World Climate Research Programme (WCRP). The CMIP models are developed in phases; currently the sixth phase ("CMIP6") is in progress (in its most recent report from 2014, "AR5", the Intergovernmental Panel on Climate Change (IPCC) relies heavily on the fifth phase, CMIP5).
However, even the latest climate models are still far from perfect. Climate researchers have made rapid progress in recent years with the help of increasingly powerful computers, but the spatial resolution of the CMIP5 models, for example, is still insufficient to take into account local conditions such as mountains, shorelines, rivers, local cloud formation, etc. In order to capture these, researchers approximate the events within a grid cell with appropriate parameterizations, which of course leaves plenty of room for uncertainty. Furthermore, global cloud formation and the interaction of the atmosphere with the oceans are not yet sufficiently well understood and incorporated into the models. There is also a fundamental problem: in order to test their models, researchers need long time series of climate data (temperature, humidity, etc.), including data that the model has not seen during calibration. And finally, in order to assess the quality of a model, its forecasts must be compared with the events that actually occur. It can therefore take years or even decades for a model to be considered reliable (or unreliable), time that we may not have given the rapidly changing climate.
The most important question that climate models have to answer (besides future projections) is that of the causes of climate change. Are industrial greenhouse gases such as CO2 or CH4 responsible, or are there natural fluctuations in our climate with other causes that have little to do with greenhouse gases (e.g. an as yet unmodelled variability in solar radiation)? This question lies at the heart of the political debate and all future climate discussions. To answer it, climate models contain "external parameters" that explicitly describe the influence of the gases we emit. These models are then compared with models without external parameters, i.e. with only the natural factors not influenced by human activity. The crucial question is therefore whether the models with external parameters describe the actual development of our climate significantly better than those without. As things stand (according to the latest IPCC report), it looks like it: the models with external parameters reach a high statistical significance, much higher than those without, even if they do not yet give us 100% certainty. Moreover, important factors (e.g. the exact influence of clouds and the conditions under which they form) are still insufficiently taken into account. So, on the one hand, the models need to be further improved, which, given the complex and chaotic geophysical dynamics of our planet, is one of the most challenging scientific problems of all (climate scientists are currently working on the CMIP6 series, which adds a whole range of additional factors or models them even more precisely), and on the other hand, we need more data with which to test them.
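The with-versus-without comparison can be mimicked on synthetic data. In the sketch below (all numbers invented for illustration, not the study's actual method), an "observed" series containing a forced trend is explained once by a model that knows the external forcing and once by natural variability alone, and the residual errors are compared:

```python
import math
import random

random.seed(1)

# Synthetic "observations": natural variability (a slow oscillation plus
# weather noise) with an added anthropogenic-style trend on top.
n_years = 150
natural = [0.2 * math.sin(2 * math.pi * i / 60) for i in range(n_years)]
forcing = [0.008 * i for i in range(n_years)]  # external forced trend
observed = [natural[i] + forcing[i] + random.gauss(0, 0.1)
            for i in range(n_years)]

def sse(residuals):
    """Sum of squared residuals."""
    return sum(r * r for r in residuals)

# Model 1: natural factors only (knows the oscillation, not the trend).
sse_natural = sse(observed[i] - natural[i] for i in range(n_years))

# Model 2: natural factors plus the external (anthropogenic) forcing.
sse_forced = sse(observed[i] - natural[i] - forcing[i]
                 for i in range(n_years))

print(f"residual error, natural only:  {sse_natural:.1f}")
print(f"residual error, with forcing:  {sse_forced:.1f}")
# The forced model explains the observations far better -- this is the
# logic behind attributing the observed warming to external forcing.
```

In the real detection-and-attribution studies, the comparison is of course done with full climate models and proper significance tests rather than a toy residual check, but the underlying logic is this one.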
Now, however, a completely new way of testing the informative and prognostic value of the models may have emerged. Up to now, climate researchers have used their models to make statements about the long-term development of the global and regional climate (and then compared those with actually measured values): How does the temperature of the Earth's surface develop on a global average? What are the expected annual rainfall rates in Central Europe over the next years and decades? Short-term weather conditions should not play a role here, as they are subject to too many coincidences, i.e. the concrete data are statistically too variable to be sufficiently meaningful. "Weather is not the same as climate", as climate researchers like to say, or: "Climate is what you expect in the long term, weather is what you get in the short term". In North America, for example, temperatures may reach -37 degrees Celsius in October (as measured in Utah in 2019), whereupon "climate sceptics" take pleasure in asking (often rather foolishly) "where climate change has gone". Climate researchers from ETH Zurich, however, have shown in an inspiring publication that significant climate signals, i.e. the warming trend described by the CMIP models, can also be extracted from daily weather data such as surface air temperature and humidity! This requires a global analysis of the data, though: while regional data on a daily basis fluctuate too much and thus conceal possible signatures of climate change, daily data averaged on a global level do carry the signal. The global daily averages prove to be relatively stable: the extremely cold conditions in North America in October average out well against the unusually hot conditions in Australia at the same time.
If models have correctly described the long-term climate effects on a global and regional basis, they can also be used to describe short-term weather effects on a global basis; this could be the tenor of the study, or as the head of the research group, Reto Knutti, puts it: "Weather on a global level contains important climate information". Specifically, the Zurich researchers applied the so-called "fingerprint" method: First, they used the climate models and their external parameters (specifically the CMIP5 models) to model annual global mean temperatures (AGMT) and a decadal average of the Earth's energy imbalance (EEI). They then trained regression models, based on the same CMIP5 models, to predict the AGMT and EEI values from the spatial patterns of daily surface temperature and/or humidity. The results were "maps of regression coefficients" that relate the global climate values (temperature and energy balance) to globally distributed daily weather data and thus represent a "fingerprint" of the climate models used (this regression allows the noise in the data to be suppressed and the signal to be filtered out more cleanly). These fingerprints can then be compared with the observations. Finally, the researchers compared the results from the models with external factors against those without external factors, i.e. with natural factors only.
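At its core, the fingerprint step is a regression from high-dimensional daily weather maps onto a single global climate metric. The sketch below uses purely synthetic data and a simple per-cell regression followed by a pattern projection, a much simplified stand-in for the study's regularized regression, to show how a stable global signal can be pulled out of very noisy spatial patterns:

```python
import random

random.seed(0)

# Synthetic world: 500 "days" of weather on 100 grid cells. Each day's
# map is a fixed spatial warming pattern scaled by the true global
# signal, buried under strong weather noise. All values are invented.
n_days, n_cells = 500, 100
pattern = [random.gauss(0, 1) for _ in range(n_cells)]
signal = [d / (n_days - 1) for d in range(n_days)]  # rising global metric
X = [[signal[d] * pattern[c] + random.gauss(0, 2.0) for c in range(n_cells)]
     for d in range(n_days)]

# Step 1: learn the "fingerprint" -- the per-cell least-squares slope of
# each cell's daily value against the global signal.
mean_s = sum(signal) / n_days
var_s = sum((s - mean_s) ** 2 for s in signal)
fingerprint = []
for c in range(n_cells):
    cov = sum((signal[d] - mean_s) * X[d][c] for d in range(n_days))
    fingerprint.append(cov / var_s)

# Step 2: project each day's weather map onto the fingerprint to get a
# one-number estimate of the global climate metric for that single day.
norm = sum(f * f for f in fingerprint)
predicted = [sum(X[d][c] * fingerprint[c] for c in range(n_cells)) / norm
             for d in range(n_days)]

# Correlation between the daily predictions and the true global signal:
# the noise largely cancels across cells, so the signal survives.
mean_p = sum(predicted) / n_days
cov_ps = sum((predicted[d] - mean_p) * (signal[d] - mean_s)
             for d in range(n_days))
var_p = sum((p - mean_p) ** 2 for p in predicted)
corr = cov_ps / (var_p * var_s) ** 0.5
print(f"correlation between daily prediction and true signal: {corr:.2f}")
```

The point mirrors the study's: no single grid cell is informative on a given day, but the projection across the whole globe averages the weather noise away and leaves the climate signal visible even in daily data.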
The result was clear: climate change is indeed plainly visible in the daily weather data! According to the study, it has even been detectable on every single day since 2012 (at a 97.5% confidence level). Its signature can be reliably detected with the available global climate models, which underpins their quality. Incidentally, this also holds if the mean global temperature rise is excluded. This is a major step forward in the development of even more meaningful climate models, which will become even more important in the political debate on climate change with the next IPCC report (scheduled for 2022). Although we still do not have 100% scientific certainty that climate change is man-made, the door is gradually closing on alternative, natural explanations. To wait until the confidence levels of the models are even higher before reacting politically would be completely unreasonable. In fact, would it not be more appropriate to act even if the models were to signal the "all-clear" on the effects of anthropogenic changes in our biosphere, but with an uncertainty comparable to the one with which the models show us today that man-made climate change is taking place? It looks as if at least this question will soon no longer need to be asked.

Sippel, S., Meinshausen, N., Fischer, E. M., et al.: Climate change now detectable from any single day of weather at global scale. Nat. Clim. Chang. 10, 35–41 (2020). doi:10.1038/s41558-019-0666-7