The learning curve has long been used to predict the rate of progress in many fields. Applied to climate, the results are startling: they suggest it will be many centuries before we can predict even the yearly climate as well as we can predict today's weather.
Note: this was written after the Royal Society meeting on climate, and any references to “speakers” or similar refer to this meeting.
Scales of Variability
As fig 1 shows, variation in global mean temperature in the instrumental record increases rapidly as longer periods are considered, approximately as follows:
For any change seen over a period of one decade, much larger changes are expected over longer periods of centuries, but much smaller changes are expected at the year-to-year scale.
Whilst the cause of this change in scale is not discernible from the graph, it does suggest an upper limit to natural variation: one that increases strongly as we approach the century-to-century forecasting scale, and falls away strongly at shorter ranges. As Prof Palmer highlighted, Lorenz makes clear that “each scale of motion possesses an intrinsic finite range of predictability”, and the knowledge and skill drawn from day-to-day or even month-to-month forecasting may reflect entirely different physical phenomena from those present at the decade-to-decade and century-to-century scale. There is therefore little rational basis, given these different scales of behaviour, to suppose that lessons learnt modelling the physical processes that affect day-to-day changes in weather will provide much insight into the longer-term processes affecting climate. Climate models must be based on data at the appropriate scale: month-to-month models on month-to-month changes, year-to-year models on year-to-year changes, decade-to-decade models on decade-to-decade changes and century-to-century models on century-to-century changes.
The Learning Curve
Several authors (Guyon 1997, Cortes et al 1993) have proposed and justified theoretical and experimental learning curves of the form:

ε(l) ≈ (h/l)^λ

where l is the number of training examples and λ satisfies 0.5 ≤ λ ≤ 1. The complexity parameter h can be determined experimentally by curve fitting. This gives rise to a curve similar to that shown in fig 2: an initial phase of high error but rapid improvement, followed by a period of lower error where progress comes less easily, with an ultimate limit to improvement set by some constraint. These constraints may appear fixed, but they are often themselves subject to improvement, and technological change, such as the development of computers, can allow a fundamental shift, resulting in a new learning curve with a new long-term constraint.
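As a sketch of the curve-fitting step, the exponent λ and complexity parameter h can be recovered from error data by a straight-line fit in log space. The power-law form, the synthetic data and all parameter values below are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# Assumed learning-curve model: error = (h / l)**lam, with 0.5 <= lam <= 1.
# h_true and lam_true are made-up values used only to generate test data.
h_true, lam_true = 100.0, 0.7
l = np.logspace(2, 5, 20)              # number of training examples / trials
err = (h_true / l) ** lam_true         # "observed" error at each l

# In log space the model is a straight line:
#   log(err) = lam*log(h) - lam*log(l)
# so a linear fit recovers lam from the slope and h from the intercept.
slope, intercept = np.polyfit(np.log(l), np.log(err), 1)
lam_fit = -slope
h_fit = float(np.exp(intercept / lam_fit))
print(round(lam_fit, 3), round(h_fit, 1))
```

With noiseless synthetic data the fit recovers the assumed λ and h almost exactly; with real forecast-skill data the same procedure gives the best-fit parameters.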
The present state of forecasting
Whilst there was no explicit statement of the current state, table 1 broadly encompasses the statements made by the various speakers. Based on table 1, and on what we know of the learning curve, namely that it takes a similar number of trials to raise a similar forecast to the same level of performance, it is possible to make a prediction of the likely time to reach a particular skill level.
| | ≤ Daily | Weekly | Monthly | Seasonal | Yearly | ≥ Decadal |
| --- | --- | --- | --- | --- | --- | --- |
| Geographical focus | Local weather | Weather systems | → | Region | → | Global? |
| Testing | Well tested | Well tested | → | Tested | Some testing | < 3 trials |
| Utility | High for all users | Public: low; professional: medium | → | None | Some for specialist users? | Becoming useful for specialists? |
| Current state | Tested weather predictions | → | Frontier | → | Climate is difficult | → |
Table 1: The present state of forecasting
Numerical forecasting has been in use since the 1970s (Lynch 2008); however, even if we use the much shorter period since ensemble forecasting came into use around 1990 (Molteni et al 1996), we still find that the time to reach a particular skill level, as shown on the left-hand axis of fig 3, is exceedingly long.
Based on this graph, it will take till 2235 for the yearly forecast to be as good as the current monthly forecast, some 2,400 years for the decadal forecast, and a massive 24,000 years for the century forecast to come down the learning curve to the standard of today's monthly forecast. One is tempted to suggest it might be easier to invent time travel than to provide an accurate forecast for the next century, but even that would have its own learning curve. And all this rests on the assumption that the basic methodology is the same and that the same general laws of physics apply, only at a different scale: the very reasons often given for trusting forecasts that have never been verified into the next century. In reality, our understanding of the nature of learning that underpins the learning curve strongly suggests that if we approach long-term climatic forecasts in the same way as short-term forecasts, then far from being more certain of the answer, we can be almost certain that we cannot predict with any reliability over these time-scales. Indeed, it is only if we fundamentally change our approach that we have any reason to suppose we can do better than the limit to the standard numerical computational approach implied by the learning curve.
Back to Science
Numerical modelling is used in economics, politics, marketing etc. So whilst it is a useful tool for scientists, it is not in itself science (Chiara 1996 p.217). Numerical modelling is not a replacement for the hundreds of years of learning embodied in institutions like the Royal Society. Just as probabilistic weather forecasting has moved away from the “mental models” of frontal systems that were once so key in communicating weather and its uncertainty, so climate predictions are now largely projections of past trends, without the detailed understanding or verification that is the necessary bedrock of science.
There is a strong scientific basis to suggest that CO2 is a greenhouse warming gas and that doubling the level of CO2 in the atmosphere will lead to greenhouse warming of around 1°C (Curry 2010, Rahmstorf 2008), with others suggesting a range of 0.62°C (Harde 2011) to 1.2°C (Bony et al 2006). Even if the exact figure is still uncertain, we can be confident in this warming because it is based on verified empirical measurements resting on hundreds of years of scientific knowledge. In contrast, there are suggestions of various “feedbacks” derived not deterministically from empirical science but by inference from numerical models; whilst these suggest larger warming of up to 6°C, they have little empirical scientific basis and are far from scientifically validated (Collins et al 2006). Indeed there is strong evidence to the contrary (Spencer & Braswell 2011, Lindzen & Choi 2011, Allan 2011, Asten 2012). Given the known learning curve for numerical modelling of weather and climate, such speculations are entirely unfit as a basis for policy decision-making.
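For illustration, the roughly 1°C no-feedback figure can be reproduced from two standard relations that the post does not spell out: the logarithmic CO2 forcing fit of about 5.35·ln(C/C₀) W/m² and a Planck response of about 3.2 W/m² per °C. Both values are my assumptions here, drawn from the mainstream literature rather than from this post:

```python
import math

def no_feedback_warming(co2_ratio, planck_response=3.2):
    """No-feedback warming (K) for a given ratio of CO2 concentrations.

    Uses the standard logarithmic forcing fit dF = 5.35*ln(C/C0) W/m^2
    and divides by an assumed Planck response of ~3.2 W/m^2 per kelvin.
    """
    forcing = 5.35 * math.log(co2_ratio)   # radiative forcing, W/m^2
    return forcing / planck_response       # temperature change, K

dT = no_feedback_warming(2.0)  # a doubling of CO2
print(round(dT, 2))            # ~1.16 K, consistent with the ~1 C figure
```

The result sits inside the 0.62–1.2°C range quoted in the text; the feedback-amplified figures of up to 6°C come from model inference, not from this direct calculation.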
However, whilst there is agreement on the 1°C rise even amongst climate “sceptics” (Haseler 2010b), that is not to say we can claim “climate change is limited to 1°C”, because we know from thousands of years of proxy climate records that the climate is inherently changeable (Stine 1996), and indeed the only real certainty is that climate will change whatever we do. So, if we set aside the contentious political argument over causality, we find a great deal of agreement: all knowledgeable commentators will accept that there is a real possibility of significant climate change over the next century. There is no learning curve attached to such an assertion; or, to be more accurate, unlike numerically based predictions, which require hundreds of iterations and so will take thousands of years to become useful, we are now so far down the learning curve of empirical science, thanks to institutions like the Royal Society, that we can be very confident in the accuracy of this prediction, and indeed reasonably confident of any empirical predictions, although we should not discount suggestions that better understanding will change even the 1°C figure.
Appendix – Predicted time to reach level of skill for various types of forecasts
The following table suggests the time it will take to reach a level of skill equivalent to the current daily, weekly and monthly forecasts (high, medium and low skill respectively). So, for example, based on the learning curve, it will take until 2132 to achieve the same number of prediction-forecast-appraisal cycles for the weekly forecast as the current daily one has had. Given the same learning curve, and a similar mix of modelling and numerical forecasting, this strongly suggests 2132 is the best estimate of when the weekly forecast will be as good as the current daily forecast. Likewise it will take till 2235 till the yearly forecast is as good as the current monthly forecast. In the extreme, this suggests it will be 730,000AD before we can confidently predict the climate with the same skill as the current daily forecast.
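The trial-counting arithmetic behind such estimates can be sketched as follows. The start year (1990, when ensemble forecasting began), the “now” year and the one-cycle-per-forecast-period assumption are my own illustrative choices, so the dates come out in the same order of magnitude as, but not identical to, those quoted in the text:

```python
START, NOW = 1990, 2014  # assumed start of ensemble forecasting and "now"

# Assumed length of one prediction-forecast-appraisal cycle, in years.
CYCLE_YEARS = {"daily": 1 / 365, "weekly": 7 / 365, "monthly": 1 / 12,
               "yearly": 1, "decadal": 10, "century": 100}

def year_of_parity(target, benchmark):
    """Year by which `target` forecasts will have completed as many cycles
    as `benchmark` forecasts accumulated between START and NOW."""
    trials = (NOW - START) / CYCLE_YEARS[benchmark]
    return START + trials * CYCLE_YEARS[target]

print(round(year_of_parity("yearly", "monthly")))   # same order as ~2235
print(round(year_of_parity("century", "monthly")))  # tens of thousands of years out
```

The exact dates are very sensitive to the assumed start year and cycle lengths, which is why they differ somewhat from the table; the orders of magnitude are what matter.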
| Skill level of forecast → | High (current daily) | Medium (current weekly) | Low (current monthly) |
| --- | --- | --- | --- |
| Weekly | 2132 | | |
| Yearly | | | 2235 |
| Century | 730,000AD | | |
Table 2: Date by which forecast (rows) might reach stated skill level (columns). All figures except those in bold are approximate.
Note: This is an excerpt from my paper: Climate changes: the importance of supra-national institutions in nurturing the paradigm shifts of scientific development. Any references can be found in this paper.
The result is very sensitive to even slight variations in the assumptions and current figures, but it is the best we’ve got.
Could a similar methodology estimate when, and at what level, we will reach the top of the S-curve for Moore’s Law, the rise in world energy use/GDP, human record speed, battery capacity, population etc., as measurements of human progress? I suspect we are at the foot of the curve, except for population and maybe Moore’s Law. I recognise it is difficult or impossible to tell whether a new S-curve will come in at the top of the current one, though it is more important to have a minimum estimate for when progress will stop than for how high it might go.
As you say, it is all we’ve got: you start from a pragmatic assessment of what has already been achieved and you predict forward from that. Instead they start by saying: “although we’ve never been able to forecast beyond one month – we know we can predict the climate beyond that.” In other words, they extrapolate from the unknown into the unknown.
I think fundamentally the learning curve is really a statement of how fast we can find out about the natural variations of this world. Big changes – we spot very quickly and so we can rapidly work out what the big things are – sun, moon etc. But smaller more subtle changes take time to spot and longer to work out their pattern. So with any human endeavour, we will tend to make rapid progress at first, and then as the necessary changes for improvement become more and more subtle, we take longer and longer to work out how we need to change to improve.
And guess what: the only area where I’ve failed to see any understanding that its rate of progress is limited by the learning curve is academia!!
“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
IPCC Working Group I: The Scientific Basis, Third Assessment Report (TAR), Chapter 14 (final para., section 14.2.2.2), p774.
There is no way that a coupled non-linear chaotic system driven by an unknown number of feedbacks – and even for the ones we do know, in some cases we don’t know the sign – is ever going to be amenable to prediction any significant distance into the future.
Even Moore’s Law isn’t going to fix that.
Then there’s that pesky butterfly, of course.
Ironically, the first man to point this out was Ed Lorenz – and we all know what he did for a living.