Natural Variation

Of all the many issues that confuse climate researchers, natural variation, or as they mistakenly call it, "error", is one of the most important. Indeed, the very word "error", from the Latin errare, "to stray", shows that they have a black-and-white view in which there is a (believed) "truth" and an "error" that strays from it, which is quite unscientific.

Heat

To explain the problem with the concept of “truth” and “error”, it is best to start by using a simple analogy: When is heat not heat?

Most of us understand heat as the random movement of atoms or molecules. So, when is this random movement not heat? Let us suppose that we take a single atom with the kinetic energy equivalent of 1 K (or some suitably low temperature) and we arrange for it to fly into a hot gas (say 1000 K) such that it strikes one atom, loses all its momentum, and therefore has no kinetic energy and an equivalent temperature of 0 K. What is also clear is that if this is "heat", then a colder body at 1 K has warmed a hotter body at 1000 K.

The atom at 1 K was moving in a way that was indistinguishable, by observation, from an atom in a gas at 1 K, and yet this cooler "body" warmed a hotter body, in contravention of the laws of thermodynamics. How?

The reason is that although the atom was indistinguishable by observation from an atom in an ensemble at 1 K, it was distinguishable by definition, because we had defined the system in a way that was not an ensemble of random atoms. Heat is not a physical property of a system; it is a property determined by the system definition, and that definition must describe the system in such a way that we do not know the individual energies (or at least we apply our accounting to the whole ensemble). For "work" and "heat" are not different forms of energy. Rather, "work" is energy which can be quantified completely, whereas "heat" is energy which is randomised in a way that makes its specific physical form or distribution unknown (or treated as unknown).

To use another example: IR energy is often referred to as "heat", because we often experience the "heat" of the sun via IR. So in layman's terms "heat" can refer to IR, but from the point of view of physics, IR is no more heat than kinetic energy is heat. Yes, in a hot gas, heat is present as both kinetic and IR energy, moving both within the gas and back and forth to any container, so the IR within the gas is "heat". But if we expose the gas to a non-randomised energy source from outside in the form of IR, then whilst the layman might call this "heat", and whilst it may be indistinguishable from the internal IR, in thermodynamic terms the applied energy is not heat but "work". Likewise, if we open a window from the ensemble to the outside, then whilst the energy comes from heat, in thermodynamic terms it is work being done by the heat on the environment.

Natural Variation

Natural variation and heat are similar concepts, in that they do not exist as physical entities, but exist by virtue of the definition we apply. To see what this means, let us take a simple system in which we measure the height of the sea.

Let us suppose that the height we measure at a point in time is 3 m (above a convenient datum). Now let us suppose we take another measurement 1 hour later and the height is 3.5 m. If we use a simple model of our system which says that sea level is constant, then the "natural variation" is 0.5 m. In other words, variation has perturbed the system by 0.5 m that is not accounted for by the model. This is NOT an error: we can reasonably say that any error in measurement is much smaller than this. Instead, it is a discrepancy between our model of the system and the real system, which includes variations that are naturally present but absent from our model.

However, suppose we used a more complex model, one which includes tides, and suppose that this tidal model said that 1 hour later the height should be 4 m. Now, with a reading of 3.5 m, the natural variation is not 0.5 m but -0.5 m. To say "natural variation" is simply what we don't know, or even worse "an error", is patently false, because anyone who has ever seen the sea has a fairly good idea what is likely to make the sea, precisely one hour after the first reading, slightly different from our tidal model. The answer is waves. We know they exist, but unlike the regular rise and fall of the tides, the exact height of the sea surface 1 hour ahead would be almost impossible to know (unless we were considering something like a tsunami, and even then not exactly).
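The point can be made concrete with a few lines of arithmetic. This is only a sketch using the figures from the example above; the two "models" are deliberately trivial:

```python
# Natural variation as the residual between a measurement and a model.
# Same sea, same reading: only the model changes, and with it the "variation".

measurement_t0 = 3.0   # sea height at the first reading (m above datum)
measurement_t1 = 3.5   # sea height 1 hour later (m)

# Model 1: sea level is constant, so the prediction for t1 is the t0 reading.
constant_model_prediction = measurement_t0
natural_variation_constant = measurement_t1 - constant_model_prediction
print(natural_variation_constant)   # 0.5 m unexplained by the constant model

# Model 2: a tidal model that predicts 4.0 m at t1.
tidal_model_prediction = 4.0
natural_variation_tidal = measurement_t1 - tidal_model_prediction
print(natural_variation_tidal)      # -0.5 m: a different model gives a different "variation"
```

Neither residual is an "error" in the reading; each is simply what the chosen model leaves unexplained.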

However, suppose that instead of 1 hour we chose a timescale of 1-10 seconds. Then, given the pseudo-regular behaviour of waves, we would have a fairly good chance of a reasonable model of the height of the sea surface at a particular location. As such, waves exist in an in-between world: in some circumstances (short periods) they can be modelled in a fairly precise, deterministic way, but as the time horizon increases, the ability to predict the precise height of the water disappears, even though we still know the amplitude of the variation.
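This growth of prediction error with lead time can be sketched with a toy model, assuming (purely for illustration, not as an actual wave model) that waves behave like a sine wave whose phase slowly drifts at random:

```python
import math
import random

random.seed(1)

OMEGA = 2 * math.pi / 8.0   # nominal wave period of 8 s (assumed)
PHASE_DRIFT = 0.1           # random phase drift per second (assumed)

def prediction_error(lead_time, trials=500):
    """Mean squared error of predicting sin(phase) lead_time seconds ahead,
    assuming perfect regularity, when the true phase drifts at random."""
    total = 0.0
    for _ in range(trials):
        phase = random.uniform(0, 2 * math.pi)
        predicted = math.sin(phase + OMEGA * lead_time)  # deterministic forecast
        actual_phase = phase
        for _ in range(int(lead_time)):
            actual_phase += OMEGA + random.gauss(0, PHASE_DRIFT)
        total += (math.sin(actual_phase) - predicted) ** 2
    return total / trials

short = prediction_error(5)     # a few seconds ahead: phase has barely drifted
long_ = prediction_error(200)   # minutes ahead: phase is close to random
print(short, long_)
```

A few seconds ahead the forecast is close; minutes ahead only the amplitude of the variation remains known, which is exactly the in-between behaviour described above.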

And by analogy, there are many other forms of variation, from the atmospheric swell that occurs when low pressure causes the sea to rise, to land uplift or subsidence, to instrumentation noise, all of which can be modelled to a greater or lesser extent, so that depending on how complex our model of the system becomes, the "natural variation" can be reduced until it is almost negligible.

So, natural variation certainly is not noise, and it certainly cannot be "averaged out". For part of the natural variation in sea-level height is the long-term change in sea level due to the rebound from the last ice age. And over longer periods we might also include tectonic plate movement, etc.

Natural variation is not an entity and it is not an error; it is instead the expected variation that occurs because our model of the system will not perfectly match nature.

Climatic Natural Variation

Like sea level, temperature changes for a host of different reasons, some easy to model, some impossible to model and some that can be modelled over limited periods.

The obvious changes, which are relatively easy to model, are the change from day to night and from summer to winter; these follow relatively predictable behaviour. The changes that are difficult to model are those over very extensive time periods, such as the way the arrangement of the continents has changed the behaviour of the climate in a way that cannot be understood or tested (we only have one earth), or indeed solar changes (again due to a lack of understanding of long-term behaviour). In between are various pseudo-modellable behaviours, from weather fronts, which can be modelled over a period of days or even weeks, to oceanic cycles like El Nino, which have known effects but cannot be predicted with any certainty even within a single cycle.

But again, what is considered "natural variation" depends on our model. We may, for example, talk in terms of the effect on geology of the "natural variation" in temperature. In this context it does not matter whether the variation is day to night, summer to winter, or frosty day to cloudy: the rock does not care what causes the temperature change, only that there is change. But when predicting the weather for the next day, we model the behaviour of time of day, time of year and the effect of fronts and air movements. Now "natural variation" comprises the parts of the climate that our weather models cannot or do not include. In part these are complexities that escalate like the butterfly effect; in part they are instrumentation error, or variation that exists because the discrete nature of the weather stations means they cannot measure the exact position of fronts, etc.

But, like waves on the sea, these weather fronts can be modelled over relatively short periods; over longer periods all we know is that they perturb the temperature about an average, and on any day far in the future we could not hope to predict whether there will or will not be a particular high- or low-pressure area at any particular place. Thus whether these weather systems are considered "natural variation" or not is highly dependent on the time frame. It is the same physics, the same physical process, but depending on what we are trying to model it may or may not be "natural variation".

Likewise El Nino. Again, depending on the time-scale, whilst we know the scale of the effect, we may or may not be able to model even the sign of its effect on temperature at any specific time in the future. However, El Nino is just one of many, many, dare I say an infinite number of, such "cycles" in the climate: cycles that could over some periods be predicted, but over long periods cannot. Thus even if we were to exactly model El Nino, the Atlantic Multidecadal Oscillation and ALL the major ocean perturbations, there would still exist natural variation consisting of those minor perturbations which we have not been able to model. Likewise the effect of the sun, the effect of clouds, the effects of meteors, of animals and humans affecting the climate through changes to vegetation, and of geology in changing sea levels, the height of mountains, volcanoes, etc., etc.

Natural Variation is not noise that can be cancelled

Natural variation is not a "noise" that can be cancelled out by a long series of measurements. The "noise" of hydrogen slowly leaking from our atmosphere does not "average out" by taking more and more measurements, because the "noise" is a trend. Likewise, the effect of the changes that occur over the ice-age cycle cannot be "averaged out" by a lot of readings, at least not within one human lifetime; instead it would require millions of years of data. Likewise, the Atlantic Multidecadal Oscillation takes around a lifetime, and there may be much longer cycles still to be discovered that, even if we averaged over a whole lifetime, would appear within that lifetime only as a trend.
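The distinction between noise that averages out and a trend that does not can be shown with a toy signal. This assumes, purely for illustration, a reading that is a small linear trend plus genuine zero-mean noise:

```python
import random

random.seed(0)

TREND = 0.01   # assumed trend per reading: a "noise" that is really a trend
NOISE = 1.0    # genuine zero-mean measurement noise (standard deviation)

def reading(t):
    """One measurement: trend plus random noise."""
    return TREND * t + random.gauss(0, NOISE)

# Average two long, consecutive windows of 1000 readings each.
first_window = sum(reading(t) for t in range(0, 1000)) / 1000
second_window = sum(reading(t) for t in range(1000, 2000)) / 1000

print(first_window, second_window)
# The zero-mean noise has largely cancelled within each window, but the two
# averages still differ by roughly TREND * 1000 = 10: the trend did not average out.
```

However many readings are taken, averaging only suppresses the random part; the trend simply reappears in the average itself.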

Instead, natural variation is what we have not (yet?) included in our models. Some of it may be known variation, predictable in both amplitude and exact timing, which has simply not been included. Some may be variation whose scale is known, but not its exact amplitude at any given time (some distance into the future). And some is variation for which neither the scale nor the amplitude is known, but whose effect we can see as a variation that cannot otherwise be explained.

Truth and Error

Often, when climate academics talk about the failure of their models to match the physical world, they use the term "error" to account for the failure. This concept is taken, without consideration, from laboratory science. In laboratory science the intention is to create a controlled experiment whose behaviour can be controlled so that it very closely matches theory. The concept is that the experiment should behave according to theory, except that there is always instrumentation "error" which causes the exact readings to vary from theory. As such the "error" is in the readings and not the theory. In addition, it is usually assumed that instrumentation error can be averaged out (even though long-term drift is always present). The "error" is thus a mistake in what is being measured, which tends to zero with more measurements; it is not a mistake in the model.

In contrast, the climate is not a system that can be modelled with any hope of accuracy. The variation between the model and the real world is not due to a failure of instrumentation (even if part of it is); it is due to natural variation: the discrepancy between the real world and the model. That is, the model of the climate misses out many key factors that prevent it modelling the earth's climate, and so, if the concept of "error" is relevant at all, the error is in the model and not the measurement. What climate academics refer to as "error" thus includes many things that could be modelled: cycles (El Nino), trends (like the 1970s desertification of the Sahara) and one-off events (volcanoes), which in theory could be modelled historically but not into the future. But there are many smaller perturbations which, whilst smaller in scale, are present in such numbers as to cause significant change, and which through their sheer number could never be totally modelled. So even if the known variations were included, there would always exist an "error" in the model.

As important, many of these variations are trends (or at least appear as trends over a human lifetime), so the concept of "averaging out" to remove them does not work. The climate models will therefore always be in error, due both to cycles and to trends. But the instrumentation readings will also have "error". What then is the "truth"?

“Errors” of global temperature

Behind the idea of "global warming" is the concept that there is a "global temperature". There are serious questions about whether there is any meaning to this term, but as the subject has been widely discussed I will not cover it again; I will simply take it that there is a "global temperature". This might be supposed to be the "truth", but how does this "truth" relate to physical measurements? To make that connection, a model has to be constructed of how station temperatures respond to this "global temperature".

But now there is also natural variation between this "model" and the theoretical construct of a "global temperature". Again, the concept of instrument "error" is often used to refer to the believed difference, which presupposes that, like instrumentation noise, the "error" in estimating global temperature can be "averaged out". However, there are huge systematic changes in temperature, such as urbanisation, which cannot be averaged out. And there are trends inserted into the model, such as "time of day" adjustments, which account for almost all the perceived warming.

And here is where the terminology of "error" is particularly confusing. Is the warming trend added onto the global temperature, due to believed changes in the "time of day" of measurements, an error, or is the error in the original data? In the US this adjustment accounts for all the warming since about 1940. Does this mean the original data is "in error", or is the modelled global temperature with this added trend "in error"?

The problem is that "error" implies there is a truth, and this does not work in this situation. To illustrate the problem, what is the "truth" if we look at the probability distribution of radioactive decay? If the average is 14 counts, is a count of 10 "in error"? Is it an error to get only 1 count? It may be highly improbable, but it will happen, and when it does it is not an "error", but instead part of the natural variation.
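Assuming the counts follow a Poisson distribution with a mean of 14 (the standard statistical model for decay counts over a fixed interval), the probability of each outcome can be computed exactly:

```python
import math

MEAN = 14  # average number of counts in the interval

def poisson_pmf(k, mean=MEAN):
    """Probability of observing exactly k counts under a Poisson distribution."""
    return mean ** k * math.exp(-mean) / math.factorial(k)

print(poisson_pmf(14))  # even the most likely count occurs only ~10% of the time
print(poisson_pmf(10))  # noticeably below the mean, yet entirely ordinary
print(poisson_pmf(1))   # highly improbable, but when it happens it is not an "error"
```

Every count from 0 upwards has some probability; none of them is "wrong", and calling the low ones "errors" would mislabel ordinary members of the distribution.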

The problem with "error" is the inference that one thing is in error relative to another which is "true". In contrast, "natural variation" is a concept that only says there is a difference.

This makes it easier to talk about variations. If our model is that all stations respond equally to global temperature, then "natural variation" is a term covering instrumentation error (plus errors in this model). If, however, our model is that all stations respond to global temperature, urban heating and time-of-day changes, then "natural variation" is a term for the variation of the calculated reading from the theoretical concept of "global temperature", which includes instrumentation error, errors in assessing the "adjustments" and error in the model. We do not need to know whether the model or the readings are "in error", because natural variation exists whether or not the readings or the theoretical model are correct.

Predictive models, measurement models and "truth"

In the climate, we have models, which for simplicity we will take to have only one variable (e.g. global temperature). But a predictive model is not the only model. We also have a second model: how global temperature is constructed from instrumental data. In a laboratory experiment on radiation, these would correspond to the prediction that radiation drops as the square of the distance (the predictive model) and the assumption that radiation can be measured by averaging the readings (the measurement model).

However, there is also a third, conceptual model: that of the "true" global temperature. This is why the measurement model can be said to be in "error" with respect to what is believed to be the "true" value of global temperature, even though this "true" value cannot be obtained. In addition, the predictive model is also in "error" with respect to this "true" global temperature.

However, what is this "true" global temperature? The actual temperature of the globe is several thousand degrees, because the bulk of the earth below the crust is very hot. Even if we take the "true" global temperature to be that at the surface, does this mean the air just above the ocean, or the ocean itself? As anyone familiar with wet- and dry-bulb readings will know, the temperature of a moist body is not the same as that of the surrounding air. Even if we take the reading at 10 m above surface level, does this mean 10 m above the tree canopy? Even if we define it to be at the ground, does it mean with or without the radiant effect of the sun? And does it include the heat from human activity? And even if we pin down the definition, how do we cope with the numerous places where there are no readings?

This is why trying to define a "true" global temperature, and then treating anything that fails to match this "true" reading as "error", is not a helpful approach. It implies that there are "errors" from a "true" value which by its nature can never be measured and so can never be proven.

Instead, we can model something that we call "global temperature", and we can attempt to predict the behaviour of this "global temperature". It is not the actual "true" global temperature, but we can at least assess the "natural variation" that exists between the predictive and measurement models. This is something that can be assessed and measured, and so, unlike "error", it is something that is scientifically testable.

This entry was posted in Advanced Greenhouse Theory, Energy, science.