This is really a note to myself regarding Santer, “Separating Signal and Noise in Atmospheric Temperature Changes: The Importance of Timescale”.
If it doesn’t make sense … apologies.
The crux is the question: if you have a signal with noise, how do you tell what is signal and what is noise? And if you don’t know which is which, how could Santer come up with a figure for the ratio of signal to noise?
I’ve managed to get hold of a copy of the Santer paper which Pielke commented on, and wanted to know whether I’m barking up the wrong tree. I looked in vain for the model of the climate noise they were using. Eventually it dawned on me that they weren’t modelling the noise at all. Instead it must go like this:
I want to know the signal to noise of my estimate of the number of apples. First, however, tell me how many apples you have. Right, now my signal to noise is infinite because my estimate is exact. QED, I have a fantastic model which perfectly predicts the number of apples (after it knows how many to predict – and as it’s an exact prediction there is no noise).
Likewise I think this Santer paper is doing the same: they are creating a model to match the temperature signal and then assuming that the noise is whatever mismatch remains. If true, this is – how to put it – an extremely weak notion of “signal to noise”. They are not distinguishing signal from noise; they are simply defining noise to be anything that isn’t, after the fact, part of the fit. That is why the S/N gets better the closer the model matches the whole sample period: it is increasingly easy to get an exact fit to the temperature series over the whole period with their models. In other words, the S/N ratio improves because the model is bound to fit the temperature series it was built on – which is not what the noise model suggests (see diagram above) if you analyse how it behaves over increasing periods.
This simply doesn’t answer the real question, which is: “how much is signal and how much is noise?” Instead it really says: “everything we can’t explain by tweaking the model is noise”, so the signal to noise becomes simply whatever we can’t explain after the fact. This is absurd nonsense.
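The trouble with judging noise by in-sample residuals can be shown with a toy sketch. This is not Santer’s actual method – just a minimal illustration, with made-up numbers, of why residuals from a fit to the whole record flatter the apparent noise level. We generate a pure-noise random walk (no signal at all), fit a straight line to the first half, and compare the “noise” the fit reports on the data it was fitted to against its error on data it never saw:

```python
import numpy as np

rng = np.random.default_rng(42)

def in_vs_out_noise(n=240):
    # One pure-noise random walk: there is no signal anywhere in it.
    walk = np.cumsum(rng.normal(0.0, 0.1, 2 * n))
    t = np.arange(2 * n)
    # Fit a straight line to the first half only (the "known" period).
    slope, icpt = np.polyfit(t[:n], walk[:n], 1)
    fit = slope * t + icpt
    in_sd = np.std(walk[:n] - fit[:n])    # "noise" as judged by the fit itself
    out_sd = np.std(walk[n:] - fit[n:])   # error on data the fit never saw
    return in_sd, out_sd

# Average over many walks so the comparison isn't one lucky draw.
results = np.array([in_vs_out_noise() for _ in range(500)])
print(results[:, 0].mean(), results[:, 1].mean())
```

On average the second figure comes out well above the first: calling the in-sample residual “the noise” systematically understates how noisy pure noise really is, which is exactly the apples problem above.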
The real signal to noise is the deviation between the model and the actual signal. So, for example, we need to take models produced prior to 2001 and compare their “signal”, which was around 0.35C/decade of warming, with the “noise”, which was … around 0.35C/decade less: all the “signal” could be explained as noise. Then, if we back-project this, the potential limit of this trend (given that trends can last centuries, even millennia) is that the noise could be as high as 3.5C/century. This suggests that the actual signal to noise over a century could be as low as 0.25 to 1.
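The arithmetic above can be written out as a back-of-envelope sketch. Note the 0.9 C/century “signal” figure here is my own assumption (roughly the observed twentieth-century warming), used only to show how a ratio in the region of 0.25 to 1 can arise from the numbers in the paragraph:

```python
# Back-of-envelope version of the paragraph above.
model_trend_per_decade = 0.35                              # C/decade, pre-2001 models
possible_noise_per_century = 10 * model_trend_per_decade   # 3.5 C/century
assumed_signal_per_century = 0.9                           # C/century (my assumption)

sn_ratio = assumed_signal_per_century / possible_noise_per_century
print(round(sn_ratio, 2))  # 0.26
```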
In other words, far from proving that we have to wait 30 years to assess climate, what the Santer-type approach shows is that models are so bad that all the signal could easily be noise.
Why didn’t peer review detect this, I wonder?
Warmists denigrate non-believers by pointing to ‘the science’.
‘THE science’ is a subset of ALL science.
Any non-complying science is the ‘noise’ to which you refer.
When you have a paper on the velocity distribution of nuts crushed by large metal hammers, no one asks the obvious question:
Why are you using a sledgehammer to crack a nut?
I think this cuts to the very heart of AGW. Without assuming that the climate would be completely static – perhaps allowing for known events like large volcanic eruptions – it was never going to be possible to test the role of CO2 in causing warming by studying temperature trends, unless and until those trends exceeded some threshold of reasonable climate variability, based on history. Hence the invention of the Hockey Stick Graph!
The root problem is that no institution is going to refuse to research a topic in return for a grant – even if there is no way to do the job honestly!
I’d love to know how widely this infection has spread – just how much of science is delivering meaningless results and covering up the problem as best it can. The very fact that people like Sir Paul Nurse seem willing to participate in this scam suggests that the practice isn’t that uncommon!
My thoughts entirely. I suspect climate “science” is only the tip of the iceberg in terms of the problem in science.