It is truly fascinating to see the political method of addressing the technological issues of calibration and uncertainty analysis applied to the topic of climate modeling as it concerns man-made climate change.
First of all, there is the slow realization that those issues are of real, fundamental concern. Does mankind line up behind theories that are uncalibrated and presented without competent uncertainty analysis? That is a separate issue -- a political issue -- and there would be no political concern at all if the answer weren't at least partly 'not so fast, sparky.' Hence the slow political realization that it is necessary to politically address calibration and uncertainty analysis, even if that doesn't mean addressing those issues in a technological fashion. A political fashion is entirely sufficient for political purposes.
I recently read a paper that purported to address climate model calibration. It was entirely about something else -- model tweaking, that is, adjusting the inputs of a given model to match a particular set of outputs. That isn't calibration, but it was presented as calibration. These politico jackasses twist the truth constantly to try to snow technical illiterates. Model calibration means applying the model, without tweaks, to a range of inputs and verifying that the model outputs reproduce known outputs. Over time it may be necessary to 'tweak' the model in order to consistently realize accurate results, but if that goes on forever, then the model is not really modeling anything; it is just hiding a means of converting an incomplete set of inputs into a desired output via tweaks.
So take, for example, weather modeling. A model is run, predicting tomorrow's temperature fields and so on. Tomorrow arrives, and the model results can be calibrated against ground truth: either the model predicted the outcome accurately, or it did so within certain error bounds. For any given model run, the results can be 'tweaked' -- after the fact -- to reduce those error bounds for that particular outcome. But then there is a new set of inputs, a new tomorrow to predict. Will those same tweaks yield an accurate prediction of that tomorrow? Weather models can be calibrated every single day, and they get better over time, and still the state of the art is on the order of ten days, maybe, with a high degree of uncertainty on the out days. Consider hurricane track predictions; they are presented as 'spaghetti plots' because that is what they look like -- not just different models, but different ensemble runs of the same model. And this is with frequent opportunity for model calibration.
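That daily calibration check is conceptually simple. A minimal sketch, with invented forecast and observation numbers standing in for real temperature fields (the error bound is an assumed acceptance threshold, not a real standard):

```python
import math

# Yesterday's forecast for today, and today's ground truth (made-up numbers).
predicted = [21.0, 19.5, 23.2, 18.8]   # forecast temperatures, deg C
observed = [20.4, 20.1, 22.9, 19.5]    # what actually happened

# Root-mean-square error: how far the model was from reality, on average.
rmse = math.sqrt(
    sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
)

# Calibration criterion: did the model stay within its claimed error bound?
ERROR_BOUND = 1.0   # degrees; an assumed threshold for this sketch
print(rmse, rmse <= ERROR_BOUND)
```

Run every day against ground truth, a record of that RMSE is what tells you whether the model is improving or drifting -- no after-the-fact tweaks involved.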
Now compare with climate models. How are they calibrated even once, much less every single day? Think about that problem. Climate is at least as complex as the weather. At best, we accurately know the inputs (the parameters we claim drive climate) today, and we know the outputs ('the' global temperature) today. If we reach back into time, we can only infer the inputs. The state of the oceans? We have a hard time categorizing the state of the oceans' thermocline distribution right now, today, much less a hundred years ago.
So I ask you to ask yourself: how are climate models calibrated, the way weather models are? The answer is: politically, by writing articles about 'climate model calibration' that are really about 'climate model tweaking' -- forcing a model run to line up with a desired outcome.
And then there is the -science- of uncertainty analysis. It is a science. It is not voodoo. It is not 'a monster' -- as it is described in one political science scholarly article purporting to discuss, read very carefully, "ways to talk about uncertainty" on the topic of climate modeling.
An equation is a model of some process. It may or may not be an accurate model of that process. It may be an incomplete, or partial, model of that process. It might be mathematically consistent yet not accurately reflect reality. It requires calibration, discussed above. But beyond that, it can always be subjected to the science of uncertainty analysis. An equation has inputs on the right side and outputs/results on the left side. A given form of the equation mathematically propagates uncertainties in the inputs into a corresponding uncertainty in the outputs. With many equations, it is possible to mathematically calculate the component uncertainty attributable to each input using the method of partial derivatives. It is then possible to combine the component uncertainties into an aggregate uncertainty and report it either as a 'worst case' uncertainty (just add them all up with the same sign) or as a 'root sum square' (RSS) uncertainty, which is a more probable uncertainty. (If the component uncertainties can all vary in sign and magnitude, it is not likely that they would all happen to have the same sign at the same time, so the worst case is an extreme, not a typical answer. RSS is a more 'typical' uncertainty.)
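The partial-derivative method can be sketched with a toy equation. The equation, its inputs, and the uncertainties below are illustrative only -- they come from no climate model:

```python
import math

# Toy equation: f(x, y) = x*y + x^2
# Analytic partial derivatives: df/dx = y + 2x, df/dy = x
def f(x, y):
    return x * y + x**2

def df_dx(x, y):
    return y + 2 * x

def df_dy(x, y):
    return x

# Nominal inputs and their (assumed) plus/minus uncertainties
x, y = 3.0, 5.0
ux, uy = 0.1, 0.2

# Component uncertainty from each input: |df/dxi| * u_xi
cx = abs(df_dx(x, y)) * ux   # |5 + 6| * 0.1
cy = abs(df_dy(x, y)) * uy   # |3| * 0.2

# Aggregate: worst case (same-sign sum) vs root sum square
worst_case = cx + cy
rss = math.sqrt(cx**2 + cy**2)

print(cx, cy, worst_case, rss)
```

Note that RSS is always less than or equal to the worst case; the gap between the two is exactly the 'all components conspire with the same sign' assumption.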
If the equation is very complex and does not lend itself to easy analysis based on partial derivatives, it is always possible to simply vary the inputs one at a time by plus or minus their uncertainty and note the corresponding change in the output(s). (It is assumed that the equation can be evaluated, which is the whole point.)
A computer model is not a single equation; it is a system of equations -- in the case of the climate modeling debate, one that purports to spit out a number called 'the global temperature' after a lot of data destruction. It is assumed it is possible to evaluate these models. And so it is possible to do the same thing -- vary the inputs one at a time, by plus or minus their uncertainty -- and observe the change in the outputs. So even with complex models, it is possible to report that basic definition of output uncertainty: "We claim to know the uncalibrated model inputs with these uncertainties, plus or minus, which result in a range of model outputs of this value, with this uncertainty defined as RSS uncertainty, or this uncertainty as worst case."
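The one-at-a-time procedure is mechanical enough to sketch. The 'model' below is a trivial stand-in black box; the input names and uncertainties are made up for illustration, and any evaluable model could be substituted:

```python
import math

# Stand-in for a complex model: a dict of inputs in, one output out.
def model(inputs):
    return inputs["a"] * 2.0 + inputs["b"] ** 2 - inputs["c"]

nominal = {"a": 1.0, "b": 3.0, "c": 4.0}
uncertainty = {"a": 0.05, "b": 0.1, "c": 0.5}   # assumed +/- uncertainties

base = model(nominal)

# Vary each input by +/- its uncertainty, one at a time, holding the
# others at nominal, and record the largest resulting swing in the output.
components = {}
for name, u in uncertainty.items():
    swings = []
    for sign in (+1, -1):
        perturbed = dict(nominal)
        perturbed[name] += sign * u
        swings.append(abs(model(perturbed) - base))
    components[name] = max(swings)

worst_case = sum(components.values())
rss = math.sqrt(sum(c**2 for c in components.values()))

print(base, components, worst_case, rss)
```

The cost is two model evaluations per input, which is why this brute-force form works even when the partial derivatives are intractable -- provided the model can be run at all.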
That is still separate from the issue of calibration, but it gives decision makers a means to assess the significance of the results. Unless of course the political goal is to hide the significance of the results.
So how is 'uncertainty analysis' addressed politically? By referring to it as 'a monster' and, instead of simply doing the math -- I just outlined the process -- stirring up and muddying the water by painting the -science- of uncertainty analysis as "Fear, Uncertainty, and Doubt" mongering. Nonsense. It is science. And to claim otherwise is political science, period.
I am not providing links to the "uncertainty monster" paper that goes on for pages about how to 'talk about uncertainty in climate modeling' without ever actually talking about the science of uncertainty analysis, or "tweaking is calibration" papers. They are crap. The field is full of this political science crap. I'm done promoting crap.
(Edited by Fred Bartlett on 6/10, 5:29am)