Rebirth of Reason


Post 0

Tuesday, June 10 - 4:34am

It is truly fascinating to watch the political method of addressing the technical issues of calibration and uncertainty analysis as applied to climate modeling and the question of man-made climate change.

 

First of all, there is the slow realization that those issues are of real, fundamental concern. Does mankind line up behind theories that are uncalibrated and presented without competent uncertainty analysis? That is a separate issue -- a political issue -- and there would be no political concern at all if the answer weren't at least partly 'not so fast, Sparky.' And so comes the slow political realization that it is necessary to politically address calibration and uncertainty analysis, even if that doesn't mean addressing those issues in a technical fashion. A political fashion is entirely sufficient for political purposes.

 

I read a recent paper that purported to address climate model calibration. It was entirely about something else -- model tweaking, which is adjusting the inputs of a given model to match a particular set of outputs. That isn't calibration. But it was presented as calibration. These politico jackasses twist the truth constantly to try to snow technical illiterates. Model calibration means applying the model, without tweaks, to a range of inputs and verifying that the model outputs reproduce known outputs. Over time, it may be necessary to 'tweak' the model in order to consistently realize accurate results, but if that goes on forever, then the model is not really modeling anything; it is just hiding a means of converting an incomplete set of inputs into a desired output via tweaks.

 

So, take for example, weather modeling. A model is run, predicting tomorrow's temperature fields and so on. Tomorrow arrives, and the model results can be calibrated against ground truth. The model either predicted the outcome accurately or it did so within certain error bounds. For any given model run, the results can be 'tweaked' -- after the fact -- to reduce those error bounds for that given outcome, but then there is a new set of inputs, a new tomorrow to predict. Will those same tweaks result in an accurate prediction of tomorrow? Weather models can be calibrated every single day, and get better over time, and still, the state of the art is on the order of 10 days, maybe, with a high degree of uncertainty on the out days. Consider hurricane track predictions; they are presented as 'spaghetti plots' because that is what they look like. Not just different models, but different ensemble runs of the same model. And this is with frequent opportunity for model calibration.
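To make that daily-calibration idea concrete, here is a minimal sketch in Python. The station temperatures are made up for illustration; the point is the shape of the check: compare yesterday's forecast against today's ground truth, and compute the bias and RMSE of the miss.

```python
import math

# Hypothetical example: yesterday's forecast vs. today's observed
# temperatures (deg C) at a handful of stations. Numbers are made up.
predicted = [21.5, 18.2, 25.0, 16.8, 19.9]
observed  = [22.1, 17.5, 24.6, 18.0, 19.7]

errors = [p - o for p, o in zip(predicted, observed)]

# Bias: does the model run systematically hot or cold?
bias = sum(errors) / len(errors)

# RMSE: typical size of the miss, regardless of sign.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"bias = {bias:+.2f} C, rmse = {rmse:.2f} C")
```

Run daily against fresh ground truth, statistics like these are what let a weather model be calibrated -- and improved -- over time.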

 

Now, compare with climate models. How are they calibrated even once, much less every single day? Think about that problem. Climate is at least as complex as the weather. At best, we accurately know the inputs (the parameters that we claim drive climate) today, and the outputs ('the' global temperature) today. If we reach back into time, we can only infer the inputs. The state of the oceans? We have a hard time categorizing the state of the ocean's thermocline distribution right now, today, much less a hundred years ago.

 

So I ask you to ask yourself the question: how are climate models calibrated, like weather models? The answer is: politically, by writing articles about 'climate model calibration' that are really talking about 'climate model tweaking' -- forcing a model run to line up with a desired outcome.

 

And then there is the -science- of uncertainty analysis. It is a science. It is not voodoo. It is not 'a monster' -- as it is described in one political science scholarly article purporting to discuss, read very carefully, "ways to talk about uncertainty" on the topic of climate modeling.

 

An equation is a model of some process. It may or may not be an accurate model of that process. It may be an incomplete, or partial, model of that process. It might be mathematically consistent, and yet not accurately reflect reality. It requires calibration, discussed above. But beyond that, it can always be subjected to the science of uncertainty analysis. An equation has inputs on the right side and outputs/results on the left side. A given form of the equation mathematically propagates uncertainties in the inputs into a corresponding uncertainty in the outputs. With many equations, it is possible to mathematically calculate the component uncertainty attributable to each input using the method of partial derivatives. It is then possible to combine the component uncertainties into an aggregate uncertainty and report it either as a 'worst case' uncertainty (just add them all up with the same sign) or a 'root sum square' (RSS) uncertainty. RSS is the more probable figure: if the component uncertainties can all vary in sign and magnitude, then it is not likely that they would all happen to have the same sign at the same time, so the worst case is an extreme example, not a typical answer. RSS is a more 'typical' uncertainty.
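Here is a minimal sketch of that propagation in Python, using a toy equation q = m * c * dT (heat rate from mass flow, specific heat, and temperature rise). All values and uncertainties are made up for illustration; the mechanics are the point.

```python
import math

# Toy model: q = m * c * dT. Values and +/- uncertainties are made up.
m, u_m   = 2.0, 0.05    # mass flow, kg/s
c, u_c   = 4.18, 0.02   # specific heat, kJ/(kg K)
dT, u_dT = 10.0, 0.5    # temperature rise, K

q = m * c * dT

# Component uncertainties via partial derivatives:
# dq/dm = c*dT, dq/dc = m*dT, dq/d(dT) = m*c
comp = [abs(c * dT) * u_m,
        abs(m * dT) * u_c,
        abs(m * c)  * u_dT]

worst_case = sum(comp)                        # all signs align at once
rss = math.sqrt(sum(u * u for u in comp))     # the more typical figure

print(f"q = {q:.2f} kJ/s, worst case +/- {worst_case:.2f}, RSS +/- {rss:.2f}")
```

Note how the RSS figure comes out smaller than the worst case, exactly because it does not assume every input error lines up with the same sign simultaneously.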

 

If the equation is very complex and does not lend itself to easy analysis based on partial derivatives, it is always possible to just vary the inputs one at a time by plus or minus their uncertainties and note the corresponding change of output(s). (It is assumed it is possible to evaluate the equation, which is the whole point.)
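That one-at-a-time procedure needs nothing but the ability to evaluate the model. A generic sketch in Python (the helper name and the example numbers are mine, for illustration):

```python
import math

def propagate_oat(f, inputs, uncertainties):
    """One-at-a-time propagation: perturb each input by plus/minus its
    uncertainty, take half the output swing as that input's component
    uncertainty, then combine as worst-case and as RSS."""
    base = f(*inputs)
    comps = []
    for i, u in enumerate(uncertainties):
        hi = list(inputs); hi[i] += u
        lo = list(inputs); lo[i] -= u
        comps.append(abs(f(*hi) - f(*lo)) / 2.0)
    worst = sum(comps)                           # all signs align
    rss = math.sqrt(sum(c * c for c in comps))   # more typical
    return base, worst, rss

# Example with a made-up toy 'model': q = m * c * dT
base, worst, rss = propagate_oat(lambda m, c, dT: m * c * dT,
                                 [2.0, 4.18, 10.0], [0.05, 0.02, 0.5])
```

For a model like this, which is linear in each input, the one-at-a-time result matches the partial-derivative method exactly; for strongly nonlinear models it is an approximation, but one that is always available so long as the model can be run.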

 

A computer model is not a single equation; it is a system of equations. In the case of the climate modeling debate, it is one that purports to spit out a number called 'the global temperature' after a lot of data destruction. It is assumed it is possible to evaluate these models. And so, it is possible to do the same thing -- vary the inputs one at a time, by plus or minus uncertainty -- and observe the change in the range of outputs. So even with complex models, it is possible to report that basic definition of output uncertainty. "We claim to know the uncalibrated model inputs with these uncertainties, plus or minus, which result in a range of model outputs of this value with this uncertainty, defined as RSS uncertainty, or this uncertainty as worst case uncertainty."

 

That is still separate from the issue of calibration, but it gives decision makers a means to assess the significance of the results. Unless of course the political goal is to hide the significance of the results.

 

So how is 'uncertainty analysis' addressed politically? By referring to it as 'a monster,' and, instead of simply doing the math -- I just outlined the process -- the acolytes stir up and muddy the waters by painting the -science- of uncertainty analysis as "Fear, Uncertainty, and Doubt" mongering. Nonsense. It is science. And to claim otherwise is political science, period.

 

I am not providing links to the "uncertainty monster" paper that goes on for pages about how to 'talk about uncertainty in climate modeling' without ever actually talking about the science of uncertainty analysis, or "tweaking is calibration" papers. They are crap. The field is full of this political science crap. I'm done promoting crap.

 

regards,

Fred

 

(Edited by Fred Bartlett on 6/10, 5:29am)



Post 1

Tuesday, June 10 - 9:10am

Another point about the science of uncertainty analysis, and why it is such a useful scientific tool.

 

It is often, even usually, the case that there is more than one equation or model available to calculate some entity or property of interest. An example is the 'efficiency' of some process.

 

It is usually modeled as the ratio of the desired benefit to the required cost, measured in similar units (so they can be ratioed).

 

So for example, 'useful energy or work out' ratioed to 'required energy or work in.' When comparing alternative schemes to realize a given desired benefit, we tend to value the scheme that is most efficient/costs us the least to achieve the desired benefit. As long as all actual costs and benefits are accurately modeled, this is generally a useful metric to guide our choices.

 

 

But there is usually more than one method of calculating either 'desired benefit' or 'required cost', especially in terms of energy. For example, one scheme might involve measurement of shaft speed and torque into some machinery, while an alternate scheme might involve measuring the change in state of a fluid going through the same machine, like a compressor (work in) or turbine (work out). The desired metric is 'efficiency,' but there are alternate methods of calculating efficiency, and each method has its own inputs, uncertainty on those inputs, and formulas/equations for calculating efficiency based on those inputs.

 

By applying the science of uncertainty analysis, an intelligent choice can be made between competing schemes to measure 'efficiency.' The scheme with the smallest uncertainty in the outcome is the most desirable. The scheme with the smallest uncertainty depends not only on the propagation of input uncertainties, but on the input uncertainties achievable within budget. The combination of those analyses gives us a reasoned basis to intelligently choose among available schemes to realize the required uncertainty in our measurement. (And in fact, assessing the uncertainties of complex gas properties model code is often a part of those analyses.)
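A minimal sketch of that scheme comparison in Python, with hypothetical instrument uncertainties. For models built purely from products and quotients of first-power factors (as a shaft-power or fluid-power calculation roughly is), the relative input uncertainties combine in RSS:

```python
import math

# For product/quotient models built from first-power measured factors,
# the relative (fractional) input uncertainties combine in RSS.
def rel_rss(*rel_uncertainties):
    return math.sqrt(sum(u * u for u in rel_uncertainties))

# Hypothetical, made-up relative measurement uncertainties:
# Scheme A: shaft power from torque and speed.
scheme_a = rel_rss(0.010, 0.002)   # torque +/-1.0%, speed +/-0.2%
# Scheme B: fluid power from mass flow and enthalpy change.
scheme_b = rel_rss(0.015, 0.020)   # flow +/-1.5%, delta-h +/-2.0%

print(f"scheme A: +/-{scheme_a:.1%}, scheme B: +/-{scheme_b:.1%}")
```

With these (invented) numbers, the shaft measurement wins; with better flow instrumentation the answer could flip -- which is exactly why the analysis, not habit, should drive the choice of scheme.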

 

And so it is with things like competing climate models. Without an assessment of their relative sensitivity to uncertainty, we are lacking an important metric. We still have the problem of calibration; a model can still be wildly inaccurate with a low sensitivity to input uncertainty. But the combination of uncertainty analysis with calibration can point us in the direction of the most reliable, most cost-effective tools to achieve what we are after.

 

Without that analysis, we are just winging it with model code and numbers, which is adequate for political science purposes -- that is, to browbeat technical illiterates with cargo cult science -- but not for any reality independent of a manufactured, agenda-driven reality.

 

With some irony, those who mention 'uncertainty analysis' in the context of this debate are accused by politicos of 'ignoring science.' Totally to be expected in what is really a political debate, where black is white, up is down, and left is right as a matter of course.

 

regards,

Fred



Post 2

Wednesday, June 11 - 12:35am

So are there any actual scientists with integrity ignoring the political crap and developing a working model?  


