One criticism that can be made of any book is that it isn't the book I would have liked it to be. That's not a real fault, but let me describe one of my issues anyway. The book focuses on predictions, including the psychology and accuracy involved in them. That's great. The part he doesn't write about is whether some methodologies are better than others. The fox is better than the hedgehog because he brings more knowledge to the table, but the author mostly doesn't try to describe where a person might have gone wrong when they were making predictions. Was it simply too complex? Or was there something fundamentally wrong with their beliefs about how things work?
Take, for instance, the prediction the Obama administration made about where unemployment would be if the government didn't pass the huge stimulus bill. They presented two numbers, one with the stimulus and one without. In reality, unemployment shot past both predictions, even though the stimulus passed.
Now the book (which I think mentioned this example) would discuss the confidence projected by these predictions, and how they ended up being wrong. A 'fox' probably would have given a more reserved answer, knowing how complex the prediction is. And that's interesting if we just take the predictions at face value. But what if there's something fundamentally wrong with the tools they used to make the prediction? What if the inaccuracy wasn't due to a lack of data or an unpredictable future, but instead was due to a problem with their theory (e.g., the assumption that government spending doesn't crowd out private enterprise)?
I'm completely willing to accept the main thesis of the book, that predictions are crazy hard and the best you can do is accept the complexity and try to live with it. Maybe I'm a fox at heart. But I would still be interested in the reasons why a prediction succeeded or failed. Successes are trouble, though. There are so many people making predictions that some are bound to be right, even for the right reasons. He talks about Peter Schiff, for instance. Anyone who saw the "Peter Schiff was right" video will likely be impressed. His reasoning was sound, he was confident under attack, and he called it when nobody else seemed to see it. But the author goes on later to describe many of Schiff's previous predictions over the years, how far off he was, and the fact that he's been essentially making the same points for decades.

When someone makes an accurate prediction, we tend to think there's something there. But when tons of people are doing it... well, broken clocks are right sometimes. So even if someone gets it right, and has sound reasons for it, that doesn't mean they are good at predicting or that the event was really predictable. The 'hit' is self-selecting, and all the people out there who had sound reasons for their predictions but missed don't get factored in. There could have been hundreds or thousands or millions of things that could have changed the outcome, and if one had, we'd be looking at someone else as the genius who latched onto the most important factors.
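To make that selection effect concrete, here's a toy simulation of my own (not from the book, all numbers invented): give a crowd of pundits zero real skill, and some of them will still end up with impressive-looking track records by pure chance.

```python
import random

random.seed(0)

N_PUNDITS = 10_000   # hypothetical number of forecasters
N_YEARS = 10
CRISIS_PROB = 0.1    # assumed chance of a crisis in any given year
CALL_PROB = 0.1      # assumed chance a pundit calls "crisis" in a given year

# One shared history of which years actually had a crisis.
crises = [random.random() < CRISIS_PROB for _ in range(N_YEARS)]

# Each pundit guesses at random -- no skill by construction.
perfect_records = sum(
    1
    for _ in range(N_PUNDITS)
    if [random.random() < CALL_PROB for _ in range(N_YEARS)] == crises
)

print(f"Pundits with a perfect {N_YEARS}-year record by luck alone: {perfect_records}")
```

The exact numbers don't matter; the point is that picking out the winners after the fact tells you very little about whether their method was any good.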
Anyway, as I said, I agree with the author. But for me, the methodology is important. What made the failed predictions fail? Was it simply the difficulty of making predictions when so many things can change the outcome? Or was there something really wrong with their reasoning? In many cases, I think it's the reasoning that's unsound.
One reason I bring this up is that economics makes predictions of sorts. Mainstream economics makes all kinds of concrete claims, like that unemployment will be at such-and-such and GDP will grow by whatever percent. The Austrian approach rejects that methodology. In a way, it is in line with the book: there are too many factors, including free will, and predictions are necessarily highly speculative. The mainstream kind of economics, though, is usually based on a faulty statistical approach. Relationships are measured between variables whether or not a real relationship exists, and predictions are made assuming the measured relationship will hold in the future. These approaches inevitably fail when big changes happen, since big changes are exactly what this method of "more of the same" can never anticipate.
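Here's a toy sketch of that failure mode (my own, with made-up numbers, not something from the book): fit a trend to past data, extrapolate on the assumption that the measured relationship keeps holding, and the forecast falls apart exactly when a big change arrives. It uses Python's statistics.linear_regression (Python 3.10+).

```python
from statistics import linear_regression

# Made-up "GDP" series that grows steadily for eight years.
years = list(range(2000, 2008))
gdp = [100 + 3 * (y - 2000) for y in years]

# Measure the historical relationship and assume it holds going forward.
slope, intercept = linear_regression(years, gdp)
forecast = {y: intercept + slope * y for y in (2008, 2009, 2010)}

# An invented structural break the trend model has no way to anticipate.
actual = {2008: 122, 2009: 114, 2010: 116}

for y in sorted(forecast):
    print(f"{y}: forecast {forecast[y]:.0f}, 'actual' {actual[y]}")
```

The extrapolation looks fine right up until the moment it matters most.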
The Austrians use a different approach: they point out causal relationships. They would point out that a minimum wage increase means marginal employees will no longer be hired or retained, which would tend to increase unemployment. The prediction isn't that the unemployment rate will go up, or by how much. The theory only identifies the causal relationship, and it would argue against a minimum wage hike because the actual effect, to whatever extent there is one, is to create unemployment.
So the prediction isn't the same kind as the statistical ones. It accepts that there are many other factors in the economy and doesn't try to ignore them; it only shows how the change in the minimum wage changes the incentives. And so, for the same reasons that bold, confident (and inaccurate) predictions get preferred over cautious ones, economic approaches that make more concrete predictions get preferred over ones that only identify causal tendencies.
This ties into Objectivist morality as well. We use principles to make predictions about the future. Obviously we can't control everything, or know every possible consequence, but we can use causal principles to choose actions that should be preferable. For instance, we might think that being dishonest will cause several major problems, including the fear of being caught, the work it takes to avoid getting caught, etc. We might also see the advantages of the truth, such as that it creates trust and allows people to act more effectively because they have accurate information. We aren't predicting exactly what the outcome will be, but we are picking actions that we believe will lead to better results.
So the missing piece in this book, which admittedly the author wasn't aiming for, is a discussion of whether some methods of prediction are better than others, and why.