Rebirth of Reason


Problems with Instrumentalism
by Joseph Rowlands

Instrumentalism is an idea in the philosophy of science that says a scientific theory should be evaluated by how well it allows predictions. This contrasts with other views, including the view that a theory should be judged by how accurately it describes reality. As long as the theory provides a basis for making accurate predictions, the question of whether it is true or false is irrelevant.

There are some obvious concerns about this approach. Can a theory really predict results accurately if it doesn't really describe reality? If the theory diverges from what's real, that divergence should show up as some kind of failure in prediction. Even by its own standard, the ability to make accurate predictions, it can be argued that instrumentalism performs worse than an approach focused on understanding and explaining the real world.

Another problem is that an instrumentalist approach is an intellectual dead end. It is much like explaining events or effects by saying that God did it. Once that answer is accepted, the investigation concludes and no further answers are possible. Instrumentalism differs in that it claims no explanation is needed instead of giving a mystical one. But the results are the same. Instead of looking for deeper truths or causes, the investigation is concluded.

A different problem is that it's not clear instrumentalism can actually be practiced consistently. By trying to focus on appearances instead of causal relationships, it has to produce ever more complicated models for how things work. Complex problems are usually understood by dividing them into smaller problems, where many different factors each contribute to the overall effect. Imagine a pool table where the balls have been struck and are rolling in different directions, bouncing off walls, and hitting one another. The way to understand these complex events, or to make predictions, is to divide the total effect into many smaller causes and effects.
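This decomposition can be sketched concretely. The following is an illustration of my own, not from the article: a head-on collision between two balls, solved in isolation from conservation of momentum and kinetic energy. The whole table is then just many such small cause-and-effect events chained together.

```python
# A hedged sketch (names and numbers are illustrative): one pairwise
# causal event on the pool table, treated as a 1-D elastic collision.
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities from conservation of momentum and energy."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal-mass balls, as on a pool table, simply exchange velocities:
# a moving cue ball stops, and the struck ball rolls off at its speed.
cue, eight = elastic_collision_1d(0.17, 2.0, 0.17, 0.0)
```

The point is that each function call models one small cause producing one small effect; the complex total behavior is recovered by composing them.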

But if you view prediction as the goal, and aren't trying to describe how reality actually works, there's no reason to break up the problem in that way. Why not try to analyze the problem as a whole? Why not try to develop a predictive model based on the total number of pool balls? Why look at factors like velocity, mass, the elasticity of the collisions, or even location? These may seem like obvious places to start, but that judgment comes from a viewpoint that expects causality to hold and looks for the factors we might expect to play a part in the causal interaction. But if you claim reality isn't knowable, or even just act as if you are agnostic about it and focus only on predictions, it's not clear why any of these factors would interest you.

Sure, once you have come up with a causal explanation that includes these factors, you can derive a mathematical function to express the conclusions. You could even claim that you don't really care whether the math accurately reflects reality and that you are only concerned with predictive capability. But all of that happens after the fact. If you started with that premise, things might play out differently. You might create a function based on just the number of pool balls. As more tests are made, you might try to modify the model based on other factors, like the time of day you hit the balls, or what the numbers on each ball added up to, or any number of other nonsensical approaches.
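The asymmetry can be shown with a toy fit, again an illustration of my own with made-up numbers. We fit the same outcome, how far a struck ball travels, against a causal factor (initial speed) and against an irrelevant one (the number printed on the ball). Causality explains why the first fit works, but a prediction-only standard gives no principled reason to prefer one feature over the other before the data are in.

```python
# Hypothetical illustration: feature choice without a causal assumption.
def least_squares(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def r_squared(xs, ys, a, b):
    """Fraction of the outcome's variance explained by the fitted line."""
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return 1 - ss_res / ss_tot

speeds = [1.0, 2.0, 3.0, 4.0, 5.0]       # causal factor
numbers = [8, 3, 11, 1, 6]                # irrelevant label on each ball
distance = [0.5 * v * v for v in speeds]  # outcome actually driven by speed

a1, b1 = least_squares(speeds, distance)   # fits well
a2, b2 = least_squares(numbers, distance)  # fits poorly
```

Speed explains nearly all the variance and the ball's number explains almost none, but only the expectation of causality tells you in advance which feature was worth trying.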

Scientists will usually try to isolate a single factor to see how changing it changes the results. This allows them to infer a causal relationship. But this approach rests on an assumption of causality. The instrumentalist might use the same technique, but for him it is not justified. Why would he try to isolate a factor if he doesn't think causality is important or real? His standard of predictability suggests there is nothing valuable in an assumption of causality; unrelated experiments might be just as good at producing predictions, or maybe even better.

This isn't to say that instrumentalists necessarily reject causality. They may or may not. What's important is that they claim theories are valuable based on predictive ability, but they smuggle in the concept of causality when formulating the theories or designing the experiments. They assume there are factors causally affecting the results, and they look to those factors to improve their predictive models. The first criterion used in selecting a theory is causality, and only if that fails do they fall back on the idea that instrumentalism determines the better theory.

Imagine a case where some phenomenon is not well understood in terms of causality but can be reasonably predicted despite the lack of explanation. This would be a case where instrumentalism would be viewed as superior to approaches focused on describing reality and explaining causal effects. Despite a lack of understanding of how the phenomenon actually works, the model can still make predictions.

So what happens when a prediction fails? You'd have to decide whether the failure came from a new factor that changed the results, or from a flaw in your predictive model. In practice, a scientist would expect the problem to be a new factor that wasn't considered, because he would likely believe that the earlier relationships described have been confirmed to some extent. There's always the possibility of an error, and he would admit that, but that's not the issue. When there are many possible factors impacting the results, the factors or theories that are most confirmed would be the last to be rechecked, and the ones with little or no confirmation would be the first. So if an experiment failed, it would be irrational for the experimenter to assume that gravity had failed or that the conservation of energy wasn't true. He would look for more likely factors.

Can instrumentalism have confirmation? In some sense, you can claim an instrumentalist predictive model is more and more confirmed with each experiment where it predicts correctly. But is that a valid view of confirmation? Confirmation occurs only when a theory risks being falsified. Repeating the same experiment over and over does not add to confirmation.

But can a purely predictive model that claims no correlation with reality take risks? In a sense it does, because it is possible that it won't predict accurately. But how do you know you are taking a risk? How do you know that you aren't just repeating the same experiment? Popper's view of a falsifiable theory seems to require a theory to provide an explanation or causal relationship. Only by understanding why something is supposed to be happening can you test whether that explanation is real. But the instrumentalist approach rejects explanation and causation. It is perfectly compatible with abstract mathematical models that say nothing about why things work the way they do. And that means genuine confirmation can't happen, because there's no way to know whether the theory is ever taking a risk or not.

We can look at a simple example: objects falling to the ground. We could explain the behavior through a theory of gravity, or we could take an instrumentalist view and just describe the event. If we repeatedly dropped objects all day long, we wouldn't really be putting the theory of gravity to the test. If, on the other hand, we flew into space where gravity is reduced and tried dropping things, we could tell whether the theory held up. But the instrumentalist method of just describing the events would provide no such causal explanation. There would be no reason to think of going into space and performing an experiment, since the cause of the phenomenon is ignored in favor of a predictive model. So how well confirmed is the predictive model? It turns out that dropping things all day long isn't really confirming anything, whereas dropping one thing while on the moon is a major confirmation.
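The contrast above can be put in miniature code form. This is a sketch under my own assumptions, not the article's: the causal model derives fall time from the local gravitational acceleration, while the purely predictive model just memorizes the regularity observed in Earth drops. On Earth the two agree, so repeated Earth drops never separate them; only the Moon test does.

```python
import math

G_EARTH = 9.81  # m/s^2, standard gravitational acceleration at Earth's surface
G_MOON = 1.62   # m/s^2, approximate gravitational acceleration on the Moon

def fall_time_causal(height_m, g):
    """t = sqrt(2h/g), derived from the gravitational explanation."""
    return math.sqrt(2 * height_m / g)

def fall_time_instrumental(height_m):
    """A curve fitted to Earth drops only; it knows nothing about gravity,
    so it has no parameter to adjust when the location changes."""
    return math.sqrt(2 * height_m / G_EARTH)

# On Earth the predictions coincide; on the Moon, where falls take much
# longer, only the causal model can say why its prediction should change.
```

Nothing inside the fitted model even suggests that going somewhere with different gravity would be the decisive test, which is the article's point about risk and confirmation.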

Without a real method of confirmation, you're left with a numeric one: whichever predictive relationship has been observed the most times is the strongest theory. That is not a reliable standard, and it would lead to mistakes. But whether it's a bad view of confirmation or no view at all, it would leave a scientist looking for errors in all the wrong places.
