Oracles and Science: The Trouble with Predictions

“Mortagne-reeds and elms refracted in the ball” by Mo is licensed under CC BY 2.0

We all want to know the future, but what is the best way to predict what will happen? Assuming we don’t have a crystal ball or a time machine, we have to find patterns in the available information and use them to make our best, informed guess. This is what scientists do.

There is a spectrum of things we value about science. For one, science discovers fascinating things. On the more practical side, though, people want science to provide reliable predictions that help them solve or avoid problems. This isn’t so different from the oracles of the ancient world. The difference, however, is that scientists need to be systematic and open about how they came to make their predictions.

As diligently as a scientist may do these things, two nagging problems remain when it comes to the public making use of scientific predictions. First, there is usually variability in a system, which can lead the actual result to deviate from what was predicted. Through statistics, scientists try to account for this variability and produce quantified descriptions of our confidence in our predictions. Unfortunately, these aren’t always very useful because of our difficulty processing numerical probabilities. Second, the problem of induction can pop up and surprise us. When it rears its ugly head, it compromises our predictive ability, including the reliability of our descriptions of confidence.
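To make “quantified descriptions of confidence” a bit more concrete, here is a minimal Python sketch with made-up measurements; the numbers and the rough normal approximation are illustrative assumptions, not a statement of how any particular study does it.

```python
import math
import statistics

# Hypothetical measurements of some quantity (illustrative numbers only).
data = [9.8, 10.1, 9.9, 10.4, 10.0, 9.7, 10.2, 10.3]

mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(len(data))  # standard error of the mean

# A rough 95% confidence interval using the normal approximation.
# (For a sample this small, a t-multiplier of about 2.36 would be more precise.)
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.2f}, 95% CI ≈ ({low:.2f}, {high:.2f})")
```

An interval like this is the scientist’s honest substitute for a crystal ball: not “the value is X,” but “we expect the value to fall in this range, with this much confidence.”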

Let’s start by discussing some of the trouble we have understanding probability. One problem is that it is easy to believe something with a small probability will simply not happen. Yet even with an extremely small probability, if there are enough opportunities for the event to occur, it becomes likely to happen sometime. Another way our intuition interferes with understanding probability is our tendency to believe past experience affects the likelihood of future events. In gambling, it isn’t uncommon to think that a number that hasn’t appeared for a while becomes more likely to appear in the future. It would be equally fallacious to assume that because a pair of dice summed to seven for the past ten rolls, the next roll will also sum to seven; each roll is independent of the ones before it. Compounding issues like these is our tendency to misperceive probabilities by overly focusing on anecdotes. Because we tend to filter out less dramatic events, it is very easy for us to form confirmation biases, which can even lead someone to reject the results of more systematic data analysis when they conflict with their perceived understanding.
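Both of these intuitions can be checked with a little arithmetic and simulation. Here is a short Python sketch; the 1-in-10,000 probability is an arbitrary illustration, not a figure from any study.

```python
import random

# Intuition 1: "a small probability means it won't happen."
# P(at least one occurrence in n independent trials) = 1 - (1 - p)**n
p = 1 / 10_000  # arbitrary, illustrative per-trial probability
for n in (100, 10_000, 100_000):
    print(f"n = {n:>7,}: P(at least once) = {1 - (1 - p) ** n:.3f}")

# Intuition 2: the gambler's fallacy. Each roll of two fair dice is
# independent, so the frequency of sevens is the same regardless of
# what the previous rolls showed.
random.seed(0)
rolls = 1_000_000
sevens = sum(
    random.randint(1, 6) + random.randint(1, 6) == 7 for _ in range(rolls)
)
print(f"P(sum == 7) ≈ {sevens / rolls:.3f}  (exact value: {6 / 36:.3f})")
```

At 100,000 trials, the “1-in-10,000” event has become a near certainty, while a million simulated rolls land on seven at the same one-in-six rate no matter what came before.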

It should also be noted that the resolution of a prediction and our confidence in it are inversely related. The oracles were infamous for providing vague predictions, but that vagueness increased their odds of being right! To use the classic dice example, I can more confidently predict that the next roll of two dice will sum to between five and nine than that it will sum to exactly seven. As scientists serving society, we try to give the highest-resolution predictions possible with the information we have; in practice, the limit on that resolution is set by the level of uncertainty or risk we find acceptable.
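To put numbers on that trade-off, a quick Python enumeration of the 36 equally likely outcomes of two fair dice shows the vague prediction succeeding two-thirds of the time and the precise one only one-sixth of the time:

```python
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
sums = [a + b for a, b in product(range(1, 7), repeat=2)]

p_exactly_7 = sums.count(7) / len(sums)                # 6/36
p_5_to_9 = sum(5 <= s <= 9 for s in sums) / len(sums)  # 24/36

print(f"P(sum == 7)      = {p_exactly_7:.3f}")  # 0.167
print(f"P(5 <= sum <= 9) = {p_5_to_9:.3f}")     # 0.667
```

The vaguer prediction is four times as likely to come true, which is exactly why hedged, oracle-style pronouncements were so hard to falsify.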

Even if everyone grasped what an estimated probability meant, there is still the issue of not knowing whether we know everything. As Donald Rumsfeld described it, “there are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.” This may sound like some kind of philosophical loop, but it’s really a practical recognition of the limits of our ability to predict what we will find in the future.

For example, say your friend wants to have lunch with you and is craving tomato soup. You suggest a certain restaurant because you predict it will have the best, freshest tomato soup that day. You make this prediction because it is Thursday, and for the past two years, every time you’ve gone to that restaurant on a Thursday the soup of the day was excellent, fresh tomato soup. But when you and your friend arrive, you find the soup of the day is chicken soup; there is no tomato soup. Was your prediction flawed? Not really. Based on the information you had, your prediction fit the pattern. Unfortunately, there were factors you were not aware of. Maybe there was a shortage of tomatoes that week, or the chef’s child was sick last night and they didn’t have time to boil down the tomatoes.

This is an illustration of the ‘problem of induction.’ Essentially, induction is a great procedure for formulating ideas about how our world works, but it can’t account for situations that we haven’t encountered yet. So should we just give up on predicting the future? Probably not, but we do need to keep things in perspective.

If the public does not understand the process by which scientists make predictions, we risk scientists being viewed as oracles. For a time this could be a positive relationship, but we know that in science there will eventually be surprises. To someone who understands the scientific process, surprises are exciting opportunities to improve understanding. To someone relying on scientists’ predictions the way one would an oracle’s, however, a single mistake is enough to discard the scientist as a false prophet. Thus it is important for everyone to understand that, as good and useful as science is, there is always a chance that a prediction based on current scientific knowledge will be wrong. This should not reduce the confidence we place in science; it should simply moderate our perception of it. After all, if science didn’t lead to reliable predictions, we wouldn’t have the technology that has produced the quality of life we enjoy today.

Bonus: for an example of the vague language I think should be avoided when making predictions, see Tips from The Enterprise System Spectator. Maybe audiences don’t like hearing that a prediction can be wrong, but avoiding analysis and hiding behind vague language helps no one.
