How To Avoid The Texas Sharpshooter Fallacy In Data Analysis

The rise of Big Data, data science and predictive analytics to solve real-world problems is an extension of science marching on. Science is humanity’s tool for better understanding the world. The tools we use to build models, test hypotheses and look for trends that build value for our brand all derive directly from scientific principles.

With these principles come myriad obstacles, known to philosophers as “logical fallacies”, which I outlined in my previous post “The 7 Logical Fallacies to avoid in Data Analysis.” In this blog post, we focus on the Texas Sharpshooter Fallacy and how to avoid it in your data analysis.

What is the Texas Sharpshooter Fallacy?

The Texas Sharpshooter Fallacy is a common human mistake. In essence, it is trawling through a large amount of data, spotting small patterns, and drawing a conclusion from those patterns after the fact.

The name derives from the story of a Texan marksman who fires a large number of bullets at a barn door. He then draws a target around the tightest cluster of bullet holes and claims to be a sharpshooter.

How to avoid the Texas Sharpshooter Fallacy

Post-hoc hunting of anomalies and patterns is commonplace in data analytics. There is no real problem with identifying patterns in data through an observational study, but this should result in a hypothesis and not a conclusion. Hypotheses should then be tested against another set of data. To extend our metaphor, the marksman should, after drawing his target, go back and take aim to see whether he can hit the target again.
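The marksman metaphor above can be sketched as a small simulation. This is an illustrative example I am adding, not from the original post: random “shots” land on a wall, a circle is drawn around the densest post-hoc cluster, and then a fresh round of shots is fired to see whether the cluster replicates.

```python
import random

rng = random.Random(0)

def shots(n):
    """Fire n random shots at a unit-square 'barn door'."""
    return [(rng.random(), rng.random()) for _ in range(n)]

def hits_in_circle(points, center, radius):
    """Count shots landing within a circular 'target'."""
    cx, cy = center
    return sum((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 for x, y in points)

first_round = shots(200)
radius = 0.05

# Post-hoc: draw the target around the densest cluster of existing shots.
best_center = max(first_round,
                  key=lambda p: hits_in_circle(first_round, p, radius))
claimed = hits_in_circle(first_round, best_center, radius)

# Re-test: fire a fresh round of shots at the same, now-fixed target.
fresh_round = shots(200)
replicated = hits_in_circle(fresh_round, best_center, radius)

# The post-hoc count is inflated by the search itself; the fresh-round
# count falls back toward what pure chance predicts.
print(claimed, replicated)
```

Because the target was chosen to maximise hits in the first round, `claimed` is an inflated, selection-biased count, while `replicated` reflects the target’s genuine (chance-level) hit rate.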

This is partly why we use hold-out samples when we build models (e.g. we build a model on a random 80% of the population, but will then test the model against the 20% hold-out).
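A minimal sketch of the 80/20 hold-out split described above, using only the standard library (the function name and seed are my own, for illustration):

```python
import random

def holdout_split(records, holdout_frac=0.2, seed=42):
    """Shuffle records and split into a build set and a hold-out set."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

# Example: 1,000 records -> 800 for model building, 200 held out for testing.
records = list(range(1000))
build, holdout = holdout_split(records)
print(len(build), len(holdout))
```

Fixing the seed makes the split reproducible, so the same hold-out sample can be reused when comparing candidate models.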

The reality is that all data contain anomalies, and hunting for them is fine. But we should not rest our conclusions on those anomalies; instead, we should test our hypotheses about them on hold-out samples, out-of-time samples or new experiments.

This logical flaw is well known in applied physics and epidemiology. Observational studies may be conducted to look for anomalies in data. These anomalies may be reported, but no conclusion is drawn, because the independent variable is not controlled for. A follow-up randomised controlled trial would then determine whether the results of the observational study can be replicated.

The Texas Sharpshooter fallacy is just one of many statistical pitfalls to avoid in data analysis. I’ll be covering each fallacy from my initial blog post on this topic, “The 7 Logical Fallacies to avoid in Data Analysis”, so make sure to subscribe to our blog to read the new posts in this series.