A year ago, I published an article about motivated reasoning and how it can damage the data analytics process. It is part of a blog series on cognitive biases and logical fallacies that data analysts should avoid. Today I’d like to extend that conversation into a topical matter: p-hacking, also known as data fishing.
To understand p-hacking, let’s first look at the p-value.
Leaving statistical jargon aside, we use the p-value to gauge how much confidence we should place in what we observe in a sample. The challenge comes into play when the observations from that sample are extrapolated across the entire population. You may have seen this in election polls, where a small set of observations is used to infer how the whole population voted; the p-value indicates how significant that conclusion may be. Essentially, it depends on the size of the sample relative to the population and on the size of the trend (the percentage breakdown of votes).
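To make this concrete, here is a minimal sketch in Python (the poll numbers, 540 of 1,000 respondents, are invented purely for illustration) of how a p-value could be computed for a simple poll question: how surprising is a 54% share for one candidate if the electorate were really split 50/50?

```python
from scipy.stats import binomtest

# Hypothetical poll: 540 of 1,000 sampled voters say they chose candidate A.
# Null hypothesis: the true population split is 50/50.
result = binomtest(k=540, n=1000, p=0.5, alternative="two-sided")

print(f"Observed share: {540 / 1000:.1%}")
print(f"p-value: {result.pvalue:.4f}")
# A small p-value says a 54% share would be unlikely in a sample of this size
# if the electorate were truly split 50/50. The larger the sample, the smaller
# the p-value becomes for the same observed trend.
```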
Now when we observe a study (a medical trial, crime statistics, voting stats), we should always look out for the p-value. However, in recent times statisticians have noted that many studies citing “highly reliable” p-values (usually p < 0.05) have been subject to a dubious practice called “p-hacking” or data dredging.
What is p-hacking?
P-hacking is the practice of fishing through data for correlations whose p-value falls below your chosen threshold and then reporting those correlations as statistically significant (without assessing causality or validating the finding).
P-hacking example:
- We run a campaign offering new cell-phone contracts to a large group of individuals (let’s say 100,000)
- A proportion take up the offer (let’s say 1,000)
- We then look back at the demographic information (let’s say 300 demographic fields) on all 100,000 individuals to see whether anything looks predictive of the customer taking up an offer.
- We happen to find that those with kids aged between 5 and 7 are three times more likely to take up the offer than the rest of the population.
- The associated p-value is measured at 0.01 (i.e. if there were no real relationship, there would be only a 1% chance of seeing a trend this strong by chance)
Such a p-value appears impressive, but in reality the analyst has made a crucial mistake. Although 1% sounds small, the analyst has dredged through each of the 300 demographic fields until stumbling upon what is, in all likelihood, an anomaly. If that finding is then reported as a determining factor, this is p-hacking.
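To see how easily this happens, here is a minimal simulation sketch in Python (the numbers are invented and smaller than the campaign example, and the code is an illustration rather than the analysis described above): we generate an uptake flag and 300 “demographic” fields that are pure random noise, then dredge all of them for correlations.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_customers, n_fields = 10_000, 300

# Pure noise: uptake is random, and none of the "demographic" fields
# has any real relationship with it.
uptake = rng.binomial(1, 0.01, size=n_customers)
demographics = rng.binomial(1, 0.5, size=(n_customers, n_fields))

# Dredge every field for a correlation with uptake.
significant = []
for field in range(n_fields):
    _, p_value = pearsonr(demographics[:, field], uptake)
    if p_value < 0.05:
        significant.append((field, p_value))

print(f"'Significant' fields found in pure noise: {len(significant)}")
# With 300 fields and a 0.05 threshold, roughly 15 spurious hits are expected,
# any one of which could be mistakenly reported as a real driver of uptake.
```

Even though every field is noise, about 5% of them will clear a 0.05 threshold by chance alone.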
As a point of interest: if we test 70 independent fields at a p-value threshold of 0.01, there is more than a 50% chance of finding at least one field that appears correlated purely due to “noise” in the data. At a threshold of 0.05, just 14 fields are enough.
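The arithmetic behind that claim is a one-line calculation, assuming the fields are tested independently:

```python
# Probability of at least one spurious "significant" field when testing
# n independent fields at threshold p: 1 - (1 - p)^n
def chance_of_false_positive(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(chance_of_false_positive(0.01, 70))  # ~0.505, just over 50%
print(chance_of_false_positive(0.05, 14))  # ~0.512, just over 50%
```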
How do we prevent p-hacking?
Firstly, the trend in the example may indeed be real, but to know this we need to run further tests. One such test is to keep a hold-out sample and check whether the trend holds true in that sample too (we do this as standard practice when modelling).
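As a rough sketch of what that hold-out check could look like (again in Python on invented noise data; the 50/50 split and the field-scanning approach are assumptions for illustration, not a prescribed workflow):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_customers, n_fields = 10_000, 300
uptake = rng.binomial(1, 0.01, size=n_customers)
demographics = rng.binomial(1, 0.5, size=(n_customers, n_fields))

# Split before dredging: hunt for trends only in the "discovery" half.
X_disc, X_hold, y_disc, y_hold = train_test_split(
    demographics, uptake, test_size=0.5, random_state=0
)

# Dredge the discovery half and keep the most "significant" field.
p_values = [pearsonr(X_disc[:, f], y_disc)[1] for f in range(n_fields)]
best_field = int(np.argmin(p_values))
print(f"Best field on discovery half: p = {p_values[best_field]:.4f}")

# The real test: does the same field still look significant on the hold-out?
_, p_holdout = pearsonr(X_hold[:, best_field], y_hold)
print(f"Same field on hold-out half:  p = {p_holdout:.4f}")
# A spurious trend will usually fail to replicate on the hold-out sample.
```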
While machine learning techniques may give you quick models, unless sufficient attention is paid to validation you may end up with models built on anomalies.
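As one hedged illustration of why validation matters, the sketch below fits an off-the-shelf classifier to pure noise: the training accuracy looks superb, while cross-validation shows the model predicts no better than a coin flip.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 300))     # 300 noise features
y = rng.binomial(1, 0.5, size=2_000)  # target unrelated to the features

model = RandomForestClassifier(n_estimators=100, random_state=0)

# In-sample performance looks superb because the model memorises the noise...
train_accuracy = model.fit(X, y).score(X, y)
print(f"Training accuracy:        {train_accuracy:.2f}")   # close to 1.0

# ...but cross-validation reveals it predicts no better than a coin flip.
cv_scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {cv_scores.mean():.2f}")  # ~0.5
```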
“As data scientists, we need to be continuously aware of the risk of fooling ourselves.”
Another worthwhile consideration is to try to understand the logic behind the relationship (i.e. “is there any plausible reason why a consumer with kids aged 5-7 would be so much more likely to purchase our product than others?”); if not, exercise additional caution.
Another consideration is to stay abreast of other industries and see how they deal with such issues. In the medical world, “observational studies” are done to look for correlations or patterns. A hypothesis (not a conclusion) is then formed, and further studies are done to determine whether the hypothesis holds. This is done not as a retrospective observational study, but rather as a forward-looking statistical test.
“False facts are highly injurious to the progress of science, for they often long endure; but false views, if supported by some evidence, do little harm, as everyone takes a salutary pleasure in proving their falseness; and when this is done, one path towards error is closed and the road to truth is often at the same time opened.” – Charles Darwin “The Descent of Man” (1871)
As data scientists, we need to be continuously aware of the risk of fooling ourselves. Feel free to fish for trends, but don’t be lured by the bait of seemingly significant results. Express your results responsibly. For more on this, I can recommend this excellent article in Nature and the embedded video from Veritasium.
For more information on how Principa’s Data Scientists might help your organisation, get in touch with us.