As part of our blog series on cognitive biases and logical fallacies that data scientists should avoid, today we address a prevalent logical fallacy: the “correlation proves causation” fallacy. Causation is just one of the five main reasons for correlation, and this blog will look into each of the five.
The reason we are running this series of blogs is to highlight critical thinking within the workplace and particularly in data science. The discipline of metacognition (thinking about thinking) is essential in ensuring that when it comes to using data to lead you to the truth, you can trust what the data is telling you. This is particularly relevant in the so-called post-truth world.
Epistemology: the study or a theory of the nature and grounds of knowledge especially with reference to its limits and validity [Merriam-Webster]
In a previous blog on motivated reasoning, I showed how analysts can fall into the trap of searching for evidence to support a pre-held belief. Through a variety of statistically sloppy practices, including the file-drawer effect (publication bias), the Texas sharpshooter fallacy, p-hacking, self-fulfilling prophecies, confirmation bias and cherry picking, a conclusion can be drawn from incomplete or biased data. The correlation-causation fallacy is one of the more familiar of these.
Causation is just one of the five main reasons for correlation. We explore each of the five and how to identify them.
Correlation implies causation?
While it’s frequently said that correlation does not imply causation, this is not entirely true: in an observational study, correlation indicates the possibility of a causal link. The caveat is that a further study or experiment needs to be conducted to determine whether the link is real.
An observational study is one in which an analyst looks retrospectively at data, and a common problem is determining what conclusions can be drawn from it. When motivated reasoning drives an analyst, it is quite common for that analyst to make an absolute determination. Instead, they should form hypotheses and then, if possible, conduct randomised trials to control for the independent variable. Alternatively, if they can control for that variable by drawing out cohort groups and conducting a study on them, they can support much stronger conclusions.
Reasons for Correlation
So when an analyst observes a strong correlation, it’s important to recognise that there could be many reasons for it. I’ve listed the five main reasons below, with examples and steps to identify the cause.
Marker for another causal variable (W causes X; W causes Y)
This is also known as the “third-cause fallacy”. Two data fields appear correlated, and it would be tempting to infer a causal link between the two, but in fact a third common variable is causing both.
The wine example above falls into this category. Another, rather prosaic, example is the correlation between umbrella purchases and lightning strikes. Common sense dictates that umbrella sales don’t cause lightning strikes (or vice versa); the third common variable, stormy weather, causes both.
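The third-cause pattern is easy to reproduce with synthetic data. In this sketch (the variables and coefficients are invented for illustration), a hidden variable W drives both X and Y; the raw correlation between X and Y looks strong, but the partial correlation after controlling for W collapses to near zero:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# W is the hidden "third cause" (e.g. stormy weather)
w = rng.normal(size=n)

# X and Y each depend on W plus independent noise --
# there is no direct causal link between them
x = 2.0 * w + rng.normal(size=n)   # e.g. umbrella sales
y = 1.5 * w + rng.normal(size=n)   # e.g. lightning strikes

# The raw correlation looks impressive
r_xy = np.corrcoef(x, y)[0, 1]

# Controlling for W: correlate the residuals of X and Y
# after regressing each on W (a partial correlation)
x_res = x - np.polyval(np.polyfit(w, x, 1), w)
y_res = y - np.polyval(np.polyfit(w, y, 1), w)
r_partial = np.corrcoef(x_res, y_res)[0, 1]

print(f"raw correlation:     {r_xy:.2f}")
print(f"partial correlation: {r_partial:.2f}")
```

Whenever a suspected confounder is actually measured, this residual-on-residual check is a cheap way to see whether the headline correlation survives.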
Indirect Causation (X causes Z which causes Y)
The correlation between two factors may not indicate a direct causal relationship. There may be an intervening variable at play.
An example of this might be the frequently studied association between tea consumption and reduced lung cancer. Many low-quality studies show strong inverse correlations between heavy tea consumption (10 cups/day) and the onset of lung cancer. However, drinking 10 cups of tea per day leaves less time to smoke. Other studies have controlled for this factor through cohort analysis and found that for non-smokers and ex-smokers the correlation is much weaker.
To avoid being misled by indirect causation, it is worth doing multivariate (instead of univariate) analysis, which ensures that variables “Z” and “X” are analysed together. Similarly, cohort analysis on observational studies may assist. For time-series data, one can use Granger causality testing, which gives a reasonable indication of direct causality.
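The multivariate point can be shown with a small simulation (all names and coefficients here are invented): Y depends on X only through the intervening variable Z. A univariate regression of Y on X suggests a strong effect, but once Z enters the model, X’s coefficient collapses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Indirect chain: X causes Z, and Z causes Y (X has no direct effect on Y)
x = rng.normal(size=n)            # e.g. heavy tea drinking
z = 0.8 * x + rng.normal(size=n)  # e.g. reduced smoking time
y = 1.2 * z + rng.normal(size=n)  # e.g. a health-outcome score

def ols(features, target):
    """Least-squares coefficients with an intercept column."""
    A = np.column_stack([np.ones(len(target)), *features])
    return np.linalg.lstsq(A, target, rcond=None)[0]

# Univariate: X looks strongly "causal" for Y
beta_uni = ols([x], y)            # [intercept, x]

# Multivariate: with Z in the model, X's coefficient collapses
beta_multi = ols([x, z], y)       # [intercept, x, z]

print(f"univariate   x-coef: {beta_uni[1]:.2f}")
print(f"multivariate x-coef: {beta_multi[1]:.2f}, z-coef: {beta_multi[2]:.2f}")
```

The near-zero multivariate coefficient on X is the tell-tale sign that Z, not X, carries the effect.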
Direct Causation (X causes Y)
Direct causation is what we are ultimately interested in with predictive analysis. Beware, though, of the related fallacy known by its Latin name “post hoc ergo propter hoc” (after, therefore because of): just because a variable changes after an action does not mean that the action produced the change.
While this may be obvious, it is worth noting another sub-category to look out for: the cyclic causation scenario, where Y may cause X too. The often-cited example is the predator-prey population relationship (modelled using Lotka-Volterra equations). Here the predator population grows with an increase in the prey population, but conversely the prey population decreases when the predator population is large.
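A minimal Euler-integration sketch of the Lotka-Volterra dynamics makes the feedback loop visible; the parameter values below are illustrative, not fitted to any real data:

```python
import numpy as np

# Classic Lotka-Volterra predator-prey dynamics (cyclic causation:
# prey numbers drive predator numbers, and vice versa)
alpha, beta = 1.0, 0.1     # prey growth rate / predation rate
delta, gamma = 0.075, 1.5  # predator growth rate / death rate

def simulate(prey0, pred0, dt=0.001, steps=20_000):
    prey, pred = prey0, pred0
    history = []
    for _ in range(steps):
        # Simple Euler step of the coupled ODEs
        d_prey = (alpha * prey - beta * prey * pred) * dt
        d_pred = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + d_prey, pred + d_pred
        history.append((prey, pred))
    return np.array(history)

traj = simulate(prey0=10.0, pred0=5.0)
print("prey range:", traj[:, 0].min().round(1), "-", traj[:, 0].max().round(1))
print("pred range:", traj[:, 1].min().round(1), "-", traj[:, 1].max().round(1))
```

The two series oscillate out of phase: neither variable is simply "the cause" of the other, so a naive X-causes-Y reading of their correlation would be wrong in both directions.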
Back to correlation due to causation.
In the credit-risk space, having a deep understanding of the confounding data environment is essential. Even something as simple as setting a good/bad definition for application scorecards is complicated by the fact that an account's good/bad performance can be affected by many things, not only poor credit-risk behaviour. A common phenomenon in SME lending: a business applies for working capital, is declined, and subsequently fails. This does not necessarily imply that the decline was the correct decision; the working capital loan might have supported the company to succeed.
Similarly, in behavioural scoring, we may predict that an account has a high probability of being good in 12 months. As a result, we decide to market aggressively to the account, pushing the account holder over their limit, and the customer defaults on their loan. Did we underestimate the risk of the client, or was our action too aggressive?
The fourth correlation type is pure coincidence. Humans are notoriously bad at understanding probabilities; we instinctively seek patterns and are prone to confirmation bias. When we stumble upon a pattern or correlation, we may then be tempted to jump to the conclusion that causation is at hand. There are countless spurious correlations out there. I stumbled across the website www.Tylervigen.com with some hilarious examples.
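How easy is it to find a coincidental correlation? Very, as this entirely synthetic simulation suggests: pairs of completely independent random walks (a common shape for trending business metrics) frequently show "strong" sample correlations by chance alone:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_walk(n):
    """An independent random walk: cumulative sum of white noise."""
    return np.cumsum(rng.normal(size=n))

# Correlate many pairs of completely independent random walks
trials = 1_000
corrs = np.array([
    abs(np.corrcoef(random_walk(200), random_walk(200))[0, 1])
    for _ in range(trials)
])

# A surprisingly large share look "strongly correlated" by chance
print(f"share of pairs with |r| > 0.5: {np.mean(corrs > 0.5):.0%}")
```

This is the classic spurious-regression effect in trending time series, and it is exactly why a single striking correlation between two metrics that both drift over time proves nothing by itself.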
When viewing a correlation in an observational study, you should form hypotheses to test later. When modelling, you should keep hold-out samples or out-of-time samples on which to test the correlations.
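A hold-out sample exposes coincidental correlations quickly. In this sketch (purely synthetic data), the target and all candidate features are independent noise; the feature that looks best in-sample loses its "signal" on the hold-out set:

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_features = 200, 200, 500

# Target and features are all independent noise:
# any observed correlation is pure coincidence
y_train, y_test = rng.normal(size=n_train), rng.normal(size=n_test)
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))

# Cherry-pick the feature most correlated with y on the training set
train_corrs = np.array([np.corrcoef(X_train[:, j], y_train)[0, 1]
                        for j in range(n_features)])
best = np.argmax(np.abs(train_corrs))

# The "discovery" evaporates on the hold-out sample
test_corr = np.corrcoef(X_test[:, best], y_test)[0, 1]
print(f"train |r| of best feature:    {abs(train_corrs[best]):.2f}")
print(f"hold-out |r| of same feature: {abs(test_corr):.2f}")
```

Searching many features for the best in-sample correlation is exactly the p-hacking pattern mentioned earlier; the hold-out check is the antidote.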
Sometimes the reason for a correlation may not be known. Technically (scientifically) nothing is known with 100% certainty, and for some analyses it may simply be premature to draw any conclusion about the reasons for a correlation.
For more on how scientists fool themselves, I can recommend this article from Nature.