For a while, we have been running a blog series on cognitive biases and logical fallacies that data scientists should avoid. In philosophy there are a host of informal logical fallacies – essentially errors in thinking – that crop up every day. In this series we have looked at the practice of data science to determine how these same fallacies also occur.
The first is the Monte-Carlo fallacy (also known as “gambler’s fallacy” or “fallacy of the maturity of chance”).
Simply put, this is the assumption that the longer a run of independent events continues, the more likely it is that the run will be broken. A roulette wheel is the classic example: a run of five reds does not make black any more likely on the next spin. Similarly, if the number 5 has not been drawn in the lottery for 10 weeks, it is no more likely to be drawn now than if it had come up the previous week.
In data analytics, understanding independence is critical to avoiding this fallacy: does the outcome of a repeatable event depend in any way on the events that preceded it?
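The roulette example above is easy to check empirically. The following sketch (my own illustration, not from the original study material) simulates a simplified red/black wheel and compares the overall frequency of red with the frequency of red immediately after five reds in a row:

```python
import random

random.seed(42)

# Simplified roulette: red ("R") or black ("B") only, ignoring zero.
# For independent spins, P(red) and P(red | five reds just occurred)
# should be the same -- the wheel has no memory.
spins = [random.choice("RB") for _ in range(1_000_000)]

red_total = spins.count("R") / len(spins)

# Collect the outcome of every spin that follows five consecutive reds.
after_streak = [spins[i] for i in range(5, len(spins))
                if spins[i - 5:i] == ["R"] * 5]
red_after_streak = after_streak.count("R") / len(after_streak)

print(f"P(red) overall:            {red_total:.3f}")
print(f"P(red | five reds before): {red_after_streak:.3f}")
```

Both frequencies come out at roughly 0.5; the streak carries no information about the next spin.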
The Monte-Carlo Fallacy in Action
Random data is clumpy. When we randomise a population, we find clumps of data. Viewed in isolation, such data can look non-random, and human nature tends to ignore independence and tempt us to gamble on a change (with “regression to the mean” often invoked as the excuse). For independent events, or data with large N, this tendency to bank on a change is simply wrong.
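How clumpy is random data? A short simulation (my own sketch, assuming a fair coin as the random process) estimates the typical length of the longest run of identical outcomes in 100 flips:

```python
import random

random.seed(0)

def longest_run(seq):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

# Average the longest run over many simulated sequences of 100 fair flips.
runs = [longest_run([random.choice("HT") for _ in range(100)])
        for _ in range(10_000)]
print(f"average longest run in 100 flips: {sum(runs) / len(runs):.1f}")
```

The average longest run comes out near 7 — far clumpier than most people expect, which is exactly why genuinely random sequences so often “look” non-random.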
A real-life example, cited in a paper from the University of Chicago, found that when underwriting loans, an underwriter who had just approved a streak of suitable applicants was more likely to be stricter with the next application. The same pattern appears among judges adjudicating asylum applications and among baseball umpires. The loan-officer example involved a bank in India where every application was decided manually. In certain cohorts, a loan officer was shown to be 10% stricter after a run of approved applications. The study was repeated under numerous incentive schemes, and in each case an element of the gambler’s fallacy was evident. Another study shows that the Monte-Carlo fallacy is also present in stock-market trading.
The opposite of the Monte-Carlo fallacy is the “hot-hand” fallacy. It takes its name from basketball, where, it is claimed, a player who has made several successive free throws is more likely than normal to make the next one. When properly controlled for, the “hot hand” has been shown not to really exist, although a further study suggests there may be a slight effect in certain sports. The fallacy itself persists, however, and has some non-sporting examples.
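Part of why the hot hand feels real is that impressive-looking streaks arise from chance alone surprisingly often. The sketch below (my own illustration, assuming a hypothetical 50% shooter) estimates how often at least one streak of five consecutive makes appears in 100 independent shots:

```python
import random

random.seed(1)

def has_streak(shots, k=5):
    """Return True if the sequence contains a run of k consecutive makes."""
    run = 0
    for hit in shots:
        run = run + 1 if hit else 0
        if run >= k:
            return True
    return False

# Simulate many 100-shot sessions for a shooter who makes 50% of shots.
trials = 10_000
streaky = sum(has_streak([random.random() < 0.5 for _ in range(100)])
              for _ in range(trials))
print(f"P(streak of 5 makes in 100 shots): {streaky / trials:.2f}")
```

The estimate comes out around 0.8: a coin-flip shooter produces a five-shot “hot streak” in most 100-shot sessions, with no skill effect at all.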
The Hot-Hand Fallacy in Action
The Hot-Hand fallacy is particularly evident in investing. You may find an investor following tips from a particular guru purely because of a string of past successes. In the mortgage world before 2008, the tendency to invest in mortgages in SA and abroad, on the strength of an ever-appreciating “no-risk” market, left many investors over-exposed to property. The market over-heated, and the collapse, particularly in the US sub-prime market, brought streets of foreclosures, repossessions with massive shortfalls, and banks whose mortgage portfolios were valued at single-digit percentages of their worth a year earlier. This was the hot-hand fallacy in credit markets in its most egregious form.
These studies show how human psychology can lead us into making poor decisions. The intelligent use of data and models allows for better decisions and can over-rule the cognitive biases inherent in human decision making.