Automation And Machine Learning: How Much Is Too Much?

At Principa we’ve become quite passionate about Artificial Intelligence and Machine Learning. Recently, quite a bit has been published in the press about just how automated we should allow machines to become. Perhaps most famously, there have been warnings from the likes of South African-born Elon Musk and theoretical physicist Professor Stephen Hawking.

“AI is the rare case where I think we need to be proactive in regulation… by the time we are reactive… it’ll be too late.” – Elon Musk

While we don’t anticipate our machine learning engine morphing into Skynet any time soon(!), there are nevertheless some very important questions that we are tackling at the moment. I won’t cover all of them in this post, but I do want to talk about a couple.

“Explainability” – a credit example

The first issue is around “explainability”. Those who have built scorecards before will be aware of this notion. For example, if I were to build a scorecard for credit applications, a popular characteristic to include would be “age of applicant”. Most scorecards would recognise a monotonic relationship – i.e. typically, the older you are, the better risk you’re likely to be. This is an easily explainable trend if you appreciate that older people are typically more financially secure than younger people. So if we build a scorecard and the resultant model reflects this trend, we would happily accept the characteristic into the scorecard.
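
To make this concrete, here is a minimal sketch of such a trend check on synthetic data, using the standard weight-of-evidence (WoE) measure. The column names, band edges and data are illustrative assumptions, not anything from our production models:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a development sample, with hypothetical
# columns "age" (years) and "bad" (1 = defaulted, 0 = repaid).
rng = np.random.default_rng(0)
age = rng.integers(18, 75, size=5_000)
bad = rng.random(5_000) < (0.25 - 0.003 * (age - 18))  # older -> fewer defaults
apps = pd.DataFrame({"age": age, "bad": bad.astype(int)})

def woe_by_band(frame: pd.DataFrame, edges: list) -> pd.Series:
    """Weight of evidence per age band: ln(%good / %bad)."""
    grouped = frame.groupby(pd.cut(frame["age"], edges), observed=True)["bad"]
    bads = grouped.sum()
    goods = grouped.count() - bads
    return np.log((goods / goods.sum()) / (bads / bads.sum()))

woe = woe_by_band(apps, [17, 25, 35, 45, 55, 75])
print(woe)
# A monotonically increasing WoE is the easily-explainable
# "older = better risk" trend described above.
print("monotonic:", woe.is_monotonic_increasing)
```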

Conversely, we may find very different trends (albeit predictive, and possibly stable when checked against the hold-out sample). If we are unable to explain a trend, then either the characteristic is rejected from the scorecard, or its attribute groups are re-classed into “explainable” groups.
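
Continuing the sketch above, re-classing can be as simple as merging an inexplicable band with its neighbour until the coarser grouping tells a story we can defend (the band edges are again illustrative):

```python
# Suppose fine classing showed a dip in the 35-45 band that we could
# not explain; folding it into its neighbour yields one coarser,
# defensible group with a single WoE.
print(woe_by_band(apps, [17, 25, 35, 45, 55, 75]))  # fine classing
print(woe_by_band(apps, [17, 25, 35, 55, 75]))      # 35-45 merged into 35-55
```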

When we chat to clients about these approaches, the vast majority (particularly those subject to heavy compliance requirements) agree that all trends should be explainable. Others dismiss this as an “argument from personal incredulity” and tend to trust the observed trends (provided they validate against a hold-out sample).

For credit models, we tend to take the conservative (former) approach, but it is difficult to implement within a machine learning environment (how do you model “common sense”?). That is why we manually check our models for the unexplainable once the machine has retrained them.

“How do you model common sense?”
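
Part of that manual check can at least be triaged automatically. Here is a simplified sketch: an analyst records the direction each characteristic should work in, and any fitted trend that contradicts domain knowledge is flagged for review. The feature names and the scikit-learn-style predict_proba interface are assumptions for illustration:

```python
from scipy.stats import spearmanr

# Expected direction of each characteristic's effect on risk, recorded
# by an analyst; these feature names are hypothetical.
# -1: risk should fall as the value rises; +1: risk should rise.
EXPECTED_SIGN = {"age": -1, "months_at_employer": -1, "recent_enquiries": +1}

def flag_unexplainable(model, X):
    """Return the characteristics whose fitted trend contradicts the
    expected direction (assumes a scikit-learn-style classifier and a
    pandas DataFrame X of applications)."""
    risk = model.predict_proba(X)[:, 1]  # predicted probability of default
    flags = []
    for feature, expected in EXPECTED_SIGN.items():
        rho, _ = spearmanr(X[feature], risk)
        if rho * expected < 0:  # trend runs opposite to domain knowledge
            flags.append(feature)
    return flags  # anything listed here goes back for manual review
```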

Ensembling

Another approach we employ is to deploy our models as ensembles (i.e. combined with a previous tried-and-tested model). That might mean taking 80% of the original Generation 1 score and 20% of the machine learning (Generation x) score to create an ensemble score (subject to both scores being scaled on the same score/odds scheme). In that way, we can be confident that our new models will add some lift without creating unwanted instability. Other ensembling approaches are also employed.
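
As a sketch of how such a blend works: both scores must first sit on the same score/odds scheme (score = base score + factor × ln(odds ÷ base odds), with factor = PDO ÷ ln 2) before the weighted average is meaningful. The scheme parameters and scores below are purely illustrative:

```python
import numpy as np

def rescale(score, pdo, base_score, base_odds, to_pdo, to_base_score, to_base_odds):
    """Map a score between score/odds schemes via its implied log-odds.
    Each scheme is (PDO, base score, base odds), factor = PDO / ln 2."""
    log_odds = np.log(base_odds) + (score - base_score) / (pdo / np.log(2))
    return to_base_score + (to_pdo / np.log(2)) * (log_odds - np.log(to_base_odds))

# Illustrative schemes: Gen 1 scaled so 600 points = 30:1 odds with a
# PDO of 20; the retrained ML score scaled so 500 points = 20:1 odds
# with a PDO of 40. These numbers are examples, not client values.
gen1 = np.array([580.0, 612.0, 655.0])
ml = np.array([470.0, 515.0, 560.0])
ml_aligned = rescale(ml, pdo=40, base_score=500, base_odds=20,
                     to_pdo=20, to_base_score=600, to_base_odds=30)

# The 80/20 blend described above, meaningful only once both scores
# share the same score/odds scheme.
ensemble = 0.8 * gen1 + 0.2 * ml_aligned
print(ensemble.round(1))
```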

For the first time in human history we are beginning to develop tools, the workings of which no-one understands.

It’s both frightening and exciting. While we are enthusiastic about what machine learning is bringing to the market, we remain cautious and employ a level of manual assessment and belts-and-braces in each of our ML assignments. But as our applications broaden and we delve into more complex data, where it is very difficult to fully understand every trend, what do we do? Our philosophy is to determine an approach project-by-project, but always to take a conservative fall-back position.