
Will Your Machine Learning Models Pass The William Tell Test?

Machine learning models can be used very successfully in many contexts to predict outcomes accurately. These predictions can help the business make better decisions, operate more efficiently (or both), and give you an edge over your competitors. Predictive models all follow the same recipe – i.e. train a model on historical data and then apply this model to unseen data to get predictions. If your model generalises well, you have a prediction that you can trust and use to decide “do this, not that” with some degree of accuracy.
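To make that recipe concrete, here is a minimal sketch in Python with scikit-learn (our choice of tooling for illustration; the article does not prescribe a library), using synthetic data to stand in for real historical and unseen records:

    # A minimal sketch of the train-then-predict recipe; synthetic data
    # stands in for real historical and unseen records.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=10_000, random_state=42)

    # "Historical" data to train on; a hold-out set plays the role of unseen data.
    X_hist, X_unseen, y_hist, y_unseen = train_test_split(
        X, y, test_size=0.3, random_state=42
    )

    model = GradientBoostingClassifier().fit(X_hist, y_hist)

    # If the model generalises well, these scores can drive "do this, not that".
    scores = model.predict_proba(X_unseen)[:, 1]
    print("Hold-out AUC:", roc_auc_score(y_unseen, scores))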

Machine learning in financial services

In the financial services sector the most common requirement is to predict binary outcomes – i.e. a yes/no, true/false or 1/0 answer. Some examples include answers to these typical questions:

  • Shall we grant this applicant a loan?
  • Will this customer pay back their facility?
  • Will this customer attrite and move to a competitor?
  • Will the right customer answer this call?
  • Will this customer take up this new product?

There are many techniques that can deliver varying levels of predictive accuracy – e.g. logistic regression, support vector machines and neural nets. Principa has tried various techniques over time and we are seeing good results with the gradient boosting approach. This machine learning algorithm is often the winning entry on the open competition website, Kaggle.com. It offers numerous internal parameters for fine-tuning your model, and it is fast: the Python libraries ‘XGBoost’ and ‘LightGBM’ in particular train at lightning speed, which lets you run more experiments in the time you have available and gives you a better chance of finding the optimal tuning parameters.
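As a rough sketch of why that training speed matters, the snippet below runs a small randomised hyperparameter search with XGBoost’s scikit-learn wrapper; the data and parameter ranges are illustrative assumptions, not tuned values from our projects:

    # Fast training makes wide hyperparameter searches affordable.
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import RandomizedSearchCV

    X, y = make_classification(n_samples=20_000, random_state=42)

    search = RandomizedSearchCV(
        xgb.XGBClassifier(),
        param_distributions={
            "max_depth": [3, 4, 5, 6],
            "learning_rate": [0.01, 0.05, 0.1],
            "n_estimators": [100, 300, 500],
            "subsample": [0.7, 0.85, 1.0],
        },
        n_iter=30,            # 30 experiments; quick training keeps this cheap
        scoring="roc_auc",
        cv=3,
        random_state=42,
    )
    search.fit(X, y)
    print("Best parameters:", search.best_params_)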

XGBoost

XGBoost is happiest when the positive (e.g. responders) and negative (e.g. non-responders) classes are well balanced – i.e. you have around a 50% response rate. In our experience, however, this very seldom occurs. Take fraud modelling, for example: there are generally very few positive cases (fraudsters) to model on, which is often referred to as rare event modelling. That is an extreme case, but imbalanced classes also trouble more routine problems, like response modelling or predicting a right-party connect (RPC) where the RPC rate is only around 1%. One can force balance in the algorithm by tuning the scale_pos_weight parameter. This will give you a good model that separates the positive and negative classes quite nicely, but the problem with this approach is that the resulting probabilities are scaled incorrectly. So RPC scores that fall in the 1-2% range will not average out at 1.5%; they will be something quite different. This is fine if you only want the model to help select the top records – i.e. you want the best 1,000 records out of a possible 10,000.
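A hedged sketch of this effect (not our production code) on simulated data with a roughly 1% positive rate, using the common heuristic of setting scale_pos_weight to the negative-to-positive ratio:

    # Forcing balance with scale_pos_weight inflates the probability scale.
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    # Simulated RPC-style data: roughly 1% positives.
    X, y = make_classification(n_samples=100_000, weights=[0.99], random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )

    neg, pos = (y_train == 0).sum(), (y_train == 1).sum()
    ranking_model = xgb.XGBClassifier(scale_pos_weight=neg / pos)  # ~99 here
    ranking_model.fit(X_train, y_train)

    scores = ranking_model.predict_proba(X_test)[:, 1]
    # Ranking is fine, but the weighting tells the model positives are ~99x
    # more common than they really are, so scores sit far above the base rate.
    print("Mean predicted probability:", scores.mean())
    print("Actual positive rate:      ", y_test.mean())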

However, if your business strategy relies on the pin-point accuracy of your model’s predictions, then this approach is not going to work for you. Fortunately, XGBoost has many parameters that can be used to fine-tune the underlying algorithm. One of these, the max_delta_step parameter, can be used to great effect to give accurate point predictions when the target variable is imbalanced. We can show the impact of this in the views below, using a right-party connect (RPC) use case as the target that we want to predict. The first view shows a good model built by tuning the scale_pos_weight parameter – the Gini coefficient for this model is a healthy 68.8%. But notice how poor its prediction accuracy is: the blue line does not follow the perfect or “unicorn” model’s green line in the second graph. When we tune the max_delta_step parameter instead, the model still separates the two classes nicely (with a Gini coefficient of 68.5%, very close to the original model) AND gives good overall point predictions. We have seen real-world success with this approach on a few use cases now. If you would like skilful and reliable models that give you accurate predictions, contact us.
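An illustrative sketch of this alternative, on the same kind of simulated imbalanced data as above: leave the classes unweighted and tune max_delta_step instead. The value of 1 below is a starting point we chose for the example, not a tuned value from the case study:

    # Unweighted classes with max_delta_step tuned: probabilities keep
    # the correct scale while class separation stays strong.
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=100_000, weights=[0.99], random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )

    # max_delta_step caps each tree's leaf-weight update, which the XGBoost
    # docs suggest can help logistic models on extremely imbalanced data.
    calibrated_model = xgb.XGBClassifier(max_delta_step=1)
    calibrated_model.fit(X_train, y_train)

    probs = calibrated_model.predict_proba(X_test)[:, 1]
    # The average predicted probability should now sit close to the observed
    # event rate, e.g. scores in the 1-2% band averaging near 1.5%.
    print("Mean predicted probability:", probs.mean())
    print("Actual positive rate:      ", y_test.mean())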
