
Machine learning (ML) in credit risk is not new. There is a growing body of evidence demonstrating that it assesses credit risk better than current industry practice.

Yet only a handful of institutions in the UK have moved from proof of concept to actively deploying ML in credit decisioning.

Here we look at three issues that may be behind the delay and offer another point of view.

1 Concerns over lack of transparency, increased risk of bias and regulatory sanctions

Last year’s Credit Scoring & Credit Control Conference had 21 presentations showcasing the substantial uplift ML offers over current industry practice – but only four related to solutions that had actually been deployed.

Many institutions remain hesitant about putting ML into practice. In the past, that hesitancy stemmed from concerns over issues such as lack of transparency and interpretability. Leaders feared being unable to explain decisions to customers, breaching regulatory requirements, or making biased or unfair decisions.

Response

These arguments no longer hold. At the conference mentioned above, for example, seven presentations set out robust evidence that ML credit models can be made fair, while a further six demonstrated highly usable explainability tools.
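To make the point about explainability concrete, here is a minimal sketch, not drawn from any of the conference presentations, of how an open-source explainability tool can attribute an individual credit score to its input features. It assumes a scikit-learn gradient boosting model and the shap library; the feature names and data are illustrative placeholders.

# Illustrative sketch: per-feature explanation of a single credit decision.
# Assumes scikit-learn and the open-source `shap` package; data is synthetic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["utilisation", "months_on_book", "missed_payments", "income"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer produces per-feature contributions (SHAP values) for each case,
# giving a reason-code style breakdown of an individual score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")

Output of this kind is what allows a lender to tell a customer which factors pushed their score up or down, directly addressing the transparency concern.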

The Bank of England and the FCA have now published several reports inviting the industry to help define best practice standards for advanced credit risk models. There is no regulatory sanction against carefully designed and controlled ML credit risk models.

2 It is all too much effort – this is not a true top priority

There is no doubt most leadership teams do now recognise and endorse the use of advanced ML in credit decisioning. Funds are invested in state-of-the-art decisioning engines for orchestrating ML solutions. Dedicated data science teams are set up to build powerful ML credit risk models.

However, for the vast majority, operational deployment of advanced ML is new territory. As such, there will be unknown risks, issues and operational defects to fix. Almost certainly, more than one implementation attempt will be needed. It is very tempting to give up at the first sign of an implementation bug. There are, the argument goes, more important things to get on with.

Response

It is precisely when deployment doesn’t go smoothly that it should be made a priority. Leadership teams who understand the importance of this need to give it sustained attention.

Understandably, there are competing priorities. There might be a long queue of credit strategy changes fighting for limited resources. There might be new product launches, pricing updates, system ‘change-freezes’.

Implementing ML is likely to involve a great deal of analysts’ time and effort to find defects and to build and test fixes, as well as system engineering time to redeploy. This can be seen as interfering with “business-as-usual” operations.

But in those situations it is important to focus on the end goal: the better and fairer business and customer outcomes that ML credit risk technology delivers. As the technology becomes the norm, those behind the curve risk losing customers who seek a better service, whilst they themselves continue running a costlier operation.

3 We gave this project the go-ahead – but then it got stuck

Even when business leadership does decide to adopt ML techniques, there can be a disconnect between the various layers and departments charged with putting the decision into practice.

Technology providers present their platforms as plug-and-play, ML-ready solutions. However, the reality is often quite different. Sales playbooks conveniently gloss over the most challenging elements of implementing an ML credit risk model. Doing this well requires data scientists, system engineers and the business to work together from an early stage.

Response

Early system engagement is vital. Detailed deployment design needs to happen even before the first development data record is collected. System engineers need to be fully integrated into the model-build process, working side by side with the data scientists.

System engineers now need a far deeper understanding of ML scripting procedures and codebases. Gone are the days of a model developer handing over a Word document to be coded up as simple arithmetic and if-else statements. ML models are deployed through pre-coded procedures, often written in open-source languages.
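To illustrate the contrast, here is a minimal sketch, with assumptions of my own: a scikit-learn model serialised with joblib, and hypothetical file paths and feature names. The legacy approach hand-codes a scorecard as arithmetic and if-else rules; the ML approach loads a pre-trained model artefact and scores through its own pre-coded prediction call.

# Illustrative sketch of the shift in deployment style; names and paths are hypothetical.
import joblib
import numpy as np


def legacy_scorecard(utilisation: float, missed_payments: int) -> int:
    """Traditional scorecard: points assigned by simple arithmetic and if-else rules."""
    score = 600
    if utilisation > 0.8:
        score -= 40
    if missed_payments > 0:
        score -= 30 * missed_payments
    return score


def ml_score(model_path: str, features: np.ndarray) -> float:
    """ML deployment: load the trained model artefact and call its built-in scorer."""
    model = joblib.load(model_path)  # artefact handed over by the data science team
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

The practical consequence is that the deployment team no longer re-implements the model’s logic; it has to run, monitor and version the same code and artefacts the data scientists produced.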

The ideal end-state is, of course, a highly skilled ML Ops team in charge of model orchestration. Most organisations are not yet equipped with this skillset, but that should not delay operational deployment of advanced ML credit risk models.

Strong early engagement between data scientists and system engineers, including an element of education, can and does allow successful implementation of models that deliver transformational value in business and customer outcomes.

Conclusion

Globally, machine learning in credit decisioning is clearly becoming an established technology. Institutions in Asia and the US, and some in the UK and Europe, are already using it successfully.

Some reasons for putting this off are understandable – it is not a walk in the park. Glitches can be off-putting. But it is important to stay the course.

The long-term objective is vital to the survival and growth of the business. Advanced credit risk ML offers transformational value, be it in bad debt reduction or fairer customer outcomes.

by Dan Blagojevic PhD

Head of Customer Analytics & Decision Science