
Fact over fiction: How lenders can create real business value with AI/ML

Chitwan Kaur   /    Content Specialist    /    2022-01-13


The words “artificial intelligence”, “big data” and “machine learning” are everywhere today, but how many of us can actually explain what they mean? AI has been touted as the authoritative solution to an array of problems spanning industries. But somewhere in this jungle of jargon, the plot seems to have been lost.

Instead of explaining the intricacy and nuance of its usage, many companies have simply promoted their AI capabilities in broad, self-congratulatory terms. Their marketers, meanwhile, have played fast and loose with technical terminology.

In truth, AI is not the mother lode of solutions to data processing problems and comes with its own set of shortcomings. But it is a formidable tool to have in today’s ruthlessly competitive business environment.

It supplements non-intelligent computing and the limited reach of traditional data analysis. In lending, especially, AI has been harnessed to –

  • Make underwriting more accurate, 

  • Make credit decisioning lightning-fast and free from human error,

  • Prevent and detect fraud, and

  • Enhance customer support.

In fact, AI vendors have also vouched for its ability to weed out the biases that have long plagued lending decisions.

AI’s own advantages, coupled with the daunting challenge posed by big tech’s entry into the banking space and an overwhelming focus on customer experience, account for its widespread adoption. As far as business is concerned, these capabilities can translate into better revenues, lower costs and expansion into new markets. AI is set to open up as much as $1 trillion in lending annually.

Before buying in, businesses should be able to identify the costs and benefits of AI in order to derive tangible gains from it. They must also lay out a blueprint for balancing business value with the greater good of ushering fairness into lending.

The limitations of AI

Machine learning engines work on large volumes of baseline data fed to them by humans. They detect patterns and relationships among thousands of data points, training themselves to predict outcomes. So, the quality and quantity of this data are crucial to their success.
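
To make that concrete, here is a minimal sketch of how such an engine might be trained on historical loan outcomes. The file name, column names and model choice are illustrative assumptions, not a description of any particular lender’s system.

```python
# Illustrative sketch: training a default-prediction model on historical loans.
# The CSV path, column names and model choice are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("historical_loans.csv")              # past loans with known outcomes
features = ["monthly_income", "credit_utilisation", "existing_emis", "employment_tenure_months"]
X, y = df[features], df["defaulted"]                   # 1 = defaulted, 0 = repaid

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)                            # learns patterns linking features to default

# The model's value lies in how well those learned patterns generalise to unseen applicants.
print("Hold-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```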

Biased data

Perhaps the most caustic criticism of ML and other AI-based engines is that they are only as good as the data fed to them. If this data retains historical biases around age, gender, religion or other attributes, then no matter how efficient the underwriting model, it will keep perpetuating the same prejudices.

Lenders can, of course, rewire their engines to ignore data points that reflect these inherent biases rather than a borrower’s actual creditworthiness. Say a lender removes gender from consideration while underwriting. The predictive engine, however, will pick up on the next best proxy for gender. Although unintentional, the result is the same discrimination.
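
One rough way to surface such proxies is to check how strongly the remaining features correlate with the attribute that was dropped. The sketch below is only a first-pass check under assumed column names and an arbitrary threshold, not a full fairness audit.

```python
# Illustrative proxy check: even after 'gender' is excluded from training,
# other features may still encode it. Dataset and column names are hypothetical.
import pandas as pd

df = pd.read_csv("historical_loans.csv")
protected = (df["gender"] == "female").astype(int)     # kept aside, never used as a model input
candidate_features = ["pincode_avg_income", "monthly_shopping_spend", "employment_tenure_months"]

for col in candidate_features:
    corr = df[col].corr(protected)                     # correlation with the protected attribute
    if abs(corr) > 0.3:                                # illustrative threshold
        print(f"{col} may be acting as a proxy for gender (corr={corr:.2f})")
```

Even features that pass such a check can combine to reconstruct the dropped attribute, which is part of why the fixes discussed next are hard.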

Two solutions emerge: either the data can be scrubbed clean of these biases before it is fed to the engines, or lenders can redesign their models. But correcting the data obliges the lender to make the process and its rationale transparent, while altering the underwriting models can be expensive.

Moreover, if a lender eliminates a variable in the interest of fairness, they risk running into a loss. 

For example, suppose the race variable is removed from a model. While that is the fair thing to do, loans approved to applicants who are products of their race’s socio-economic oppression may never be recovered.

Trade-offs between accuracy, fairness and the business viability of AI models are a stark reality. In their pursuit of fair credit decisioning with AI, lenders can sometimes end up compromising on business value.

Model drift

Digital lending is a dynamic space. ML models used by digital lenders can degrade over time because actual production data keeps drifting away from the data used during training.

For example, models designed before the COVID-19 pandemic may have been rendered useless as ensuing job losses affected the repayment ability of large numbers of creditworthy borrowers. Similarly, applicants could find new ways to circumvent the model’s fraud detection capabilities to get illicit loans.

Left unchecked, model drift can hamper lenders’ ability to make accurate predictions and expose them to credit risk.
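
A common guard against this is a routine drift check that compares the distribution of key features, or of the model’s own score, between the training snapshot and recent production data. The sketch below uses a two-sample Kolmogorov-Smirnov test under assumed file and column names.

```python
# Illustrative drift check: compare training-time and production distributions.
# File names, columns and the significance threshold are assumptions.
import pandas as pd
from scipy.stats import ks_2samp

train = pd.read_csv("training_snapshot.csv")           # data the model was trained on
live = pd.read_csv("recent_applications.csv")          # recent production data

for col in ["monthly_income", "credit_utilisation", "model_score"]:
    result = ks_2samp(train[col].dropna(), live[col].dropna())
    if result.pvalue < 0.01:                           # distributions differ significantly
        print(f"Possible drift in '{col}' (KS statistic = {result.statistic:.3f})")
```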

Creating real business value - with AI and more

There is ample evidence that AI has tremendous potential to create business value. IBM research found that over 85% of advanced adopters reduced their operating costs with AI across supply chain and production functions and improved process efficiency. But as established above, shortcomings in data have also dented business gains for lenders.

Even so, artificial intelligence evangelists have lamented the staggered pace of AI/ML adoption in lending. In practice, however, the Promised Land is still a long way off. Lenders and fintechs therefore need to refocus their approach in a way that delivers maximum business value. Here’s how –

Set business priorities 

AI cannot be a solution in search of a problem. The prospect of deploying AI in lending can be exciting for fintechs, but without aligning it with business priorities, it can be an expensive and often futile exercise. Digital lenders must clearly define these priorities and match them with the best AI/ML use cases. This helps in building models that are not only best suited to the relevant problems but also create value when deployed at scale.

Build explainable ML models

Financial services are highly regulated. The Reserve Bank of India in its November 2021 working group report recommended that algorithms used for underwriting in digital lending should be auditable. This means that ML models should not just deliver outcomes, but be able to explain the rationale behind their predictions. 

Digital lenders should be able to spell out their reasons for rejecting individual applications, while also offering explanations for their overall default rate projections. These reasons could include instances of previous defaults, employment status and missed payments.
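
As a loose illustration of what such reason codes can look like, the sketch below uses a transparent linear model whose per-feature contributions can be read off directly. Column names are assumed, and production systems may rely on richer explainers layered over more complex models.

```python
# Illustrative reason codes from a transparent linear model.
# Dataset and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("historical_loans.csv")
features = ["previous_defaults", "payments_missed_12m", "months_employed"]
model = LogisticRegression().fit(df[features], df["defaulted"])    # 1 = defaulted

applicant = df[features].iloc[[0]]                                 # one incoming application
# Positive contributions push the applicant's predicted default risk up.
contributions = model.coef_[0] * applicant.values[0]
reasons = pd.Series(contributions, index=features).sort_values(ascending=False)

print("Factors weighing most heavily against this application:")
print(reasons.head(2))
```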

Regulators are more likely to accept transparent ML models that publish their results on dashboards and apps. Users also tend to take desirable actions like timely EMI payments when they trust these models.

What we do at FinBox

At FinBox, we derive real business value through the use case-based application of available capabilities, AI or otherwise. Our solution-driven approach ensures that we prioritize the best fit over a forced fit of our resources.

Team organization

We have highly cross-functional teams focused on working towards the same goal, instead of divisions along superficial lines such as areas of technical expertise. This gives us a lot of flexibility to work with various technologies and programming languages to deliver customized solutions. In other words, business informs tech decisions and not the other way around.

Custom-fit AI use cases

Unlike most companies where AI is restricted to R&D teams, our data analysis team works closely with clients from start to finish and beyond. In fact, our ML models are trained on feedback received from clients.

We recognize that AI has its limitations. ML engines can run with only a certain degree of accuracy because data is constantly evolving and models themselves deteriorate. Because precision is crucial to lending, we deploy a combination of AI and simple data analytics to ensure no blind spots are left in our underwriting and ID verification processes.
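
As a rough sketch of what such a hybrid setup can look like, the function below combines a model score with hard rule checks. The thresholds, field names and policy rules are illustrative assumptions, not FinBox’s actual decisioning logic.

```python
# Illustrative hybrid decisioning: deterministic rules plus an ML score.
# Thresholds, field names and policy rules are assumptions for illustration.
def decide(application: dict, ml_default_probability: float) -> str:
    # Hard rules catch cases the model should never be allowed to approve on its own.
    if not application.get("kyc_verified", False):
        return "REJECT: ID verification failed"
    if application.get("monthly_income", 0) < 10_000:
        return "REJECT: income below policy floor"

    # The ML score handles the nuanced middle ground.
    if ml_default_probability < 0.05:
        return "APPROVE"
    if ml_default_probability < 0.15:
        return "MANUAL REVIEW"
    return "REJECT: high predicted default risk"

print(decide({"kyc_verified": True, "monthly_income": 45_000}, 0.03))   # -> APPROVE
```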

Rigorous training

We have trained and continue to refine our ML models on billions of data points and across a pool of 16 million new-to-credit borrowers to mitigate model risk and keep up with the changing landscape of digital lending.

Takeaway

Viewed from a business lens, indiscriminate application of AI across lending use cases is impractical. The technology is, in many ways, still finding its feet because –

  • Vetting big data and realigning models is complex and expensive.

  • Models tend to become dated quickly and need to be updated continuously.

To make the most out of AI for long-term business value, lenders should –

  • Build reliable, rigorously trained models

  • Identify where it fits with their business priorities

  • Get ahead of regulatory oversight by taking measures like building auditable ML models

  • Wed AI with existing capabilities for customized, hybrid usage.