Opinion  

'Firms would be wise not to rely on AI for decision-making'

David Wylie

Glance through the marketing material of software platforms and it will not be long before you notice a new emphasis on the use of artificial intelligence.

Some technology providers present it as the new frontier for lenders looking to further optimise their decision-making. What better way to go than to delegate the whole process to a hyper-sophisticated algorithm?

The marketing pitch is seductive, but it bears some investigation.

If you drill down into the products that such software companies are promoting, you often discover that they do not really amount to what most would define as AI.

The AI moniker has been co-opted to make something appear more impressive than it actually is. 

Perhaps this is just as well, because regulators are not wholeheartedly backing the technology. In the UK, US and Australia they have expressed misgivings about its use in generating lending decisions. 

Their fear is that, if adopted prematurely, the technology may not actually improve decision-making at all. 

The US’s Consumer Financial Protection Bureau has cautioned lenders and intermediaries that ‘agency’ cannot be attributed to AI systems, because doing so risks shifting accountability for decision-making away from firms.

Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions, it cautions.

The law gives every applicant the right to a specific explanation if their application for credit is denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand. 

The bottom line is that complex algorithms must provide specific and accurate explanations for denying applications, it says. 

Reading between the lines, the implication is that many AI platforms may not do this, raising the possibility of future liability claims. 

In the UK, this issue was identified in a recent Bank of England/Financial Conduct Authority report, which suggested "lack of AI explainability" posed a potential reputational and regulatory hazard.

The implicit question again posed is: would a company be able to justify its decision when facing a mis-selling claim? 

It is not that the AI would necessarily make the wrong decision; it may well make the right one. The question is whether the lender can demonstrate to a client how that decision was arrived at in the first place. 
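To make that concrete, here is a minimal, purely illustrative sketch of the kind of transparency regulators are asking for: a simple linear scorecard in which each factor's contribution to the decision is explicit, so a specific reason for a denial can always be produced. The factor names, weights and cut-off are invented for illustration and do not describe any real lender's model.

```python
# Purely illustrative sketch: a transparent, linear scorecard in which each
# factor's contribution to the decision is explicit, so the lender can give an
# applicant a specific reason for a denial. All names, weights and the cut-off
# are invented; they do not represent any real lender's model.

WEIGHTS = {
    "income_band": 2.0,         # points per income band (invented)
    "months_at_address": 0.1,   # points per month at current address (invented)
    "missed_payments": -3.0,    # points per recent missed payment (invented)
}
CUT_OFF = 10.0

def score(applicant: dict) -> tuple[float, dict]:
    """Return the total score and each factor's individual contribution."""
    contributions = {name: weight * applicant[name] for name, weight in WEIGHTS.items()}
    return sum(contributions.values()), contributions

def decision_with_reason(applicant: dict) -> str:
    total, contributions = score(applicant)
    if total >= CUT_OFF:
        return f"Approved (score {total:.1f})"
    # The factor that dragged the score down most becomes the specific reason.
    worst = min(contributions, key=contributions.get)
    return f"Declined (score {total:.1f} below {CUT_OFF}); principal reason: {worst}"

print(decision_with_reason({"income_band": 3, "months_at_address": 24, "missed_payments": 2}))
```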

That the decision is comprehensively evidenced is particularly important because AI is already known to be prone to what is referred to as AI bias, AI model risk, or, in everyday parlance, the law of unintended consequences.

Model bias occurs during the AI training process and can bake in certain outcomes. Automated model-selection tools can exacerbate the risk, as can incomplete datasets. 

For example, the historical gender data gap, which has left us with more male-oriented data than female, could well lead to lender decisioning skewed by sex.
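To illustrate the mechanism, the short sketch below uses invented figures only: the historical records contain far fewer female applicants, and a naive model that simply reproduces past approval rates per group inherits that skew regardless of individual creditworthiness.

```python
# Purely illustrative sketch of how a skewed training set can bake bias into
# automated decisioning. All figures are invented: the historical records
# contain far fewer female applicants, and a naive "model" that just reproduces
# past approval rates per group inherits that skew.

from collections import Counter

# Invented historical lending records: (sex, was_approved)
history = (
    [("M", True)] * 800 + [("M", False)] * 200   # plentiful male records
    + [("F", True)] * 40 + [("F", False)] * 60   # sparse, skewed female records
)

def learned_approval_rate(records, group):
    """Approval rate the naive model 'learns' for a group from the history."""
    counts = Counter(records)
    approved, declined = counts[(group, True)], counts[(group, False)]
    total = approved + declined
    return approved / total if total else 0.0

for group in ("M", "F"):
    print(group, f"{learned_approval_rate(history, group):.0%}")
# Prints 80% for men and 40% for women: the model replicates the historical
# gap rather than assessing individual creditworthiness.
```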