Opinion  

'Firms would be wise not to rely on AI for decision-making'

David Wylie

Furthermore, monitoring for this sort of biased tendency within AI is typically introduced only ‘post-implementation’, by which point ‘hyper-tuned’ models can be highly susceptible to data drift and a lack of meaningful oversight.

These biases would be next to impossible to defend if they were found to underpin poor decision-making.


Should their existence be established, for example via a mis-selling tribunal, they would leave any lender vulnerable to further actions, with all those who feel similarly affected having a right to redress.

For these and other reasons, the FCA has intimated that it may not be possible to manage AI within the existing regulatory framework.

Fine-tuning what we already have may not be enough, it suggests; a new approach could be needed.

The regulator seems to be making it clear that, while lenders might like the idea of AI, they should be very careful not to lose the ability to explain to the regulator and to the customer precisely why credit was (or was not) granted. Trying to reverse-engineer a decision made by an AI algorithm in front of a tribunal will not cut it.

Given the level of uncertainty surrounding the use of AI, I certainly think caution should be exercised. It may be wise to wait for more visibility around the level of risk posed by the technology, and for further clarity from the regulator.

Lending that is not underpinned by rigorous, documentable decision-making has always been unwise. The finance industry has had to learn that lesson the hard way. 

It is undoubtedly one we should not forget. 

David Wylie is commercial director of LendingMetrics