Opinion  

'AI agents in financial services are coming. They will pose a challenge'

Craig Le Clair

AI agents are systems trained to act invisibly on behalf of an enterprise or individual, performing tasks, making decisions and interacting with data or other systems autonomously.

This technology builds on and transforms earlier iterations of agents – such as chatbots or robotic process automation digital workers – through enhanced context, learning and advanced analytics.

AI agents will alter how work gets done in information-heavy industries like financial services that rely extensively on automation.

AI agents have potential across all areas of financial services, but internal operations are today’s best target. 

Data entry and review workflows, like customer on-boarding, dispute resolution, fraud inquiries and loan and credit applications, should be top targets.

We ranked 11 AI agents for near-, middle- and longer-term adoption, based on deployment difficulty, risk and business value.

The front office faces more risk and deployment challenges, despite obvious potential. According to one recent study of more than 5,000 chat agents, access to generative AI tools increased their productivity (in terms of resolutions per hour) by 14 per cent. 

Yet significant issues make customer-facing applications of genAI too risky in financial services. There just are not enough controls in place to prevent harm to customers or a company's reputation. 

Outside the contact centre, prospecting bots have similar risks.

Agents that mine large datasets of financial news and social media interactions can identify individuals with specific investment needs.

Once identified, an AI agent can craft personalised emails, schedule follow-up calls and track engagement metrics. Yet qualification bots can be annoying. 

Internal operations are the place to start. To support advisers in wealth management, an AI agent can pull exhibits from various systems and prepare a summary for a client presentation.

"Know your customer" (KYC) is now a continuous endeavour. Today 80 to 90 per cent of surveyed financial institutions have a staggering 1,000 to 2,500 employees dedicated to KYC tasks.

AI agents can assist with lower-risk cases; anti-money laundering (AML) alerts are one example. An AI agent co-worker can take over basic screening responsibilities.

From an AML alert, an agent can prepare a dossier of public and private data and render an opinion, eliminating a high percentage of false positives.
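As a minimal sketch of what that triage step might look like, the snippet below gathers evidence into a dossier, scores how likely the alert is a false positive and recommends whether to close or escalate. All names, fields and thresholds are illustrative assumptions, not drawn from the article or any specific vendor.

```python
from dataclasses import dataclass, field

# Hypothetical AML alert record; field names are illustrative only.
@dataclass
class AmlAlert:
    alert_id: str
    customer_id: str
    amount: float
    counterparty_country: str
    triggered_rule: str

@dataclass
class Dossier:
    alert: AmlAlert
    evidence: list = field(default_factory=list)
    false_positive_score: float = 0.0
    recommendation: str = "escalate"

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes

def gather_evidence(alert: AmlAlert) -> list:
    """Stand-in for pulling public and private data (news, registries,
    internal KYC records) into a single dossier."""
    return [
        f"KYC profile on file for customer {alert.customer_id}",
        f"Transaction history reviewed for rule {alert.triggered_rule}",
    ]

def triage(alert: AmlAlert) -> Dossier:
    """Score the alert and recommend closing likely false positives,
    escalating everything else to a human analyst."""
    dossier = Dossier(alert=alert, evidence=gather_evidence(alert))

    score = 0.0
    if alert.amount < 1_000:
        score += 0.5  # small-value transfers are rarely suspicious
    if alert.counterparty_country not in HIGH_RISK_COUNTRIES:
        score += 0.4  # low-risk jurisdiction

    dossier.false_positive_score = score
    dossier.recommendation = "close" if score >= 0.8 else "escalate"
    return dossier

if __name__ == "__main__":
    alert = AmlAlert("A-1001", "C-42", 250.0, "GB", "structuring-velocity")
    result = triage(alert)
    print(result.recommendation, result.false_positive_score)
```

In practice the human analyst stays in the loop: the agent only clears the cases it scores as low risk and hands everything else on with the dossier attached.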

AI will also take on a work orchestration role, drawing connections across enterprise data, stored standard operating procedure documents and process metadata to define the next best work step.

Today, work patterns are predefined by rules and routing logic embedded in workflow applications or custom automation. But AI will create them on the fly.
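To make that contrast concrete, here is a hypothetical sketch: the first function mirrors today's predefined routing logic, while the second shows how an agent might instead assemble the case state and the relevant operating procedure into a prompt for a model to decide the next step. Every field, rule and function name is an assumption for illustration, not taken from any specific platform.

```python
from typing import Optional

# Hypothetical case state; keys are illustrative, not from any product.
case = {
    "type": "dispute",
    "documents_received": True,
    "customer_verified": False,
    "sla_hours_remaining": 6,
}

# Today: routing logic like this is hard-coded in workflow tools.
RULES = [
    (lambda c: not c["customer_verified"], "run identity verification"),
    (lambda c: not c["documents_received"], "request supporting documents"),
    (lambda c: c["sla_hours_remaining"] < 8, "prioritise for same-day review"),
]

def next_best_step_rules(c: dict) -> Optional[str]:
    """Predefined routing: the first matching rule wins."""
    for condition, step in RULES:
        if condition(c):
            return step
    return "close case"

def next_best_step_agent(c: dict, sop_text: str) -> str:
    """Sketch of the agent version: instead of fixed rules, the case state
    and the relevant standard operating procedure are assembled into a
    prompt for a language model, which proposes the next step on the fly.
    (Prompt construction only; no model call is made here.)"""
    prompt = (
        "Given this case state and operating procedure, "
        f"what is the next best work step?\n\nCase: {c}\n\nSOP: {sop_text}"
    )
    return prompt  # in a real system this would be sent to an LLM

if __name__ == "__main__":
    print(next_best_step_rules(case))
```

The design difference is where the decision logic lives: in the rules-based version it is frozen into the application, while in the agent version it is generated each time from the data and documents the organisation already holds.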

How should we build one?

The best platform will depend on the AI agent being built. Some agents focus on communicating with humans, such as a virtual customer service agent.

For these, Google’s conversational AI platform or a pure play such as Kore.ai or Amelia may be best.

Other platforms, such as Microsoft’s Copilot Studio, are better suited to simple tasks that augment an employee, like a post-call automation agent that helps an adviser organise a follow-up meeting.