Long Read  

AI a double-edged sword when fighting fraud


Every year, billions of pounds are lost as a result of fraud.

Artificial intelligence and machine learning models have long been used in fraud management for identifying patterns, irregularities and suspicious language, as well as augmenting and making heuristic rules-based scoring faster and more accurate.
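By way of illustration, the sketch below shows one way a heuristic rule and a machine-learning anomaly score might be blended into a single transaction risk score. It is a minimal, hypothetical example: the features, thresholds and weights are invented for illustration and do not describe any particular provider's system.

```python
# Minimal, illustrative sketch (not any vendor's actual system): blending a
# simple heuristic rule with an unsupervised anomaly score from scikit-learn.
# Feature names, thresholds and weights are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount_gbp, hour_of_day, transactions_in_last_24h]
history = np.array([
    [25.0, 13, 2], [60.0, 18, 1], [12.5, 9, 3],
    [80.0, 20, 2], [45.0, 11, 1], [30.0, 15, 2],
])

# Unsupervised model learns what "normal" activity looks like for this customer.
model = IsolationForest(contamination=0.1, random_state=42).fit(history)

def risk_score(txn: np.ndarray) -> float:
    """Blend a heuristic rule with the model's anomaly score (0 = low risk, 1 = high)."""
    # Heuristic rule: large amounts at unusual hours are suspicious.
    rule_score = 1.0 if (txn[0] > 1000 and (txn[1] < 6 or txn[1] > 22)) else 0.0
    # IsolationForest: lower decision_function values mean more anomalous.
    anomaly = -model.decision_function(txn.reshape(1, -1))[0]
    ml_score = float(np.clip(0.5 + anomaly, 0.0, 1.0))
    return 0.4 * rule_score + 0.6 * ml_score

print(risk_score(np.array([5000.0, 3, 9])))  # large, late-night, high-velocity: high score
print(risk_score(np.array([40.0, 14, 2])))   # typical transaction: low score
```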

Generative AI boosts these efforts and can help improve data ingestion, risk scoring policy management and accuracy, and investigation.

More specifically, GenAI-based co-pilots can suggest model improvements, recommend how to investigate fraud management alerts and cases, provide insights into reports, and help interpret and write suspicious activity reports.

However, GenAI is not without risks. Malicious actors are able to use its image, text, and voice generation capabilities to falsify documents, impersonate customers, and take over accounts on a large scale, without the need for manual labour. 

Deepfakes present new fraud management obstacles

One of the biggest risks is fraudsters' ability to use GenAI to create deepfakes and simulate legitimate-looking online sessions.

For example, GenAI technology can be used to generate images and imitate a person's voice over the phone to enrol in services, such as opening a bank account or signing up for an insurance policy.

Furthermore, GenAI can be used to defeat biometric authentication on protected websites and mobile applications, such as call centre voice biometrics or facial recognition in mobile apps and web browsers.

Using deepfake technology to clone and impersonate an individual will undoubtedly lead to fraudulent financial transactions, affecting not only individuals but also enterprises on a wider scale.

Indeed, we are seeing more and more examples of malicious actors targeting larger companies by using GenAI to impersonate a senior executive and authorise activities, including the sharing of sensitive data or wire transfers to criminals.

Challenges of risk scoring and deepfake identification

While GenAI can be used to perpetrate malicious activity, it can also be used to help mitigate cyber threats, including recognising and thwarting deepfakes.

However, organisations that use defensive GenAI for risk scoring and deepfake identification are also exposed to many challenges and issues. 

First, there is the risk of leaking intellectual property and encountering copyright violations; for example, data scientists may paste sensitive corporate data into GenAI tools that do not have adequate security measures in place.

For individuals, this can also lead to privacy violations, as there is a chance personally identifiable information (PII) could be fed into or leaked through GenAI tools.
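A common mitigation is to redact obvious PII before any text reaches an external GenAI tool. The sketch below is a simplified, regex-based illustration only; the patterns are hypothetical, and a production deployment would rely on dedicated PII-detection tooling.

```python
# Simplified illustration: redact obvious PII before text is sent to an external
# GenAI tool. The regex patterns are intentionally basic and purely illustrative;
# real deployments would use dedicated PII-detection services.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = (
    "Customer jane.doe@example.com (07700 900123) disputes a charge "
    "on card 4111 1111 1111 1111."
)
print(redact_pii(prompt))
# -> "Customer [EMAIL] ([UK_PHONE]) disputes a charge on card [CARD_NUMBER]."
```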

Second, it can be difficult to explain the decisions GenAI helps make, leaving those decisions almost indefensible to regulators, customers and pundits should the need arise.

Third, ensuring the consistency and repeatability of GenAI outputs and decisions is also challenging. As with all AI tools, particularly in their infancy, repeated queries are not guaranteed to produce the same results.
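For teams that want evidence of how repeatable a given prompt is, a simple harness can re-run the same query and measure how often the answers agree. The sketch below is a generic illustration: `generate` is a hypothetical stand-in for whichever GenAI client an organisation actually uses.

```python
# Illustrative harness for checking repeatability of GenAI outputs.
# `generate` is a stand-in for a real GenAI client; the dummy implementation
# exists only so the example runs end to end.
import hashlib
from collections import Counter
from typing import Callable

def generate(prompt: str) -> str:
    # Hypothetical placeholder for a real model call; deterministic here.
    return f"Risk assessment for: {prompt}"

def repeatability(gen_fn: Callable[[str], str], prompt: str, runs: int = 5) -> float:
    """Return the share of runs that produced the single most common output."""
    digests = Counter(
        hashlib.sha256(gen_fn(prompt).encode("utf-8")).hexdigest()
        for _ in range(runs)
    )
    return max(digests.values()) / runs

score = repeatability(generate, "Flag unusual card-not-present activity on this account.")
print(f"Repeatability: {score:.0%}")  # 100% only if every run returned identical text
```

Where a provider allows it, lowering the sampling temperature or pinning a specific model version can reduce run-to-run variation, and agreement checks of this kind can provide useful evidence for auditors.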