“A finance worker pays out $25 million after video calls with a deepfake CFO,” read the CNN report of February 4, 2024. A Hong Kong-based firm sent US$25 million to fraudsters after an employee was instructed to do so by the Chief Financial Officer on a video call. Interestingly, the employee’s colleagues also appeared to be on the call. As it turned out, neither the CFO nor the colleagues were actually present: a fraudster had created deepfakes of them to trick the employee into sending the money.
Fraudsters deploying sophisticated Generative AI and deepfake videos to cheat both banks and their customers is no longer a surprise. The Deloitte Centre for Financial Services predicts that Generative AI-enabled fraud losses may reach USD 40 billion in the US by 2027, growing at a CAGR of 32% from 2023.
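As a back-of-the-envelope illustration of what that growth rate implies, the short calculation below discounts the 2027 figure back to 2023. It assumes the 32% CAGR runs over the four years from 2023 to 2027; the implied base figure is derived here, not quoted from the Deloitte report.

```python
# Rough check of the projection cited above (assumption: the 32% CAGR
# applies over the four years 2023 -> 2027; the 2023 base is implied,
# not a figure quoted by Deloitte).

def implied_base(future_value: float, cagr: float, years: int) -> float:
    """Discount a future value back by a compound annual growth rate."""
    return future_value / (1 + cagr) ** years

base_2023 = implied_base(future_value=40e9, cagr=0.32, years=4)
print(f"Implied 2023 GenAI-enabled fraud losses: ~USD {base_2023 / 1e9:.1f} billion")
# Prints roughly 13.2, i.e. about USD 13 billion already in 2023.
```

Discounting USD 40 billion back by four years of 32% growth puts 2023 losses at roughly USD 13 billion, which gives a sense of the scale already involved.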
Such incidents are even more likely in countries like India, which has seen rapid fintech adoption across both retail and corporate banking. Easy access to new generative AI tools that can cheaply create deepfake videos, fictitious accents and voices, and fabricated documents is a constant concern for regulators and the fintech industry. An entire ecosystem on the dark web sells such software for anywhere from as little as USD 20 to thousands of dollars, quickly rendering many anti-fraud tools ineffective.
The ‘self-learning’ capability of generative AI and its ability to adapt quickly add to the challenges faced by computer-based detection systems. Using generative AI, fraudsters can now scale up cybercrime, targeting many victims simultaneously with the same or even fewer resources.
The FBI’s Internet Crime Complaint Center, which tracks 26 categories of cybercrime, recorded 21,831 instances of business email fraud in 2023, with losses of approximately USD 2.7 billion. The Deloitte Centre for Financial Services estimates that, in an “aggressive” adoption scenario, Generative AI-enabled email fraud losses could total around USD 11.5 billion by 2027.
Financial institutions and the fintech industry are already using AI and ML to detect, alert on, and respond to cyber threats. For instance, some banks are automating the processes that diagnose fraud and route investigations to the appropriate team within the bank. Some are using large language models (LLMs) to flag potential fraud, while others deploy decision-intelligence tools that scan a trillion data points to predict whether a transaction is genuine.
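To make the idea concrete, below is a minimal sketch of the kind of transaction-level anomaly scoring such systems build on. It is illustrative only: the features, synthetic data, threshold behaviour, and use of scikit-learn’s IsolationForest are assumptions made for the example, not a description of any bank’s actual pipeline.

```python
# Illustrative transaction anomaly scoring (a sketch only; real bank systems
# combine far richer features, rules, graph signals, and human review).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic features per transaction (assumed for this example):
# amount in USD, hour of day, and number of transfers from the same
# account in the previous 24 hours.
normal_history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=5_000),  # typical amounts, tens to hundreds of USD
    rng.integers(8, 22, size=5_000),                 # mostly daytime activity
    rng.poisson(2, size=5_000),                      # low transfer velocity
])

# A large transfer at 3 a.m. from an account with unusually high velocity.
incoming = np.array([[250_000.0, 3, 15]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_history)

verdict = model.predict(incoming)[0]      # +1 = looks genuine, -1 = anomalous
score = model.score_samples(incoming)[0]  # lower scores are more anomalous
print(f"score={score:.3f}",
      "-> flag for investigation" if verdict == -1 else "-> looks genuine")
```

In a real deployment, scores like these would feed the alerting and case-routing workflows described above, with human analysts reviewing the transactions that get flagged.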
The ability to counter AI-enabled fraud will be the most important competitive edge for any fintech player in the future. Coupling modern technology with human intuition to pre-empt cyberattacks is an equally important step. No single institution can fight financial fraud alone: fintech players are collaborating with knowledgeable and trustworthy partners, technology providers, researchers, and academia to build strategies and solutions that stay ahead of Generative AI-equipped fraudsters.
Customers themselves have an important role to play in preventing financial losses. They must follow the guidelines prescribed to them and remain vigilant, paying attention to push notifications, in-app alerts, and nudges at the point of transaction to minimise the risk of fraud. Spot, stop, and immediately report any instance of financial fraud to your peers, groups, and community to help stamp out this menace.
Source: Deloitte