AI’s potential to drive efficiency and improvement across a variety of industries is undeniable. But, as with most technologies, there is a flip side to this story: AI can also be used to facilitate wrongdoing by providing perpetrators with the tools to commit crime, particularly fraud, in a more scalable and sophisticated way.
The Rise of AI in Fraud
Criminals are increasingly using AI and generative AI tools to scale their attacks, making fraud attempts more targeted, convincing and effective.
The financial sector has been particularly affected, with cryptocurrency seeing the fastest increase in fraud activity (growing on average 24% each year since 2020[1]), a trajectory that tracks the increasing availability of generative AI technologies. Overall, fraud cost the UK banking industry £1.17 billion in 2024[2], and the statistics for 2025 show that figure continuing to rise.
Outside of the financial sector, identity fraud – which appears to be particularly buoyed by the use of AI – affects a wide range of other professional services, such as law, consulting and accounting, which are particularly attractive targets because of the combination of client data and high-value transactions they handle. The telecommunications, recruitment, and gambling and gaming industries, where remote onboarding and digital verification are commonplace, are also seeing an increase in AI-based fraudulent activity.
Some examples of AI tools used by fraudsters include:
Deepfake video and voice scams
These are artificially generated videos and voice messages used to impersonate a trusted person, leading unwary individuals to engage with malicious content (for example, videos intended to dupe the recipient into transferring money to fraudulent accounts, or into investing in a product apparently promoted by a popular celebrity, only to find that the product does not exist).
The financial sector has been hit hard by these scams. In a trend that we can expect to continue, a 2024 survey[3] revealed that over half of accounts payable teams and finance professionals in the US and the UK had been targeted by attempted deepfake scams, with 43% of those surveyed admitting to having fallen victim to them.
Synthetic identities
Fraudsters create fake identities by combining real data (for example, stolen National Insurance numbers) with AI-generated information. These false personas are then used to commit fraud, for example by taking out credit with no intention to repay. Because the blend of real and AI-generated data is so sophisticated, fraudsters are able to manipulate creditworthiness checks through falsified documents.
The risk is not limited to the financial industry. In the recruitment sector, fake candidates are also on the rise. These synthetic identities and deepfakes, created with ever more sophisticated AI tools, are used to apply for positions and infiltrate companies in order to steal data or funds, or to plant malware. Reported cases of false applications increased by 10% in 2024, with over 21,000 cases recorded[4]. Jobseekers are also being targeted, with scammers advertising fictitious roles to steal jobseekers’ personal data and money (for example, by requesting visa or training-related payments).
AI-enabled chatbots
Sophisticated chatbots are being used to communicate with potential scam victims and manipulate them into making payments, in a variation on the well-known push payment scam. These tools allow scams to be scaled up, as they can engage with multiple victims in a short space of time without the need for any human involvement.
Sophisticated phishing and smishing
Large language models can be used to write phishing communications which closely resemble the tone and style of trusted brands, authorities or line managers. These scams are difficult to identify because of their high degree of sophistication and the absence of the grammatical errors, typos and clumsy language that have historically been relied on to spot a likely fraud.
The Use of AI to Prevent Fraud
At the same time, AI has also become an important tool in preventing and detecting fraudulent activity and, for much the same reasons that make it effective in the hands of fraudsters, it can be far more powerful than traditional methods of fraud detection. AI tools trained on large volumes of data can identify anomalies automatically and with greater accuracy, and AI systems can monitor far more transactions in real time than human reviewers could. AI systems also continue to learn and evolve as new data is supplied, adapting to new fraud techniques as they arise.
In banking, for example, machine learning models utilise historical data to identify and block suspicious transactions. Predictive analytics can also be used to estimate the types of future transactions a person might make, flagging any high-risk or unexpected activities for further investigation.[5]
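By way of illustration only, the short sketch below shows the general shape of this kind of anomaly detection, using the open-source scikit-learn library to score transactions against historical spending patterns. The feature names, figures and thresholds are invented for the example and are not drawn from any particular bank’s system.

```python
# Illustrative sketch only: unsupervised anomaly detection on transaction data.
# Feature names, values and the contamination rate are hypothetical assumptions,
# not taken from any real banking system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions for one customer: [amount_gbp, hour_of_day, days_since_last_txn]
historical = np.array([
    [25.00, 12, 1],
    [40.50, 18, 2],
    [12.99, 9, 1],
    [60.00, 20, 3],
    [35.75, 13, 1],
])

# Train on past behaviour; 'contamination' is the assumed share of anomalous activity.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical)

# Score new transactions: -1 marks an anomaly to be held for review, 1 looks normal.
new_transactions = np.array([
    [30.00, 14, 2],     # broadly consistent with past behaviour
    [4800.00, 3, 0],    # large amount at an unusual hour
])
flags = model.predict(new_transactions)

for txn, flag in zip(new_transactions, flags):
    status = "flag for review" if flag == -1 else "allow"
    print(f"amount £{txn[0]:.2f} at {int(txn[1])}:00 -> {status}")
```

In practice, banks combine models of this kind with far richer data and with human review of flagged transactions; the point of the sketch is simply to show how historical behaviour can be turned into an automated screening step.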
Technology and telecommunication companies are also making extensive use of machine learning models as part of network filtering (for example, to block spam messages), to identify and remove malicious content, and to flag potential fraud and intervene in real time[6].
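Again purely by way of illustration, a minimal text classifier of the kind used in network-level spam filtering might look like the sketch below, built with scikit-learn. The example messages and labels are invented, and this is a generic illustration rather than a description of any particular operator’s system.

```python
# Illustrative sketch only: a simple spam/legitimate classifier for text messages.
# The training messages and labels are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your parcel is held. Pay the release fee now via this link",
    "URGENT: your account is locked, verify your details now",
    "Hi, are we still meeting for lunch tomorrow?",
    "Your monthly statement is now available in the app",
]
labels = ["spam", "spam", "legitimate", "legitimate"]

# Pipeline: convert text to TF-IDF features, then fit a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

incoming = ["Verify your account now to avoid it being locked"]
# Likely to be labelled 'spam' given the overlap with the training examples.
print(classifier.predict(incoming))
```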
Conclusion
As the fraud threat grows through the use of ever-evolving AI, so does the opportunity to combat fraud using those very same technologies.
With fraudsters employing increasingly sophisticated tactics at a much larger scale, using AI for fraud prevention is no longer just an option but a necessity. Readers will be aware from our last article here of the statutory requirement for corporates to prevent fraud, and the importance of having ‘reasonable procedures’ in place to do so. The English Court also offers powerful remedies to victims seeking redress from fraudsters, including the ability to freeze assets and bank accounts and to trace funds that have been misappropriated.
Fladgate regularly acts for clients who have been the victims of fraud. Our data protection and cyber security team also have extensive experience in helping clients understand complex data regulatory issues to minimise legal risks and ensure compliance. If you would like to discuss the contents of this article in more detail, please do get in touch with its co-authors, Leigh Callaway, Partner (lcallaway@fladgate.com), and Maria Macias Perez, Associate (mmaciasperez@fladgate.com).
[1] Entrust Identity Fraud Report 2026
[2] UK Finance Annual Fraud Report 2025
[3] What is Deepfake fraud in accounts payable, and how can you prevent it?
[4] Fraudscape 2025: Reported fraud hits record levels
[5] As part of this trend, Starling Bank recently launched ‘Scam Intelligence’ – an AI tool built into the customer-facing banking app to protect customers from scams. This feature helps customers identify red flags in online purchases by enabling them to upload images of items, advertisements and communications with online sellers. The AI tool will then inform customers of any identified risks, such as whether the listing image is likely to be a deepfake or the bank account details do not match those of the seller. Customers can then decide whether or not to continue with the purchase, whilst educating themselves on the warning signs to look for in future.
[6] Vodafone has launched its ‘Vi Protect’ offering, an AI-based voice spam detection tool which identifies and flags fraudulent and spam calls in real time. By using advanced AI models, web crawlers and user feedback, the tool detects suspicious calls before they even reach customers, alerting them in real time so that they can decide whether to answer. Vodafone has stated that this system has successfully flagged over 600 million spam and scam calls and messages, helping to protect customers from fraud and data theft.



