Over the past few years, artificial intelligence has changed the landscape for businesses – enabling them to streamline operations, enhance customer service and improve decision-making. However, for every benefit there is a risk, and the same tools driving innovation are also behind a new wave of increasingly sophisticated fraud.
Threats include deepfake video calls, AI-generated synthetic identities, and precisely tailored, targeted attacks. Criminals are pivoting their operations faster and more convincingly than ever. Business owners and their IT teams need to understand these threats, and the evolving regulatory and defensive landscape, to protect their finances, data, customers and reputation.
So what types of AI-assisted fraud might businesses encounter?
1. Deepfakes
Deepfake fraud has risen sharply. The National Cyber Security Centre (NCSC) says that over the past 18 months, deepfake content used in fraud has increased fourfold, driven largely by the increasing accessibility and affordability of AI tools.
An incident from 2024 shows just how easy it can be to deceive people with this technology. The Hong Kong office of Arup, a British multinational engineering firm, was tricked into sending around £20 million to fraudsters after an employee joined a video call featuring deepfaked executives.
This case highlights a threat that simply wasn’t there a few years ago. Businesses that either don’t know about this kind of threat, or don’t realise how quickly AI is redefining the nature of attacks, are destined to be caught out. Cyber-savvy businesses can sometimes spot the signs and react before it’s too late, but no one should rely on good fortune to save them.
2. Voice cloning: the new business email compromise
It only takes a few seconds of audio – often obtained through a seemingly innocuous phone call – for fraudsters to create a voice clone. Armed with that, they can wreak havoc on unsuspecting businesses, particularly if safeguards haven’t been put in place. Voice cloning can be used to initiate urgent fund transfers, obtain confidential information or bypass verification processes. This new phenomenon has been branded BEC (Business Email Compromise) 3.0: AI-powered BEC attacks now use personalised, context-aware messages and deception across a range of channels.
3. Synthetic identities and AI boosted document fraud
We spend so much of our time online now that some of the people with whom we interact, and whom we think we know, we may never actually meet. But what’s the difference between a real person you only interact with online and a very realistic and convincing fake identity that blends real and fabricated data? The more effective deceptions are, after all, those rooted in truth.
Analysis carried out by Experian shows that synthetic identity fraud is one of the most alarming and fastest-growing threats. Nearly six out of ten UK businesses have taken steps to address their vulnerability to this kind of attack.
Tactics used by cyber attackers include AI-generated ID documents (some of which are extremely convincing, based as they are on the real thing) and deepfake facial images that can be used to get around onboarding checks.
4. SIM swap and 2FA interception
SIM swap attacks surged by over 1,000% year on year, with nearly 3,000 cases logged in the National Fraud Database.
What is SIM-swap fraud?
This happens when mobile carriers are tricked into transferring a victim’s phone number to a SIM card controlled by fraudsters, allowing them to intercept SMS-based 2FA codes. This bypasses security on bank accounts, emails and social media.
How it works:
- Social Engineering: Attackers pose as the victim, often claiming they’ve lost their phone, to try and convince support staff to transfer the number to a new SIM.
- 2FA Interception: Once this has happened, the attackers will receive all calls and text messages (SMS) in real-time.
- Account Takeover: Using 2FA codes, attackers can quickly reset passwords, access sensitive accounts and drain financial or cryptocurrency accounts. The first intimation the victim has that something is wrong is when they are locked out of their accounts. It only gets worse after that.
How will AI affect SIM-swapping?
Before the advent of AI, SIM-swapping was limited by human capacity: there was only so much one person could do. Now, AI automation has made the process more efficient and scalable:
- AI-driven tools are used to scrape personal data from social media, phishing and dark web breaches to build profiles for impersonating victims.
- AI automates the creation of realistic phishing websites that harvest the credentials (PINs, account numbers) needed to convince carrier employees to perform the swap.
- Criminals are increasingly using APIs and web applications to execute SIM swaps automatically, bypassing the need for manual, time-consuming phone calls.
- The rise of “eSIM swap” allows attackers to register a victim’s number to an eSIM on a device they control, often without needing to wait for a physical SIM card.
5. AI enhanced phishing and scam campaigns
The NCSC warns that AI is making phishing more effective and more frequent, with criminals using automation to refine targeting and remove errors typical of traditional scams.
In days gone by, scams were easy to spot: spelling and punctuation errors were obvious signs, and digging even a little deeper was enough to expose their lack of legitimacy. Now AI is being used to eliminate those tells, making a fraudulent approach very hard to identify. Combined with the lure of easy gain and spurious credibility, this means some people take what they see at face value, and the consequences can be costly indeed. In the first half of 2025 alone, reports suggest that about £100 million was lost to deepfake-driven investment scams, which often use fake footage of trusted individuals to convince victims.
Which businesses are most at risk?
No one is safe. AI-assisted fraud affects all sectors; 35% of UK businesses reported being targeted by AI-related fraud in early 2025, up from 23% the previous year. However, online-only retailers, retail banks and telecoms companies top the list of sectors most likely to be targeted.
The regulatory and policy landscape
The government and associated agencies have responded to the rise of AI-assisted fraud with a series of frameworks, guidelines and standards aimed at helping businesses secure their systems and reduce fraud risks.
1. UK AI Cyber Security Code of Practice (2025)
Published by the Department for Science, Innovation & Technology (DSIT), this details baseline cybersecurity requirements for AI systems, covering secure design, development, deployment, maintenance and end-of-life.
2. NCSC guidelines for secure AI system development
These set out actionable steps for secure design, secure deployment and operation of AI, including threat modelling, supply chain security and incident response planning.
3. Government AI security framework
The Government Security Group outlines best practices for secure AI use across public sector bodies, emphasising ‘secure by design’ principles and cross government security collaboration.
4. Implementation guide for the AI cyber security code
This provides real world implementation scenarios to help organisations apply the Code effectively across the AI supply chain.
Taken together, these form a robust foundation for organisations that want to protect their systems and mitigate fraud.
What can you do now?
1. Strengthen verification and financial controls
- Enforce multi person approvals for high value payments.
- Require secondary verification channels – phone calls to verified internal numbers, not numbers provided in an email or call.
- Avoid relying on voice or video alone for authentication.
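As a rough illustration, the multi-person approval rule above could be expressed in code along these lines (the threshold, role names and function are hypothetical, not a reference to any real payment system):

```python
# Sketch of a dual-approval control for high-value payments.
# The threshold and role names are illustrative assumptions.
HIGH_VALUE_THRESHOLD = 10_000  # e.g. GBP

def payment_allowed(amount: float, approvers: set[str], authorised: set[str]) -> bool:
    """Require two distinct authorised approvers above the threshold, one below it."""
    valid = approvers & authorised  # ignore approvals from unauthorised accounts
    required = 2 if amount >= HIGH_VALUE_THRESHOLD else 1
    return len(valid) >= required

authorised_staff = {"finance_director", "ops_manager", "cfo"}
print(payment_allowed(500, {"ops_manager"}, authorised_staff))                # True
print(payment_allowed(20_000, {"ops_manager"}, authorised_staff))             # False
print(payment_allowed(20_000, {"ops_manager", "cfo"}, authorised_staff))      # True
```

The key design point is that a single compromised identity – even a convincingly deepfaked executive on a video call – cannot move a large sum on its own.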
2. Deploy AI driven fraud detection
Fraud defence must also make use of AI: over half (52%) of UK businesses plan to improve AI analytics to combat AI fraud.
Areas to concentrate on include:
- Behavioural biometrics
- Transactional forensics (real time monitoring)
- Pattern analysis for synthetic identities
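To make the "real time monitoring" idea concrete, here is a deliberately naive sketch of one building block – flagging transactions whose amount deviates sharply from an account's past behaviour. Real fraud-detection platforms use far richer features and models; the function and threshold below are illustrative assumptions only:

```python
import statistics

def is_anomalous(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount is far outside past behaviour (z-score check)."""
    if len(history) < 2:
        return False  # not enough data to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

past = [120.0, 95.0, 110.0, 130.0, 105.0]
print(is_anomalous(past, 125.0))    # False: within normal spending range
print(is_anomalous(past, 5_000.0))  # True: orders of magnitude off-pattern
```

In production, a flagged transaction would typically trigger step-up verification rather than an outright block, keeping friction low for legitimate customers.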
3. Harden identity and access management
- Implement multi-factor authentication (MFA), but avoid SMS-based methods, which – as we’ve seen above – are vulnerable to SIM swap.
- Use authentication methods resistant to deepfake exploitation (e.g., device bound tokens, cryptographic keys).
- Apply zero-trust principles across systems. Zero trust is a cybersecurity strategy based on the mantra “never trust, always verify”: it assumes all network traffic is hostile, regardless of origin. It operates on three core principles: explicitly verifying users and devices based on all available data, using least-privilege access to limit exposure, and assuming breach to minimise damage.
4. Use realistic AI generated scenarios to train employees
NCSC and cyber security firms encourage organisations to simulate AI powered phishing, vishing and deepfake scenarios as part of staff training. This prepares teams to distrust “what they see and hear” and rely on process instead of instinct.
5. Adopt secure by design AI practices
NCSC and DSIT guidelines place a strong emphasis on:
- Threat modelling specific to AI components
- Data poisoning protection
- Supply chain security
- Secure deployment and monitoring of AI models
If your business is deploying AI, treat it as a high-risk asset that needs continuous security oversight, not a “fit and forget” installation.
6. Prepare an incident response plan for deepfake enabled fraud
Traditional cyber breach playbooks quickly become obsolete unless they are regularly updated and revised, and the one you’re using is likely no longer fit for purpose. For AI-powered fraud, your plan should include:
- Rapid financial freeze protocols
- Communication plans for when executive identities are compromised
- Law enforcement escalation pathways (e.g., Action Fraud)
This aligns with lessons learned from the Arup deepfake case, which we mentioned above.