Generative AI is expected to supercharge online scams and impersonation attacks in 2026, pushing fraud ahead of ransomware as the top cyber risk for businesses and consumers alike, according to a new warning from the World Economic Forum.
Nearly three-quarters (73%) of CEOs surveyed by the WEF said they or someone in their professional or personal network had been affected by cyber-enabled fraud in 2025. That shift has moved executives’ concerns away from ransomware, which dominated corporate threat lists just a year ago, and toward AI-driven scams that are easier to launch and harder to detect.
“The challenge for leaders is no longer just understanding the threat but acting collectively to stay ahead of it,” Jeremy Jurgens, managing director at the WEF, said. “Building meaningful cyber resilience will require coordinated action across governments, businesses and technology providers to protect trust and stability in an increasingly AI-driven world.”
Consumers are feeling the impact as well. A recent Experian report found 68% of people now see identity theft as their top concern, ahead of stolen credit card data. And that anxiety is backed up by federal data: the US Federal Trade Commission reported $12.5 billion in consumer fraud losses in 2024, a 25% year-over-year increase.
Experts say generative AI is helping fuel that growth by making scams easier to create and more convincing. The WEF report found 62% of executives had encountered phishing attempts, including voice- and text-based scams, while 37% reported invoice or payment fraud. Nearly a third (32%) said they had seen identity theft cases, too.
Increased use of AI tools is lowering the barriers for cybercriminals while raising the sophistication of attacks. Scammers can now quickly localize messages, clone voices and launch realistic impersonation attempts that are harder for victims to spot. The WEF also warns that generative AI is amplifying digital safety risks for groups such as children and women, who are increasingly targeted through impersonation and synthetic image abuse.
At the same time, many businesses and organizations lack the staff and expertise to defend against cyberthreats. While AI may help, the report cautions that poorly implemented tools can introduce new risks.
It isn't just businesses facing more threats. In its May 2025 Scamplified report, the Consumer Federation of America warned that tools that generate highly personalized phishing emails, deepfake voices and realistic-looking alerts are stripping away many of the traditional red flags people once relied on to spot a scam.
Read more: Meet the AI Fraud Fighters: A Deepfake Granny, Digital Bots and a YouTube Star
For consumers, the advice on how best to safeguard your privacy is simple but increasingly important.
The CFA urged consumers to slow down and question unexpected calls, texts or emails that create a sense of urgency or pressure to act quickly. It advised against sharing personal, financial or authentication information in response to unsolicited outreach, and recommended independently verifying requests by looking up official phone numbers or websites rather than trusting caller ID, links or contact details provided in a message. You should also consider reporting suspected scams to authorities, such as through the Federal Trade Commission's ReportFraud.ftc.gov site.
Generally, experts continue to recommend staying alert for suspicious messages, using strong, unique passwords, enabling multifactor authentication and keeping up with basic online security measures as AI-driven scams evolve in 2026 and beyond.
