AI Is the New Social Engineer: LLMs, Voice Cloning, and the 442% Vishing Surge
How AI-powered voice cloning, malicious LLMs, and multi-channel attacks are rewriting the social engineering playbook.
The most dangerous adversary your organization will face this year doesn’t write malware. They write emails. They make phone calls. And increasingly, they’re letting AI do both.
Social engineering has overtaken malware as the dominant initial access vector. That’s not speculation. It’s in the data. CrowdStrike’s 2025 Global Threat Report found that 79% of initial access attacks last year were entirely malware-free. No payloads. No exploits. Just stolen credentials, impersonation, and psychological manipulation. And the threat actors pulling this off are getting a significant upgrade from artificial intelligence.
This is the first in a series of threat research briefings from Dark Dossier. We’re starting where the attack surface starts: with people.
The Numbers That Should Concern You
Voice phishing (also known as vishing) exploded by 442% from the first half to the second half of 2024. That’s not a typo. CrowdStrike’s Adam Meyers put it bluntly: adversaries are finding new ways to gain access because modern endpoint security tools have forced them to work differently. Bringing malware into a modern enterprise, he said, is like trying to bring a water bottle through airport security. You’re probably going to get caught.
So they stopped trying.
Instead, they picked up the phone. They wrote better emails. And they started using AI to do it faster, cheaper, and at a scale that wasn’t previously possible.
Here’s the current landscape by the numbers:
$2.77 billion in BEC losses reported to the FBI in 2024 alone, across 21,000+ complaints.
$25 billion in estimated annual losses from voice-based fraud globally (Truecaller).
$200 million+ in deepfake-enabled fraud losses in Q1 2025 alone.
37% of organizations worldwide have already been hit by a voice deepfake scam (Microsoft).
60% of recipients fall for AI-generated phishing emails, matching the success rate of human-crafted lures.
1,265% increase in AI-driven phishing attacks (SentinelOne).
The trend line is clear. Social engineering isn’t just persistent. It’s accelerating, and AI is the catalyst.
How Attackers Are Weaponizing LLMs
Let’s start with phishing, because that’s where the LLM impact is most measurable.
The old tells are gone. Grammatical errors, awkward phrasing, clumsy formatting. The things we trained employees to spot for years are artifacts of a pre-AI era. LLM-generated phishing emails are grammatically sound, contextually relevant, and linguistically natural. Academic research has confirmed that conventional detection systems fail when faced with emails crafted by large language models.
Threat actors aren’t just jailbreaking ChatGPT. They’ve built purpose-designed tools:
WormGPT 4 resurfaced as a commercial subscription service in late 2025, advertised on Telegram and underground forums. Built on open-source LLM foundations and fine-tuned on malware code, exploit write-ups, and phishing templates, it generates BEC lures and phishing content with zero ethical guardrails. Palo Alto’s Unit 42 documented its tiered pricing model and explicitly offensive capabilities.
KawaiiGPT, identified in mid-2025 and now at version 2.5, took a different approach: free, open-source, available on GitHub, and deployable in under five minutes on a standard Linux box. It represents the democratization of AI-assisted social engineering. The barrier to entry has effectively collapsed.
The operational impact is significant. Researchers estimate attackers save 95% on phishing campaign costs using LLMs. A single prompt can generate a functional credential-harvesting email and fake login page in roughly 20 seconds. And because LLMs can produce hundreds of slightly unique message variants from the same base template, they defeat signature-based email filtering through sheer polymorphic volume.
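The polymorphism problem can be illustrated with a toy sketch (hypothetical email text, standard library only, not any vendor's actual filter): two near-identical lure variants produce entirely different cryptographic hashes, so an exact-match signature that caught the first variant misses the second, while even a crude token-overlap score still sees them as near-duplicates.

```python
import hashlib

def sha256_signature(text: str) -> str:
    """Exact-match signature, as a naive email filter might compute it."""
    return hashlib.sha256(text.encode()).hexdigest()

def token_jaccard(a: str, b: str) -> float:
    """Crude near-duplicate score: overlap between lowercase word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Two LLM-style variants of the same lure (invented for illustration).
v1 = "Hi Dana, please review the attached invoice before 5pm today."
v2 = "Hi Dana, kindly review the attached invoice before 5 pm today."

print(sha256_signature(v1) == sha256_signature(v2))  # False: signature miss
print(token_jaccard(v1, v2))                          # high overlap despite the miss
```

Scaled to hundreds of variants per template, this is why defenders are moving from exact signatures toward similarity- and behavior-based detection.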
Mimecast’s threat intelligence team analyzed email traffic from January 2022 through March 2025 and found a clear inflection point in AI-generated malicious emails correlating directly with ChatGPT’s public release. Their research also identified linguistic fingerprints: AI-generated phishing emails disproportionately use phrases like “I hope this message finds you well” and “Thank you for your time.” Those patterns are useful for detection today but will likely fade as adversaries refine their prompting.
But email is only half the story.
Voice Cloning: The Game Changer
If LLM-generated phishing is an evolution, AI voice cloning is a revolution.
The technology has reached a threshold where a convincing voice replica can be generated from as little as 30 seconds of audio. Tools like ElevenLabs, Resemble.AI, and various open-source frameworks don’t just match tone. They replicate inflection, cadence, accent, and in some cases, emotional state. Combined with caller ID spoofing, the result is a phone call that passes every human credibility check.
The attack pattern follows a consistent framework. Adversaries first gather voice samples from publicly available sources. Think podcasts, conference talks, earnings calls, webinars, YouTube, and social media. A McAfee survey found that more than half of all adults share voice data online at least weekly. Executives are particularly exposed through media appearances and corporate communications.
The reconnaissance feeds into the social engineering chain: clone the voice, spoof the number, build the pretext, execute the call. The most common scenarios in 2025 include executive impersonation for wire transfers, IT support impersonation for credential harvesting, and vendor impersonation for payment redirection.
And the results have been devastating.
In early 2024, attackers used deepfake video and audio to impersonate a CFO during a live video conference at engineering firm Arup, resulting in a $25.6 million fraudulent transfer. In early 2025, fraudsters cloned the voice of Italian Defense Minister Guido Crosetto and called prominent business leaders with a fabricated ransom scenario. At least one victim transferred nearly a million euros before authorities froze the funds. A tier-one bank reported to Reality Defender that its call centers were suddenly overwhelmed by deepfake voice calls attempting unauthorized account access.
The emerging frontier is even more concerning. Researchers and threat actors are now experimenting with live AI rebuttals: systems capable of answering verification questions in real time during a call. When that capability matures and merges with deepfake video, we’re looking at real-time synthetic impersonation on video conferences that no human can reliably detect.
Scattered Spider: Social Engineering as Core Tradecraft
No discussion of modern social engineering is complete without Scattered Spider.
Tracked as UNC3944, Octo Tempest, and Storm-0875, this group of primarily native English-speaking young adults in the US and UK has made social engineering their primary weapon. Their linguistic and cultural fluency is their decisive advantage. They sound like the employees they’re impersonating because, demographically, they are indistinguishable from them.
Their signature technique remains help desk vishing: calling corporate IT service desks, impersonating employees, and requesting password resets or MFA re-enrollment. They back this up with extensive OSINT from LinkedIn and social media to pass identity verification questions. Once they have initial access, they deploy a mix of legitimate remote access tools, SIM swapping, MFA fatigue attacks, and increasingly, ransomware (most recently DragonForce).
CISA updated its Scattered Spider advisory in July 2025 with new TTPs, noting more sophisticated social engineering techniques and new malware variants. The group’s phishing infrastructure has also evolved. They’ve shifted from easily detected hyphenated domains (SSO-company[.]com) to subdomain-based patterns (SSO.company[.]com) to evade automated domain impersonation detection.
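The evasion logic behind that domain shift can be sketched in a few lines (the brand name and hostnames below are invented for illustration; real detectors are more involved): a legacy rule that hunts for hyphenated "keyword-brand" registrations simply never fires on the newer dot-delimited pattern, while a label-aware rule still does.

```python
import re

BRAND = "examplecorp"  # stand-in brand name (hypothetical)
SSO_KEYWORDS = ("sso", "okta", "vpn", "login")

def hyphen_rule(hostname: str) -> bool:
    """Legacy detector: flags 'keyword-brand' / 'brand-keyword' registrations."""
    return any(re.search(rf"{kw}-{BRAND}|{BRAND}-{kw}", hostname)
               for kw in SSO_KEYWORDS)

def label_rule(hostname: str) -> bool:
    """Label-aware detector: flags the SSO keyword and the brand appearing
    as separate dot-delimited labels, the newer pattern."""
    labels = hostname.split(".")
    return any(kw in labels for kw in SSO_KEYWORDS) and any(BRAND in l for l in labels)

old_style = f"sso-{BRAND}.com"  # hyphenated registration
new_style = f"sso.{BRAND}.com"  # subdomain-based pattern

print(hyphen_rule(old_style), hyphen_rule(new_style))  # True False: the new form evades
print(label_rule(new_style))                            # True: label-aware rule catches it
```

The broader lesson is the usual one: pattern-specific detections decay as soon as adversaries learn what the pattern is.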
Perhaps most significantly, Obsidian Security published research in November 2025 concluding that Scattered Spider and ShinyHunters (UNC6040) have likely merged or formed a partnership. ShinyHunters brought Salesforce-focused data exfiltration expertise. Their operators vish employees into authorizing malicious connected apps disguised as Salesforce Data Loader. Scattered Spider contributed the help desk social engineering playbook. The combined capability set is formidable.
Arrests have slowed but not stopped them. Four members were arrested in the UK in July 2025, and a 19-year-old was charged in the US in September for 120 network intrusions. But as one Mandiant researcher noted, the arrests have “spooked” other members without eliminating the threat. The Com, the loose online criminal network these actors operate within, continues to produce operators using the same TTPs.
The QR Code Blind Spot
While vishing gets the headlines, quishing (QR code-based phishing) has quietly become one of the most effective MFA bypass techniques available.
QR code attacks surged 400% between 2023 and 2025. The technique exploits a fundamental gap in enterprise security architecture: when an employee scans a QR code with their personal phone, the entire interaction moves off the corporate network. EDR doesn’t see it. Email security doesn’t see it. The SOC is blind.
The attack flow is elegant in its simplicity. An email arrives with a QR code instead of a link. Because it appears as an image, it passes URL scanning and link reputation checks. The employee scans it with their phone, gets redirected through attacker-controlled infrastructure that fingerprints their device, and lands on a mobile-optimized phishing page mimicking Microsoft 365, Okta, or a VPN portal. Advanced variants use Adversary-in-the-Middle proxying to capture not just credentials but the session cookie, completely bypassing MFA.
In January 2026, the FBI issued an alert confirming that North Korea’s Kimsuky group has adopted quishing in spear-phishing campaigns against think tanks, academic institutions, and government entities. The FBI’s language was notable. They characterized quishing as a “high-confidence, MFA-resilient identity intrusion vector in enterprise environments.” When the FBI calls a technique MFA-resilient, it’s time to take notice.
The only authentication method that resists AiTM-style quishing attacks is FIDO2/WebAuthn hardware security keys, because they’re cryptographically bound to the legitimate site’s domain. Everything else (SMS codes, TOTP apps, push notifications) can be intercepted through session token capture.
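The domain binding works because the browser itself embeds the page origin in the signed WebAuthn assertion. A minimal sketch of the relying-party check, assuming a hypothetical origin and omitting the signature, challenge, and authenticator-data verification the spec also requires:

```python
import base64
import json

EXPECTED_ORIGIN = "https://login.examplecorp.com"  # the RP's real origin (hypothetical)

def verify_client_data(client_data_b64: str) -> bool:
    """Relying-party origin check from the WebAuthn spec: the browser writes
    the page origin into clientDataJSON, which is covered by the authenticator's
    signature, so a phishing proxy on another domain cannot forge a pass."""
    client_data = json.loads(base64.urlsafe_b64decode(client_data_b64))
    return client_data.get("origin") == EXPECTED_ORIGIN

def make_client_data(origin: str) -> str:
    """Build a clientDataJSON blob as the browser would for a given origin."""
    blob = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                       "origin": origin}).encode()
    return base64.urlsafe_b64encode(blob).decode()

legit = make_client_data("https://login.examplecorp.com")
proxied = make_client_data("https://login.examplecorp.com.evil-proxy.net")

print(verify_client_data(legit))    # True
print(verify_client_data(proxied))  # False: origin mismatch defeats the AiTM relay
```

An OTP code, by contrast, carries no notion of where it was typed, which is exactly why AiTM proxies can relay it.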
What This Means
We’re in a new phase. The social engineering threat has undergone a qualitative shift driven by AI tooling, and the defensive playbooks that worked five years ago are increasingly insufficient. Employees can’t spot AI-generated phishing by looking for typos. They can’t trust a familiar voice on the phone. They can’t assume a QR code from “IT” is legitimate.
For defenders, the priorities are clear: phishing-resistant MFA everywhere, zero trust architecture that can revoke stolen session tokens instantly, rigorous identity verification at help desks (video verification, not just knowledge-based questions), and AI-driven anomaly detection that analyzes communication patterns rather than scanning for known-bad indicators.
For red teams, the testing opportunities have never been richer. If you’re not incorporating vishing, AI-enhanced phishing, and multi-channel social engineering into your engagements, you’re not testing against the real threat landscape.
The adversary has upgraded. Time to upgrade the defense.
