AI has “fully defeated” most of the ways people authenticate

In a recent interview at the Federal Reserve Large Banks Conference, OpenAI CEO Sam Altman warned of “a significant impending fraud crisis” driven by AI’s ability to defeat biometric authentication.

Speaking with Michelle W. Bowman, Vice Chair for Supervision on the Fed’s Board of Governors, Altman put it bluntly:

AI has fully defeated the ways that most people authenticate currently, other than passwords.

Voiceprints were a “crazy thing to still be doing,” he warned, saying that even exotic forms of authentication involving video were already obsolete. Altman explained that the technology to bypass these systems already exists, and that while OpenAI and its peers hadn’t released it, “some bad actor will,” adding that “this is not a super-difficult thing to do.”

Altman’s message to the financial services industry was the latest warning light on a dashboard already crowded with alerts about AI’s effect on cybercrime. In November 2024, the US Treasury’s Financial Crimes Enforcement Network (FinCEN) issued an alert about an increase in fraud schemes involving deepfake media targeting financial institutions. It reported that criminals were already using generative AI to “lower the cost, time, and resources needed to exploit financial institutions’ identity verification processes,” and that the impact was already being felt:

Beginning in 2023 and continuing in 2024, FinCEN has observed an increase in suspicious activity reporting by financial institutions describing the suspected use of deepfake media in fraud schemes targeting their institutions and customers.

That same year, global accounting firm Deloitte warned that generative AI could push annual US fraud losses to $40 billion by 2027, up from $12.3 billion in 2023. It also noted that scamming software available for sale on the dark web “is making a number of current anti-fraud tools less effective.”

AI’s ability to fake faces and voices isn’t just undermining authentication; it’s also turning long-standing security advice into a liability.

For years, security professionals have told people to use a second channel of communication to verify suspicious messages. The advice goes like this: if you get a text or email from your CFO requesting an urgent money transfer, call them—or better still, start a video call—to confirm it’s really them.

In the age of AI, advice like that is obsolete.

In June, Huntress described an attack that began when an employee at a cryptocurrency foundation received an invitation to a Zoom meeting with several apparent senior leaders from their company. During the meeting, the employee’s microphone didn’t work, so the “colleagues” advised downloading a Zoom extension to fix it.

But the extension wasn’t real—and neither were the attendees. The only genuine participant was the victim, whose computer was soon flooded with malware from the fake extension.

The setup resembled a 2024 attack against engineering giant Arup, where an employee was convinced to transfer $25 million to criminals. In that case too, a Zoom call filled with deepfake senior managers provided enough proof and pressure to push the victim into action. For now, phishing emails remain more common than Zoom calls full of synthetic colleagues, but these attacks are canaries in the coal mine. The AI that enabled them is the most expensive, least efficient AI criminals will ever have access to, and AI capabilities are doubling roughly every seven months.

AI is undermining some important pillars of cybersecurity, with authentication among the most exposed. So what can organisations do?

It’s clear that using a second channel of communication to verify a high-value request is no longer sufficient. Security advice will have to change to reflect this new reality, and organisations will need other ways to ensure that people on calls are real. The easiest way to do that will be to agree shared secrets in person, or to use one-time passcodes, which can be exchanged on calls to authenticate the participants.
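By way of illustration, here is a minimal Python sketch of that idea: both parties derive a short one-time code from a secret agreed in person, using the standard TOTP scheme (RFC 6238), and read it out at the start of the call. The function names, code length, and 30-second window are assumptions made for the example, not a recommendation of any particular product.

```python
# Minimal sketch, assuming a secret agreed in person: both colleagues run the
# same code and compare the six digits out loud at the start of a call.
import hashlib
import hmac
import struct
import time

def totp(shared_secret: bytes, at: float | None = None, step: int = 30, digits: int = 6) -> str:
    """Standard TOTP (RFC 6238): a short code derived from the secret and the clock."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 10 ** digits:0{digits}d}"

def verify_spoken_code(shared_secret: bytes, spoken_code: str, drift: int = 1) -> bool:
    """Accept the code if it matches the current 30-second window or a neighbouring one."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(shared_secret, now + i * 30), spoken_code)
        for i in range(-drift, drift + 1)
    )

secret = b"agreed-in-person-never-sent-by-email"   # the pre-shared secret
print(totp(secret))                                # e.g. "492039": read out on the call
print(verify_spoken_code(secret, totp(secret)))    # True
```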

It is hard to see how authentication that relies on a voice or video can have any significant future, but thankfully there are some good alternatives to hand.

As Sam Altman alluded to, AI offers no advantage in cracking strong random passwords, so strong passwords and multifactor authentication are likely to get a new lease of life, at least for now.
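As a small, hedged example, this is the kind of password that gives AI nothing to work with; the character set and length below are illustrative choices rather than a policy recommendation.

```python
# Minimal sketch: a strong random password from a cryptographically secure RNG.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    """Roughly 6.5 bits of entropy per character, so around 130 bits at length 20."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # store it in a password manager, not in your head
```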

The long-term successor to passwords, FIDO2 cryptographic passkeys, is similarly AI-resistant. AI can’t be used to intercept or crack passkeys, and it is unlikely to threaten the on-device biometrics that secure them, certainly not at scale. Even if it ever does, the local biometric check can simply be swapped for a device PIN.
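To illustrate why, here is a simplified challenge-response sketch in the spirit of FIDO2/WebAuthn, written against the Python cryptography package. It is an illustration of the idea rather than the WebAuthn protocol itself: the private key never leaves the device, and the only thing that crosses the network is a signature over a fresh, random challenge, leaving nothing for an attacker to capture, clone, or replay.

```python
# Simplified sketch of the challenge-response idea behind passkeys.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator creates a key pair and shares only the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)

# ...the device signs it after a local user-verification step (biometric or PIN)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server checks the signature; verify() raises InvalidSignature on a mismatch.
server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified; no reusable secret crossed the network")
```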

The rise of AI deepfakes makes effective Endpoint Detection and Response (EDR) more important too. Generative AI is giving criminals new ways to enter business networks, but the targets of their attacks remain the same: money transfers, or data stored on endpoints and servers where EDR stands guard. Security has always been a matter of defence in depth, and when one layer is weakened, the others must take up the slack.

To learn more about how AI is transforming the landscape of cybercrime, empowering attackers to launch more sophisticated attacks, and ushering in a new era of autonomous AI-driven threats, read Cybercrime in the Age of AI.