Don't rely on users to spot AI scams, expert warns
Training employees was never a great preventative strategy, even before AI. Instead, cybersecurity teams need to stop scams before they ever reach the user.
Aug 12, 2024 • 6 Minute Read
In February 2024, a company lost $40 million to scam artists using AI deep fakes. An employee entered a meeting attended by the chief financial officer, coworkers, and a few externals—or so it appeared. Only after the damage was done did they discover that every other attendee was an AI-generated fake.
Phishing scams are hardly new. Even before ChatGPT waltzed onto the scene and turned our lives upside down, nine out of ten cyber attacks began with a phishing email. Like Sisyphus, cybersecurity teams are burdened with pushing the boulder of user awareness and training up the hill, only to have it constantly roll back down again.
Maybe, just maybe, we’re going about this the wrong way.
I spoke to Pluralsight author and renowned cybersecurity specialist John Elliott, who helped author the PCI DSS standards on behalf of Mastercard and Visa. If you’re not familiar with them, you still benefit from them every day: they’re what keep your payment card information safe from theft and fraud. He also specializes in how people engage with security directives.
Scammers will always outsmart users at spotting scams
“One of the first things is we shouldn’t expect a lot of people in our organizations to reliably spot phishing emails in the first place,” John said. “Many people will disagree with this.”
“However, it is the scammer’s full-time job to try to trick you into doing something. If they spend a week crafting a perfectly tailored scam message, then follow up with a voice mail, it’s no surprise that users are going to be tricked by this—especially since the scammer has AI tools which make them better at it.”
John said that, by comparison, an everyday user did not come into work with the express purpose of thwarting these scams, and spent far less time (if any) thinking about them.
“Everyday employees don’t come into the office to do information security or cybersecurity. Their day job is doing their day job,” John said. “It’s what they get remunerated on, and it’s what they get excited about. It’s old but I’d encourage all cybersecurity professionals to read Users Are Not The Enemy by Adams and Sasse.”
Being cybersecurity-aware requires people's brains to work differently, which isn’t instinctive
John highlighted how Daniel Kahneman’s book Thinking, Fast and Slow explains why phishing succeeds. People, he explained, use two modes of thinking. There’s System One thinking, which is fast, automatic, and intuitive, allowing us to make quick decisions. Then there’s System Two thinking, which is slow, deliberate, and effortful, used for analytical tasks.
“With processing and responding to email, people use System One thinking,” he said. “They have a large volume of messages to process every day, so email is a System One task: it’s very automatic now. Phishers use techniques such as urgency and triggering an emotional response, which encourage System One thinking, so people respond before their slow-thinking System Two brain would have spotted something phishy.”
“When you’re asking users to look for and flag scam emails, you’re asking them to engage System Two thinking, which means slowing down and forcing the conscious brain to evaluate each email—and that’s not what people do. Put simply, they’re not going to do it consistently.”
Companies need to stop AI scams reaching users
“Rather than investing more and more in user training, we as cybersecurity professionals need to up our game to prevent that material getting in front of our users in the first place,” John said.
“Users are at best a detective control, not a protective control. You shouldn’t rely on a human to be a good protective control unless they’ve got a badge and a radio, and they’re standing in front of your server room.”
“What we have to do is make sure we up our defenses, so your email security gateway, which everyone has, is also powered with the same degree of AI that the scammers are using.”
The basics of phishing remain the same for defenders
“At the end of the day, phishing is generally about four things: divulging credentials, clicking on links that download malware, opening malicious attachments, and scams.”
“We can defeat phishing for credentials by adopting phishing-resistant multi-factor authentication (MFA), and the one thing that AI scams should be teaching everyone is if they haven’t adopted this already, they should make this a priority. Phishing-resistant MFA won’t stop everything, but it will stop an awful lot.”
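Why is phishing-resistant MFA so effective against credential theft? The sketch below is a toy model (my illustration, not John’s, and not a real FIDO2/WebAuthn API) of the core idea: the authenticator signs the server’s challenge together with the origin the browser actually reports, so an assertion captured on a lookalike domain never verifies on the real site. Real WebAuthn uses asymmetric keys; HMAC stands in here to keep the example short.

```python
import hashlib
import hmac
import os

def authenticator_sign(secret: bytes, challenge: bytes, origin: str) -> bytes:
    """The device signs the challenge plus the origin the browser sees."""
    return hmac.new(secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(secret: bytes, challenge: bytes, expected_origin: str,
                  assertion: bytes) -> bool:
    """The relying party only accepts assertions bound to its own origin."""
    expected = hmac.new(secret, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

secret = os.urandom(32)      # established at registration time
challenge = os.urandom(16)   # fresh for each login attempt

# Legitimate login: the origin matches, so the assertion verifies.
good = authenticator_sign(secret, challenge, "https://bank.example")
assert server_verify(secret, challenge, "https://bank.example", good)

# Phishing proxy: the victim's browser reports the lookalike origin,
# so the relayed assertion fails verification at the real site.
phished = authenticator_sign(secret, challenge, "https://bank-login.example")
assert not server_verify(secret, challenge, "https://bank.example", phished)
```

This origin binding is why a user who is fooled by a pixel-perfect fake login page still cannot hand the attacker a working credential—which is exactly the property a one-time code typed into a form lacks.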
“In terms of downloading, people need to make sure they have very good proxies in place. Again, you can use an AI in the system to detect if a file is malware or not.”
“Thirdly, with email attachments, organizations need better protection at the edge. Attachments need to be screened at this level before they get into people’s inboxes, so invest in technical controls that keep humans as the last line of defense.”
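To make the edge-screening idea concrete, here is a minimal sketch of the kind of rule a gateway applies before an attachment reaches an inbox. The hook name and policy are assumptions for illustration; production gateways layer sandbox detonation and AV/AI scanning on top of simple checks like these.

```python
# Hypothetical per-attachment screening hook: given a filename and the
# file's first bytes, decide whether to block, quarantine, or allow it.

BLOCKED_EXTENSIONS = {".exe", ".js", ".scr", ".vbs", ".bat", ".iso", ".lnk"}

def screen_attachment(filename: str, head: bytes) -> str:
    """Return 'block', 'quarantine', or 'allow' for one attachment."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in BLOCKED_EXTENSIONS:
        return "block"
    # "MZ" magic bytes mark a Windows executable; one disguised with a
    # harmless-looking extension gets quarantined for inspection.
    if head.startswith(b"MZ") and ext != ".exe":
        return "quarantine"
    return "allow"

print(screen_attachment("invoice.exe", b"MZ\x90\x00"))  # block
print(screen_attachment("invoice.pdf", b"MZ\x90\x00"))  # quarantine
print(screen_attachment("invoice.pdf", b"%PDF-1.7"))    # allow
```

The point of the content-versus-extension check is that it doesn’t depend on the user noticing anything: the mismatch is caught mechanically, before System One thinking ever gets a chance to click.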
Training users still has its place for detecting scams
“Humans are still a line of defense, even if it’s the very last one,” John said. “You can still do awareness training on things that are almost always red flags, such as sending gift cards or making large, unusual transactions. But you should also protect yourself by implementing scam-resistant business processes, such as validating changes to a supplier’s bank account by telephone.”
With criminals adopting systems that produce ever more realistic fakes and targeting high-value financial transactions between C-level executives, John said a set of shared phrases could help combat fraud.
“In the situation that a CFO wants to speak to the CEO about a transaction like this, it might sound a bit Cold War, but the CFO might say a phrase like ‘It’s raining heavily this year’, and the CEO replies ‘Not in Lake Tahoe.’ With deep fakes, you use the code to make sure you’re speaking to the right person.”
John said the passcode should not be something that can easily be discovered via social media, such as the name of a family member, but a secret only shared between the two people involved.
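One limitation of a static phrase is that once it leaks, it is compromised forever. A hedged extension of John’s idea—my sketch, not his recommendation—is a rolling code derived TOTP-style from a shared secret: both executives can compute today’s phrase independently, and yesterday’s is useless to an impostor. All names and parameters here are illustrative.

```python
import hashlib
import hmac
import time

def rolling_code(shared_secret: bytes, when=None,
                 window_seconds: int = 86400, digits: int = 6) -> str:
    """Derive a short numeric code for the current time window."""
    now = time.time() if when is None else when
    counter = int(now // window_seconds)
    # HMAC the window counter, then truncate to a speakable numeric code.
    mac = hmac.new(shared_secret, counter.to_bytes(8, "big"),
                   hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**digits).zfill(digits)

# The secret should be exchanged in person, never posted or emailed.
secret = b"exchanged face to face at the offsite"
print("Today's verbal code:", rolling_code(secret))
# Calls within the same 24-hour window produce the same code;
# the next window produces a different one.
```

Like John’s passphrase, this only works if the secret itself stays off social media and out of email—it moves the problem from remembering a phrase to protecting a key, which is a trade-off each pair of executives would have to weigh.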
Conclusion: AI and well-architected defenses, not humans, will be pivotal in detecting AI scams
Every cybersecurity professional knows about “defense in depth”: layering multiple security controls to protect an organization. As scammers leverage AI, cybersecurity teams need to adopt it in proportion to defend their users, John said.
One way to do this is to become knowledgeable about these latest advancements. And again, make sure your organization is using phishing-resistant multi-factor authentication, proxy systems, and technical controls.
“Just as it’s always been, no one solution is going to stop everything, and you should not put all your eggs in just one or two baskets—especially if one of those baskets is user awareness training.”
For more insights from John Elliott, check out this video where he explains why you should pen test frequently, and drill down to the cause of vulnerabilities instead of just fixing them.
Additional reading
If you’re interested in reading how to defend against generative AI, or learn more about cybersecurity best practices, check out the following articles:
- How to defend your organization against GenAI-enabled threats
- The seven most dangerous new threats you need to know about
- Improving cybersecurity with LLMs: How OpenAI says to do it
- The future of cybersecurity teams? 4 digital staff for every human
- Organizations, don’t ban generative AI: Write a usage policy
Interested in growing your career, and finding out how much a Chief Information Security Officer (CISO) makes in 2024? You might enjoy reading this article, “How to land 2024's top in-demand tech leadership roles.” Networking is a key component, which is why we’ve also written this guide: “How to network in tech and land jobs (An introvert-friendly guide).”