
The seven most dangerous new threats you need to know about

Whether you work in cybersecurity or not, you should know about these new dangers to your organization, especially if you're in executive management.

May 15, 2024 • 10 Minute Read

  • Engineering Leadership
  • Security
  • Business

Every year, industry experts gather at RSA Conference, one of the world’s largest cybersecurity conferences. Here are seven of the largest new attacks and threats that were mentioned at the conference — and what you should do to mitigate them.

The seven most dangerous new threats

1. Attackers using AI to discover weaknesses in your code

Recent research found that if you give GPT-4 a list of real-world vulnerabilities, it can exploit 87% of them without human intervention.

That’s a terrifying statistic. Why? It means someone who wants to hack the applications you use can simply feed an AI a publicly disclosed list of vulnerabilities, like those cataloged in the CVE database, and then set it loose on your company.

Now, human hackers have been able to do this forever: find gaps in your defenses and hunt for unpatched common exploits. But the speed and ease with which an AI can do it is what changes the game.

The result is that your unpatched applications are suddenly very vulnerable. This makes zero-day attacks (where a vulnerability is known, but a patch isn’t out yet) and one-day attacks (where a patch is out, but hasn’t been applied yet) a big, big threat.

What you should do

  • Fight fire with fire, and leverage AI for defense. Human security teams can’t patch vulnerabilities as fast as malicious AI can detect them, so you need to make sure you’re equally equipped.
  • Get your security teams well educated in AI
  • Automate your purple teaming, creating a loop where you’re simulating malicious attacks and implementing remediation strategies.
  • AI is also cheaper: a chatbot can find exploits at about $8.80 per vulnerability, whereas a human pen tester costs about $25 per vulnerability, assuming each one takes them half an hour. And large language models (LLMs) scale in a way humans can’t.
  • Obviously, make sure your applications are patched. A minimal sketch of automating that check follows this list.
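If you want to act on that last bullet programmatically, public vulnerability databases expose APIs you can script against. Below is a minimal sketch, assuming Python with the requests library and the public OSV (Open Source Vulnerabilities) API; the pinned package versions are purely illustrative:

```python
# A minimal sketch: check pinned dependencies against the public OSV
# (Open Source Vulnerabilities) database. Versions below are illustrative.
import requests

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return OSV advisories affecting a specific package version."""
    response = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": package, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("vulns", [])

pinned = {"requests": "2.19.0", "flask": "0.12.2"}  # e.g. from a lock file
for name, ver in pinned.items():
    vulns = known_vulnerabilities(name, ver)
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"{name}=={ver} has known vulnerabilities: {ids}")
```

Running a check like this in CI keeps one-day vulnerabilities from sitting unnoticed in your dependency tree.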

2. Having your company’s GenAI exploited and weaponized

“We’ve got to have GenAI in our company,” a C-suite member excitedly exclaimed. And so, now you’ve got a chatbot or some other AI bolted into your product. 

You’ve also now got a big door attackers can use to cause damage to your business. But that’s okay — you’ve innovated, right? 

</sarcasm>

… Except it’s the truth. IBM found in a survey that 70% of C-suite respondents believe that, when it comes to AI, innovation takes precedence over security. And while 82% said “secure and trustworthy AI is essential to the success of their business,” only 24% of them said they’re actually securing their GenAI products.

That’s pretty bleak. An insecure LLM can present the following risks:

  • Prompt injection: This one’s a biggie. An attacker tricks your nifty AI into revealing sensitive information or performing unauthorized actions.
  • Insecure output handling: If your application passes LLM output to other systems without validating it, attackers can use it to steal your data or manipulate those systems (see the sketch after this list).
  • Training data poisoning: An attacker ‘poisons’ your shiny AI in the training phase, so its outputs can be something decidedly not business friendly, like a backdoor, or your company-branded LLM suddenly making racist comments.
  • Supply chain vulnerabilities: Attackers go after the third-party plugins used by your LLM, which can allow for unauthorized access to private data.
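To make the output-handling risk concrete, here’s a minimal sketch of treating LLM output as untrusted input, the same way you’d treat anything a user types. The function and tool names are hypothetical placeholders, not any particular framework’s API:

```python
# A minimal sketch: treat LLM output as untrusted, just like user input.
# Function and tool names below are hypothetical placeholders.
import html

ALLOWED_TOOLS = {"search_docs", "get_order_status"}  # explicit allowlist

def render_llm_text(llm_output: str) -> str:
    """Escape model output before putting it in a web page, so any
    injected markup can't turn into stored XSS."""
    return html.escape(llm_output)

def dispatch_llm_tool_call(tool_name: str, arguments: dict) -> None:
    """Only execute tool calls the model is explicitly allowed to make,
    applying least privilege to the model's actions."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"LLM requested disallowed tool: {tool_name}")
    # ...dispatch to the real tool with validated arguments...
```

The same principle applies to SQL, shell commands, and URLs built from model output: validate or parameterize everything before it touches another system.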

There are also indications that in the future, attackers will co-opt your friendly neighborhood chatbot, along with hundreds of its pals, to perform coordinated attacks. This makes it especially important that your newly branded AI is not moonlighting as a criminal.

What you should do

  • Adopt established frameworks for your LLM, such as Google’s Secure AI Framework (SAIF) and NIST’s AI Risk Management Framework
  • Conduct thorough threat modeling and data validation, and enforce the principle of least privilege
  • Integrate GenAI security controls into your existing application security programs, so it’s not in an isolated security silo
  • Stay informed on evolving threats and adapt accordingly

3. Attackers using GenAI for sophisticated spear phishing

In an age where you can clone someone’s voice with just a three-second sample of them talking — like, say, a voice mail — and generate entire videos with just a text prompt, verifying someone’s identity is going to get very tough, very fast. 

It’s even worse when you can feed someone’s social media activity into a chatbot and train it to mimic that person. Critical thinking is the best defense against AI scams, but only a small portion of people have honed this skill.

If you’re hoping companies will watermark or label AI-generated content, don’t bank on it. Even if they do, those telltale signs can be easily stripped out, and it’s not going to be enforceable.

This is going to be a big issue for establishing someone’s identity remotely, especially for employees working remotely. In a world that’s moving increasingly to remote work, with large organizations that need a distributed workforce, identity verification is a challenging problem.

What you should do

  • Think about how you’re going to establish someone’s identity (both customers’ and employees’), and how you’ll re-establish that identity later
  • Use AI to identify and flag unusual behavior for review (a minimal sketch follows this list)

  • Help your teams hone their critical thinking skills (here’s a guide), but don’t bank on it: humans are always the last and weakest line of defense in the security stack
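For the second bullet, here’s a minimal sketch of anomaly flagging on sign-in events, assuming scikit-learn; the features and data are illustrative, and a production system would train on real sign-in telemetry:

```python
# A minimal sketch of flagging unusual sign-in behavior with an
# Isolation Forest. Features and training data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical sign-ins: [hour_of_day, day_of_week, known_device (1/0)]
history = np.array([
    [9, 1, 1], [10, 2, 1], [9, 3, 1], [11, 4, 1], [10, 5, 1],
    [9, 1, 1], [10, 2, 1], [8, 3, 1], [9, 4, 1], [17, 5, 1],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(history)

# A 3 a.m. weekend sign-in from an unknown device should stand out.
new_events = np.array([[3, 6, 0], [9, 2, 1]])
for event, label in zip(new_events, model.predict(new_events)):
    status = "FLAG for human review" if label == -1 else "ok"
    print(event, status)
```

The point isn’t the specific model: it’s routing anomalies to a human for review instead of trusting that busy employees will spot a well-crafted AI impersonation on their own.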

4. Sextortion of your employees and C-level using GenAI

This is one of those “super uncomfortable but highly important to discuss” topics. Sextortion isn’t new — ever get an email saying “I know what you have on your hard drive?” — but with advances in Generative AI, sextortion now poses a very real threat to you and your organization. Everyone, right up to the C-level, should be aware of this threat, because anyone can be a target.

Sextortion is when you’re contacted by a malicious actor who threatens to release sensitive photos or videos of you if you don’t cough up a massive sum of money. If you don’t have it — which is likely — you’re offered an “alternate” form of payment such as giving access to a network, installing malware, or some other way of compromising a system.

Now, the photos might be fake, but the damage is not. The fear that your coworkers, family, and friends may never see you the same way, even if you tell them the images were AI generated, is very real.

Take a moment to imagine all the images someone could artificially generate of you, then share on your social media or send directly to your company. In a company of hundreds or thousands of people, how many do you think would hold their ground, and how many would give the extortionists what they want?

The good news is, there are steps you and your company can take to mitigate this. If you take away one thing from this article, it’s this: with generative AI, everyone is at risk from sextortion (and this means you).

What you should do

  • Get educated. Executive employees are the most at risk, and you should have an executive protection program in place.
  • Educate your whole company on what sextortion is, and what an attack might look like. It’s uncomfortable, but these attacks thrive in an environment of ignorance.
  • Don’t stop at educating employees; educate their families, too. Many professionals work from home, so their family members can be used to compromise your corporate systems. One resource is “No Escape Room”, a sextortion simulation created by the National Center for Missing and Exploited Children (NCMEC) to educate parents and minors.
  • Create a safe reporting procedure where employees can report sextortion attempts without feeling ashamed, worrying they could lose their jobs, or fearing that others will find out about their personal lives.
  • Practice basic cyber hygiene, such as not clicking on suspicious links, having MFA in place, and having appropriate security detection and prevention tools in place.

5. Your company’s Multi-factor Authentication (MFA) being intercepted

People are tired of push notifications from MFA, so they’ll approve nearly anything that pops up. That makes them vulnerable to a form of attack called attacker-in-the-middle, where an attacker gets you to log into a fake site. Once you log in, they pass your credentials to the real site, and when you enter the MFA code, they capture it and log into your account.

This form of attack doesn’t require much skill: kits are freely available for anyone to perform this sort of intercept. And since the fake website runs on a server somewhere, not your local machine, antivirus and similar products are unlikely to detect it.

MFA is still better than nothing, but it’s not a silver bullet. If you’re sitting there thinking, “our company is secure, we’ve got MFA,” you should rethink this strategy.

What you should do

  • Rather than just confirming a login attempt, require users to type in a code from the login screen to authenticate, because the threat actor wouldn’t know it (this is called number matching; see the sketch after this list)
  • Add additional context to your push notifications, such as the user’s sign-in location
  • Tighten your authentication rules when a login happens at an unusual time, such as midnight in the user’s time zone
  • Move to a passwordless security system, such as using passkeys
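The first bullet describes number matching. Here’s a minimal sketch of the idea with placeholder functions rather than any real identity provider’s API; it defeats the blind “tap Approve” habit because completing the login requires information only the genuine login screen displays:

```python
# A minimal sketch of number matching for push-based MFA.
# Storage and delivery are placeholders for your identity provider's flow.
import secrets

def start_login_challenge() -> str:
    """Generate a short code displayed ONLY on the login screen."""
    return f"{secrets.randbelow(100):02d}"  # e.g. "42"

def verify_challenge(displayed_code: str, code_typed_in_app: str) -> bool:
    """The user must type the on-screen code into the authenticator app.
    An attacker who merely triggers a push never sees this code, so a
    victim can't approve the login by reflexively tapping a button."""
    return secrets.compare_digest(displayed_code, code_typed_in_app)

challenge = start_login_challenge()
print(f"Login screen shows: {challenge}")
assert verify_challenge(challenge, challenge)  # legitimate user succeeds

wrong_guess = "99" if challenge != "99" else "98"
assert not verify_challenge(challenge, wrong_guess)  # blind approval fails
```

Pair this with the other bullets: number matching raises the bar, but a full attacker-in-the-middle proxy can still relay what the victim sees, which is why passkeys (which bind the credential to the real site’s origin) are the stronger long-term fix.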

6. Hidden tech debt leaving your applications vulnerable

Tech debt is a threat not only to your applications but to your entire security stack. It can be used to attack both your organization and your customers, causing catastrophic damage.

While tech debt isn’t new, it’s becoming a bigger problem as applications are built on top of old code that never gets rewritten, or as teams skip incremental updates for the products they’re using. The result: you might have a product that says it was last updated in 2024, but beneath the surface lies a web of legacy code that’s two decades old.

This code is often written in an old-school language none of your current developers are familiar with, such as Perl, and it’s likely not documented at all. That makes it a ticking time bomb for malicious actors to exploit.

What you should do

  • Dedicate time to review and rewrite your old code
  • Incrementally upgrade your components
  • Leverage AI to help you
  • Train new developers to read and port legacy code
  • Document your existing code and any new changes

7. Not having enough trained (and alert) cybersecurity staff to defend you

You’re probably sick of hearing there’s a shortage of cybersecurity professionals — 71% of organizations have unfilled cybersecurity positions, according to the World Economic Forum. However, there’s a consequence to this: the guards protecting your company are understaffed and burned out.

Now, like most of the items on this list, this isn’t new. And if you’re sensing a theme, this existing problem in cybersecurity has been made worse by AI. 

Why? Defenders have to be more alert than ever, given the AI tools threat actors now have. Yet very few tech practitioners (roughly 12%) actually have significant experience working with AI themselves.

It’s not exactly great when your castle defenders are using bows and the army outside is using rocket launchers.

What you should do

  • Upskill your cybersecurity team in using AI for defense
  • Also let them know how they can use AI to reduce the burden of their jobs: inbound message filtering, summarizing incident reports, process automation, and triaging bug bounty submissions (a minimal sketch follows this list)
  • Get informed (and stay that way) on how threat actors are using AI
  • Source new staff from within your organization who may not be in tech but possess the relevant soft skills (e.g., continuous learning), then upskill them
  • Make the working environment awesome for your cybersecurity team: promote a culture that prioritizes their well-being, recognizes achievements, and provides opportunities for professional growth
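For the second bullet, here’s a minimal sketch of offloading incident-report summaries to an LLM, assuming the OpenAI Python SDK with an API key in the environment; the model name and prompt are illustrative, and your data-handling policy should approve any external API before real incident data goes to it:

```python
# A minimal sketch of LLM-assisted incident report summarization,
# assuming the OpenAI Python SDK (model name and prompt are illustrative).
# Check your data-handling policy before sending incident data externally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_incident(report_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You are a SOC assistant. Summarize this incident report in "
                "five bullets: impact, affected systems, suspected root "
                "cause, remediation status, and open follow-up actions."
            )},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

print(summarize_incident("2024-05-02 03:14 UTC: EDR flagged unusual ..."))
```

Small automations like this don’t replace analysts; they buy back hours your understaffed team can spend on the work that actually needs human judgment.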

Conclusion: Start risk mitigation now, before it’s a problem

With all of these major threats, there are tangible steps you can take to minimize the impact to yourself, your organization, and (in the case of number four) your immediate family. Write them down and either make sure you’re taking them, or ask your organization whether it’s taking these steps.

Start building in-demand AI and security skills

Adam Ipsen


Adam is a Lead Content Strategist at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer, Adam has also designed software for controlling airfield lighting at major airports. He has a keen interest in AI and cybersecurity, and is passionate about making technical content and subjects accessible to everyone. In his spare time, Adam enjoys writing science fiction that explores future tech advancements.
