Skip to content

Contact sales

By filling out this form and clicking submit, you acknowledge our privacy policy.

The impact of AI: Cybersecurity challenges and opportunities

We provide an overview of how AI impacts security, including the cybersecurity challenges it poses and how orgs can use it to boost their security stance.

Nov 10, 2023 • 5 Minute Read

Please set an alt value for this image...
  • AI & Data
  • Business & Leadership
  • Cybersecurity
  • Public Sector

AI creates new cybersecurity threats at the same time it provides new ways to strengthen your organization’s defenses. We talked to experts about the impact of AI on cybersecurity and how organizations can use it to their advantage—while protecting their data.

Table of contents

The growing role of artificial intelligence in security

Historically, the cybersecurity skills gap has prevented threat actors from leveraging AI to its fullest. But the accessibility of generative AI tools is changing this.

“Thankfully, for the majority of the last 20 years, it was rare that the collection of skills required to leverage AI technologies and the malicious technical prowess of attacks existed in the same space,” explains Aaron Rosenmund, Pluralsight’s Senior Director of Security & GenAI Skills. “[But] in the last two years now, the advancements and availability of generative AI have made these assistive tools available to all.”

This means more people can use AI tools to create sophisticated attacks. But as Aaron notes, tools like AI Copilot and Security Copilot benefit both sides. “Much of the requirements for malware development are simply good coding practices, and the copilot capabilities benefit both sides alike. Where the capabilities progress in the coming years in automated defenses, I am excited to see.”

Advantages of AI in security

Let’s start with the advantages of using AI in security. Security professionals are often caught in a never-ending game of catch-up against varied threats and bad actors. AI tools help them address modern cybersecurity challenges faster.

Automating threat detection and response

As Pluralsight Author and security advisor John Elliott notes, “We're going to be using AI to defend ourselves better.”

Part of that involves threat detection and incident response. AI algorithms can analyze data in real time to identify patterns that indicate potential threats. Learning from historical data, they can even reduce false positives, flag anomalies for closer inspection, and identify zero-day attacks that were previously undetected.

AI can also help automate parts of incident response. “I’ve seen AI play a lot with automation and just looking for different threats and identifying malicious behavior or anomalies happening within the security realm of things, especially as it pertains to cloud workloads,” said cybersecurity professional and Pluralsight Author Chris Jackson.

Learn how to defend your organization against GenAI-enabled threats and use GenAI in threat detection & response.

Using AI for comprehensive risk assessment

AI tools can quickly comb through large amounts of data to analyze potential vulnerabilities and determine how dangerous they are to the business. This empowers security professionals to prioritize the most immediate threats.

If a risk assessment flags certain vulnerabilities, organizations can take steps to secure their organization and develop a more proactive security stance. AI can help with this, too. Dr. Lyron H. Andrews, security expert and Pluralsight Author, shares an example: “Specifically in my area, I see training models that are related to protecting attack surfaces and exposing vulnerabilities in those learning models.”

Ensuring enhanced compliance with AI

From privacy concerns to company policies and executive orders from the federal government, staying compliant with AI guidance can be a challenge. But you can also use AI to assist with security compliance, AI related or not.

For one thing, you can use it to streamline compliance tasks, minimize errors, and reduce potential fines. You can also use it to stay current with the latest regulations. AI models can constantly analyze regulatory databases and reliable news outlets for updates. This allows your organization to implement those changes in your compliance systems as soon as possible.

Improving cost management

While AI solutions can be hefty upfront investments, they can pay dividends down the line. According to the 2023 IBM Cost of a Data Breach report, the average cost of a data breach is $4.45 million. The good news? Organizations using security AI incurred fewer data breach costs compared to businesses not using AI-based cybersecurity tools.

Other uses for AI security solutions

AI security tools and cybersecurity solutions can also help with:

  • Threat intelligence

  • Vulnerability detection

  • Data maintenance

  • Visibility and observability 

Negative impact of AI in security

While AI can have a positive impact on security, it’s created new cybersecurity challenges as well.

AI and cybersecurity skills gaps persist

Only 17% of technologists are completely confident in their cybersecurity skills. And even less (12%) are completely confident in their AI/ML skills. This knowledge gap, on top of existing mental health concerns, makes it hard for security professionals to defend against AI-powered threats and leverage AI as a security tool. 

And even with the AI advantages, security professionals still need upskilling to understand how AI interacts with other tools and technologies. “For information security professionals, there's going to be a wealth of new learning in how security and privacy in AI work together,” says John.

Learn more about the IT skills gap.

Deepfakes enhance social engineering

Threat actors often use social norms and pressure to trick people into giving away information. And AI is helping them. “Deepfakes, voice emulation, and the like are really enabling social engineering,” says Aaron.

“This initial access then allows them to gain network access to environments. From here, they still have to enumerate the environment, laterally move to other parts of the network, escalate privileges, download tools or ransomware, communicate over C2 protocols, and then exfiltrate data or encrypt devices. At each of those points, they should be caught.”

Threat actors use AI to launch cyber attacks

“We have attackers, and that can be anything from script kiddies to nation states, using AI to perfect and improve their attacks,” says John.

With generative AI, people without extensive coding experience can now write malware and bot attacks and launch larger, more sophisticated attacks. This has increased the number of threats and the number of attackers.

Threat actors are also taking advantage of the people and organizations using AI tools by attacking the data and models behind them. They’re mounting attacks using things like:

  • Prompt injections

  • Open-source code injection

  • Data poisoning

  • Model evasion

“I see more use of AI for the offensive sort of thing,” says Chris. “So hacking, for example, just helping develop different scripts, creating tools a lot faster, even providing a different perspective because, you know, a lot of hacking, a lot of offensive security depends on thinking outside the box, thinking of ways to manipulate systems and make them act in certain ways that they shouldn't be acting in.”

The future of AI in security

AI is already making waves in cybersecurity, and it will only accelerate the threat landscape as time goes on. Train your teams in AI to protect against these threats—and use it to boost your organization’s defenses.

Here are some courses to get started:

Learn more about our AI resources and cybersecurity solution.

Pluralsight Content Team

Pluralsight C.

More about this author