How to defend your organization against GenAI-enabled threats
AI tools like ChatGPT can be used to revolutionize your business, but it can also be used by bad actors to attack your systems. Here's how to mitigate that risk.
Sep 11, 2023 • 5 Minute Read
Generative AI (GenAI) has revolutionary potential for businesses, epitomized by the overnight popularity of ChatGPT. Whether it’s writing content, providing customer support, automating tasks, or debugging and writing code, new applications are being discovered daily.
Unfortunately, all this potential comes with a sobering reality: it can also be harnessed for malicious ends. AI models like ChatGPT can be used for sophisticated phishing, malware, and impersonation schemes, and the model itself can potentially be attacked to expose your data.
In this new age of AI, every organization needs to be aware of these potential threats and take appropriate actions to mitigate their risk. Below is a list of ways you can minimize the threat of bad actors using GenAI tools like ChatGPT to damage your systems.
(For a more robust list of ways attackers can use generative AI to target an organization, read my article: “ChatGPT: Write me a virus.”)
1. Start with well-known cybersecurity standards teams know
You may have heard it a million times before, but it’s still just as true as it was before ChatGPT:
Don’t follow links or instructions in unexpected emails.
Double-check facts, especially ones that generate inflammatory responses.
Don’t ever give private information to any untrusted individual (or machine!).
2. Use generative AI strategically
One way to minimize misinformation and exposure is to only use things like ChatGPT for the really important things. You should also make sure what you’re using is actually a good strategic use of the tool: there’s things generative AI is good at, and things it is bad at. Stay abreast on this topic to ensure that your organization is using the right tool for the right task.
One way to find out what an AI is good at is to check its AI model card, which ideally provides this information: what it’s good at, what it’s not good at, and what it’s never been tested for.
3. Use Generative AI in your threat detection and response
GenAI can be used by attackers, but it can also be used by defenders — you! Mainstream cybersecurity experts now have access to two tools in their arsenal: Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
To learn more about these tools, read this article: “Pure magic: How to use GenAI in threat detection & response.”
4. Practice model provenance
Before you jump in with an AI model like ChatGPT, you should find out all the details of the model: this is known as model provenance. Model provenance is the documented history and origins of an AI model: the data used to train it, the architecture and parameters it uses, any dependencies it has, and other relevant details.
Remember that as more models arrive on the market, ChatGPT will be just one of many options. Be clear about which sources and models are trusted. Again, AI model cards are one way to find out this information.
For more information on vetting AI models, read this article: “Security reviews and AI models: How to decide what to greenlight.”
5. Create a culture of critical thinking
One of the most important soft skills in this new age of AI is critical thinking. If you’re a leader, this is something you can actually promote, not just hire for. Critical thinkers are less susceptible to the phishing, impersonation, and disinformation schemes that bad actors can use GenAI for.
For more information on promoting critical thinking in your organization, read this article: “How to improve your team's critical thinking skills.”
6. Install organizational guidelines for ethical AI use
It’s great that ChatGPT aims to be ethical. However, not all generative AI tools will, and not all ethics translate well. Make sure to establish your own set of ethics for using these tools and a method for reporting violations.
7. Put a generative AI usage policy in place
If you ban generative AI outright, this may lead to unregulated “shadow” usage of AI tools, which attackers may then exploit. To mitigate this risk, write a simple policy that focuses on the safeguarding of personal information and double-checking the information the AI provides.
To learn how to write a GenAI usage policy, read this article: “Organizations, don’t ban generative AI: Write a usage policy.”
8. Engage in vulnerability testing
It’s a sad fact of life, but all software has bugs: engaging in vulnerability testing helps mitigate the risk of these bugs. Test the AI tools you use for the types of prompt injection that it is vulnerable to.
9. Practice access control, authentication, and monitoring
If your organization is hosting its own generative AI tools, remember the security basics of knowing who did what, and when. This way, you can respond to unauthorized or suspicious behavior.
10. Remember, the best cybersecurity defense is a well-informed team!
Don’t assume that everyone understands the AI landscape. Educate your organization not only on the compelling power of generative AI, but also its risks.
Conclusion
Make sure your organization is taking proactive steps to not only take advantage of all the good things that generative AI can bring to your business, but also to defend against bad actors using it. One of the best things you can do is get (and stay) informed, reading articles just like this one.
Further resources
Want to learn more about artificial intelligence, including how and when to use it? Pluralsight offers a range of beginner, intermediate, and expert AI and ML courses, including dedicated courses on generative AI and tools like ChatGPT. You can sign up for a 10-day free trial with no commitments. Here are some you might want to check out:
If you have cloud infrastructure, you might also want to check out the A Cloud Guru course, “How to Secure Cloud Infrastructure with Generative AI.”