How to create a flywheel for generative AI security operations
Get insights from Steve Schmidt, CSO of Amazon, on how to mitigate generative AI security risks and boost innovation with a flywheel and scoping matrix.
Jun 13, 2024 • 4 Minute Read
During the AWS re:Inforce 2024 keynote, securing generative AI—while maintaining speed—was top of mind.
Steve Schmidt, CSO of Amazon, shared insights on how to scope generative AI, create a security operations flywheel, and move quickly without compromising customer trust.
Table of contents
How to define security needs: Generative AI security scoping matrix
“Security fundamentals are evergreen,” said Steve.
And those fundamentals, like identity and access management, vulnerability testing, and threat modeling, need to expand to encompass generative AI.
The security practices you should focus on will depend on the scope of your generative AI solution. AWS’s generative AI security scoping matrix identifies five types of generative AI use cases.
Consumer app: Using a public, third-party generative AI solution like ChatGPT
Enterprise app: Using a third-party enterprise application with embedded generative AI capabilities
Pre-trained model: Building an application on an existing, pre-trained model from a third party
Fine-tuned model: Fine-tuning an existing third-party model with your business’s data
Self-trained model: Building and training a generative AI model with your business’s data
Once you’ve determined the scope of your generative AI solution, you can answer the following questions to secure it:
Where is my data?
What happened with my query and associated data?
Is the output of these models accurate enough?
What do I need to think about governance, compliance, legal and privacy concerns, controls, resilience, and risk management?
How to create a security-operations flywheel for generative AI
Regardless of your project’s scope, your security team possesses limited resources, and the rising need for AI expertise only adds to this challenge.
“Finding AI talent is hard. Finding security talent is hard. Finding talent that understands the intersection of those two is obviously even harder,” said Steve.
When you do find AI security experts, you may be tempted to involve these people in every project. “It’s logical, but it doesn’t scale,” he explained.
These security professionals essentially become gates and roadblocks, slowing business innovation. Eventually, software developers will find ways to route around the security teams. Steve explained how to overcome this challenge and accelerate generative AI innovation with a flywheel.
Build a core generative AI security team
First, build a team of dedicated AI security experts. These experts aren’t reviewers who slow progress. Instead, they enable rapid testing and innovation by giving developers and researchers tools to explore genAI safely.
They build AI security solutions other teams can use, establish guardrails and use cases, and connect people and resources throughout the company to accelerate the delivery process.
“They’re a gas pedal, not a gate,” said Steve.
Develop generative AI security standards
The AI security team should develop guidelines to handle confidential data, models, and prompt workflows. Standards set expectations from the start and ensure security isn’t an afterthought.
Create a threat modeling guide for genAI applications
A threat modeling guide will help developers as they build generative AI applications. They’ll understand how to work through risks for each app and mitigate them systematically.
Produce internal testing tools and share the results
Tests allow you to see how generative AI solutions deal with interesting prompts. As teams learn, put that knowledge in one place so everyone can benefit from other teams’ findings.
Conduct regular security reviews
Organizations re-execute AppSec reviews when there are significant code changes. But for AI-enabled applications, you’re not dealing with stagnant code. As such, your organization needs a constant review and audit process with updated guardrails and AI-security-specific tasks.
“In AI, the model is the code,” said Steve. “Responses change over time as users interact with them. You’re never done with AppSec reviews.”
All together, these elements create a flywheel with momentum, allowing teams to deliver solutions quickly and securely. Throughout the cycle, teams continue to iterate and implement new findings.
4 areas to focus on for generative AI security
How do you put these principles into practice? Steve shared four tips.
Handle sensitive data when training models
When training AI models, keep sensitive data secure. Anonymize data and implement data perimeters. Know what you have, where it’s stored, who has what access and why, and how the data is actually used over time.
Apply trust boundaries to retrieval-augmented generation (RAG)
Establish trust boundaries to ensure generative AI applications carry the right user context and access only data users are authorized for.
Perform continued testing for AI models
We already covered continuous testing and security reviews, but it bears repeating. Regularly test models for potential injection issues, data leakage, and other security risks.
Implement security guardrails on AI system inputs and outputs
Create safeguards and guardrails on the inputs and outputs of AI solutions. This ensures the system avoids certain terms, topics, or responses. These should constantly evolve to meet the changing landscape.
Build generative AI and security skills
“Move quickly without compromising customer trust and security,” said Steve. That’s the end goal of generative AI security.
Explore our AWS re:Inforce 2024 coverage:
AWS re:Inforce: Take these generative AI and cloud security measures
How AWS and Intuit build a culture of security in their organizations
Security best practices for accelerating generative AI innovation
As an AWS Partner, Pluralsight empowers teams to build AI and security skills. Learn more about our tech skill development platform.