Skip to content

Contact sales

By filling out this form and clicking submit, you acknowledge our privacy policy.

How to create a flywheel for generative AI security operations

Get insights from Steve Schmidt, CSO of Amazon, on how to mitigate generative AI security risks and boost innovation with a flywheel and scoping matrix.

Jun 13, 2024 • 4 Minute Read

Please set an alt value for this image...
  • Software Development
  • Cloud
  • Tech Operations
  • AWS
  • AI & Data
  • Cybersecurity

During the AWS re:Inforce 2024 keynote, securing generative AI—while maintaining speed—was top of mind. 

Steve Schmidt, CSO of Amazon, shared insights on how to scope generative AI, create a security operations flywheel, and move quickly without compromising customer trust.

Table of contents

How to define security needs: Generative AI security scoping matrix

“Security fundamentals are evergreen,” said Steve. 

And those fundamentals, like identity and access management, vulnerability testing, and threat modeling, need to expand to encompass generative AI.

The security practices you should focus on will depend on the scope of your generative AI solution. AWS’s generative AI security scoping matrix identifies five types of generative AI use cases.

  1. Consumer app: Using a public, third-party generative AI solution like ChatGPT

  2. Enterprise app: Using a third-party enterprise application with embedded generative AI capabilities

  3. Pre-trained model: Building an application on an existing, pre-trained model from a third party

  4. Fine-tuned model: Fine-tuning an existing third-party model with your business’s data

  5. Self-trained model: Building and training a generative AI model with your business’s data

Once you’ve determined the scope of your generative AI solution, you can answer the following questions to secure it:

  • Where is my data?

  • What happened with my query and associated data?

  • Is the output of these models accurate enough?

  • What do I need to think about governance, compliance, legal and privacy concerns, controls, resilience, and risk management?

How to create a security-operations flywheel for generative AI

Regardless of your project’s scope, your security team possesses limited resources, and the rising need for AI expertise only adds to this challenge.

“Finding AI talent is hard. Finding security talent is hard. Finding talent that understands the intersection of those two is obviously even harder,” said Steve.

When you do find AI security experts, you may be tempted to involve these people in every project. “It’s logical, but it doesn’t scale,” he explained.

These security professionals essentially become gates and roadblocks, slowing business innovation. Eventually, software developers will find ways to route around the security teams. Steve explained how to overcome this challenge and accelerate generative AI innovation with a flywheel.

Build a core generative AI security team

First, build a team of dedicated AI security experts. These experts aren’t reviewers who slow progress. Instead, they enable rapid testing and innovation by giving developers and researchers tools to explore genAI safely.

They build AI security solutions other teams can use, establish guardrails and use cases, and connect people and resources throughout the company to accelerate the delivery process.

“They’re a gas pedal, not a gate,” said Steve.

Develop generative AI security standards

The AI security team should develop guidelines to handle confidential data, models, and prompt workflows. Standards set expectations from the start and ensure security isn’t an afterthought.

Create a threat modeling guide for genAI applications

A threat modeling guide will help developers as they build generative AI applications. They’ll understand how to work through risks for each app and mitigate them systematically.

Produce internal testing tools and share the results

Tests allow you to see how generative AI solutions deal with interesting prompts. As teams learn, put that knowledge in one place so everyone can benefit from other teams’ findings.

Conduct regular security reviews

Organizations re-execute AppSec reviews when there are significant code changes. But for  AI-enabled applications, you’re not dealing with stagnant code. As such, your organization needs a constant review and audit process with updated guardrails and AI-security-specific tasks.

“In AI, the model is the code,” said Steve. “Responses change over time as users interact with them. You’re never done with AppSec reviews.”

All together, these elements create a flywheel with momentum, allowing teams to deliver solutions quickly and securely. Throughout the cycle, teams continue to iterate and implement new findings.

4 areas to focus on for generative AI security

How do you put these principles into practice? Steve shared four tips.

Handle sensitive data when training models

When training AI models, keep sensitive data secure. Anonymize data and implement data perimeters. Know what you have, where it’s stored, who has what access and why, and how the data is actually used over time.

Apply trust boundaries to retrieval-augmented generation (RAG)

Establish trust boundaries to ensure generative AI applications carry the right user context and access only data users are authorized for.

Perform continued testing for AI models

We already covered continuous testing and security reviews, but it bears repeating. Regularly test models for potential injection issues, data leakage, and other security risks.

Implement security guardrails on AI system inputs and outputs

Create safeguards and guardrails on the inputs and outputs of AI solutions. This ensures the system avoids certain terms, topics, or responses. These should constantly evolve to meet the changing landscape.

Build generative AI and security skills

“Move quickly without compromising customer trust and security,” said Steve. That’s the end goal of generative AI security. 

Explore our AWS re:Inforce 2024 coverage:

As an AWS Partner, Pluralsight empowers teams to build AI and security skills. Learn more about our tech skill development platform.

Aaron Rosenmund

Aaron R.

Aaron M. Rosenmund is a cyber security operations subject matter expert, with a background in federal and business defensive and offensive cyber operations and system automation. Leveraging his administration and automation experience, Aaron actively contributes to multiple open and closed source security operation platform projects and continues to create tools and content to benefit the community. As an educator & cyber security researcher at Pluralsight, he is focused on advancing cyber security workforce and technologies for business and national enterprises alike. In support of the Air National Guard, he contributes those skills part time in various initiatives to defend the nation in cyberspace. Certifications: GIAC GCIA, GIAC GCED, CCNA Cyber Operations, Pentest+, CySa+, CASP www.AaronRosenmund.com @arosenmund "ironcat"

More about this author