

Security best practices for accelerating generative AI innovation

Learn how security leaders and data scientists from AWS and Bloomberg fast-track generative AI innovation and mitigate risk with security best practices.

Jun 12, 2024 • 5 Minute Read


Innovation and security are two integral parts of business that often feel at odds with each other. But they don’t have to be. 

At AWS re:Inforce 2024, Hart Rossman, VP of Security for AWS Sales, Marketing, and Global Services, explained how organizations can balance generative AI adoption and innovation with security and privacy.

Emily Webber, Principal ML Solutions Architect for AWS Annapurna Labs, and Paul Vachon, Head of Security Architecture for Bloomberg's CTO Office, also took the stage to share generative AI use cases and insights.


Security best practices in the age of generative AI

“Generative AI is not a new technology space,” said Hart. “But the way in which the public was able to interact with it changed.” 

It’s this shift that’s created excitement around generative AI technology. As with any emerging tech, though, it raises the question: Can you move fast and stay secure, or do you have to make a trade-off?

According to Hart, you can do both—if you use security best practices.

Understand your use cases for generative AI

First, understand why you want to use generative AI. “What’s interesting about this new technology space?” asked Hart. “What’s innovative? What’s driving your passion to use it?”

Answering these questions will help you understand how to use AI/ML to bring value to your business. Make sure security leaders, builders, and auditors understand these use cases and can identify the existing mechanisms, like threat modeling and AppSec reviews, needed to build and secure applications with generative AI capabilities.

Use a policy of least privilege for identity and access management (IAM)

To secure generative AI from end to end, apply least-privilege access to training data, models, and applications.

“Get the fundamentals of identity access management control correct across the workload,” advised Hart.
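
To make that concrete, here's a minimal sketch of least privilege for a generative AI workload: an IAM inline policy that lets a hypothetical application role invoke exactly one Amazon Bedrock model and nothing else. The role name, policy name, and model ID are placeholder assumptions, not anything from the talk.

```python
import json
import boto3

# Hypothetical example: scope an application role to invoking one Bedrock
# model and nothing else. Role, policy, and model names are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "InvokeOneModelOnly",
        "Effect": "Allow",
        "Action": [
            "bedrock:InvokeModel",
            "bedrock:InvokeModelWithResponseStream",
        ],
        # Bedrock foundation-model ARNs have no account ID segment.
        "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="genai-app-role",             # placeholder application role
    PolicyName="least-privilege-bedrock",
    PolicyDocument=json.dumps(policy),
)
```

The same idea extends across the workload: scope each role to the specific training datasets, keys, and models it actually needs.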

Collaborate with the data owner

“It’s really important to collaborate with the data owner or data custodian,” said Hart. 

Before you build, speak with the data owner and work together to validate your use case and ensure it’s deployed appropriately.

Focus on end-to-end security operations

As Hart mentioned, the goal of secure innovation is repeatable, scalable success. And that involves incorporating generative AI in end-to-end security operations. Empower employees to plan for and operate security architecture around their generative AI workload.

“Take the whole architecture into account to ensure you’re putting the proper controls in place, that you’re auditing the full infrastructure,” he said.
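
One way to put that advice into practice (a sketch, not a prescription from the session) is to turn on Amazon Bedrock's model invocation logging, so every prompt and completion lands in an audit bucket you control. The bucket name and prefix below are placeholders.

```python
import boto3

# Sketch: enable Bedrock model invocation logging for auditability.
bedrock = boto3.client("bedrock")
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "my-genai-audit-logs",  # placeholder bucket
            "keyPrefix": "bedrock/",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)

# Confirm the configuration took effect.
print(bedrock.get_model_invocation_logging_configuration()["loggingConfig"])
```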

Mitigating security risks when training AI models: Lessons learned from Bloomberg

When Bloomberg trained their research model, they were conscious of the security risks they needed to address throughout the process. Drawing on that experience, Paul outlined the risks associated with training AI models and gave advice on how to address them.

Prevent data loss

You don’t want to lose or abuse valuable information like training datasets and model weights. You also don’t want to use data you aren’t licensed to use. Know what information you’re using and keep it secure.
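
As a minimal sketch of what that can look like when the training data lives in Amazon S3 (an assumption on our part), the snippet below enforces KMS encryption at rest and blocks all public access on the dataset bucket. The bucket name and key alias are placeholders.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-training-data"  # placeholder bucket

# Encrypt all objects at rest with a customer-managed KMS key by default.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/training-data-key",  # placeholder key
            }
        }]
    },
)

# Block every form of public access to the dataset.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```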

Set guardrails to prevent abuse of resources

Be cognizant of the resources models require. Set guardrails and GPU reservations to prevent compute misuse, runaway experiments, and unexpected bills.
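
Here's one hedged example of such a guardrail: launching a SageMaker training job with a hard runtime ceiling, so a runaway experiment stops itself. The job name, image URI, role ARN, and bucket are placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")
sagemaker.create_training_job(
    TrainingJobName="llm-experiment-042",           # placeholder name
    AlgorithmSpecification={
        "TrainingImage": "<your-training-image-uri>",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::123456789012:role/genai-training-role",
    OutputDataConfig={"S3OutputPath": "s3://my-training-artifacts/"},
    ResourceConfig={
        "InstanceType": "ml.p4d.24xlarge",          # reserve only what you need
        "InstanceCount": 1,
        "VolumeSizeInGB": 200,
    },
    # Hard ceiling: SageMaker stops the job after 8 hours no matter what.
    StoppingCondition={"MaxRuntimeInSeconds": 8 * 60 * 60},
)
```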

Lean on domain expertise

AI/ML engineers and researchers aren’t security or infrastructure experts. Try to codify the knowledge of your security architects and infrastructure experts into a building block everyone can use. That way, AI/ML researchers don’t have to worry about these details, and everyone can focus on what they’re best at.
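
Building on the sketch above, that "building block" might be as simple as a helper function that bakes the org-approved defaults (encryption, network isolation, runtime caps) into every training job. This is illustrative, not a real internal library; the role ARN, bucket, and key alias stand in for whatever your security team has vetted.

```python
import boto3

def launch_training_job(name: str, image_uri: str, instance_type: str) -> None:
    """Start a training job with org-approved guardrails baked in."""
    boto3.client("sagemaker").create_training_job(
        TrainingJobName=name,
        AlgorithmSpecification={
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        RoleArn="arn:aws:iam::123456789012:role/genai-training-role",  # vetted role
        OutputDataConfig={
            "S3OutputPath": "s3://my-training-artifacts/",
            "KmsKeyId": "alias/training-data-key",    # encryption on by default
        },
        ResourceConfig={
            "InstanceType": instance_type,
            "InstanceCount": 1,
            "VolumeSizeInGB": 200,
        },
        EnableNetworkIsolation=True,                  # container makes no outbound calls
        StoppingCondition={"MaxRuntimeInSeconds": 8 * 60 * 60},  # runtime cap
    )
```

Researchers call `launch_training_job(...)` and get the secure configuration for free, instead of re-deriving it for every experiment.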

Reduce AI model misuse

Who’s using the models? How are they using the models? Are they using the model for the wrong purpose? Define and enforce model use cases, implement IAM practices, and standardize API endpoints.
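
A sketch of what "define and enforce model use cases" could look like in code: a shared entry point that only serves registered use cases, each pinned to its approved model. All identifiers here are hypothetical.

```python
# Each registered use case is pinned to the one model it's approved for.
APPROVED_USE_CASES = {
    "news-summarization": "anthropic.claude-3-haiku-20240307-v1:0",
    "ticket-triage": "amazon.titan-text-express-v1",
}

def call_model(model_id: str, prompt: str) -> str:
    """Stand-in for the real transport to the standardized endpoint."""
    return f"[{model_id}] response to: {prompt}"

def invoke(use_case: str, model_id: str, prompt: str) -> str:
    approved_model = APPROVED_USE_CASES.get(use_case)
    if approved_model is None:
        raise PermissionError(f"unregistered use case: {use_case!r}")
    if model_id != approved_model:
        raise PermissionError(f"{use_case!r} is not approved for {model_id}")
    # A real system would also log the caller, use case, and model here.
    return call_model(model_id, prompt)

print(invoke("ticket-triage", "amazon.titan-text-express-v1", "Summarize this ticket."))
```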

“If you’re seeing failure, it means you’re up to something interesting,” said Paul. “But it also means you need to iterate rapidly to recover from those failures. . . . Make sure everything you do is repeatable and you don’t keep learning the same lessons over and over again.”

How to secure foundation models using a data science approach

When it comes to securing foundation models (FMs), Emily shared three steps to follow from a data science perspective.

First, explore the model’s origins. “Where is the model coming from? What are its origins?” she asked. “How was the model aligned with customer preferences? Can that model operate in a low-bias and highly responsible manner? Is it enjoyable to work with?”

Look at the foundation model's:

  • Training data

  • Source code

  • Alignment

  • Responsibility

  • Overall quality

  • Service security
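
One lightweight way to make that provenance review repeatable (a sketch with hypothetical field names, not a prescribed format) is to record the answers as structured data that travels with the model:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: capture the provenance review for each candidate
# foundation model as data, so the answers are auditable and comparable.
@dataclass
class ModelProvenanceReview:
    model_id: str
    training_data_reviewed: bool
    source_code_available: bool
    alignment_method: str               # e.g. "RLHF", "DPO", "unknown"
    responsibility_notes: str
    quality_benchmarks: dict = field(default_factory=dict)
    service_security_attestations: list = field(default_factory=list)

review = ModelProvenanceReview(
    model_id="example-fm-v1",
    training_data_reviewed=True,
    source_code_available=False,
    alignment_method="RLHF",
    responsibility_notes="Low measured bias on internal eval set.",
    quality_benchmarks={"mmlu": 0.71},
    service_security_attestations=["SOC 2", "ISO 27001"],
)
```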

The second step involves diligence testing. Does the model meet your needs? Is it secure? Does it meet GDPR, SOC, ISO, CSA, and other compliance requirements?

The third step is actually landing the solution. This involves model customization and protection. “Once we’ve developed and trained the foundation model, we need to establish guardrails and protection criteria for the model,” said Emily.

It also requires education. “Help the rest of the teams understand the philosophy behind why a foundational model operates the way it does so they can better enable the onboarding of that technology,” she advised.

The future of generative AI according to AWS? SLMs.

So far, much of the generative AI discussion has centered on large language models (LLMs). AWS thinks that will change in the near future.

“Having generalized models that serve very intelligent responses on centralized data sets is exciting,” said Emily. “But it’s not particularly efficient.”

Because of that, she believes more organizations will move towards small language models (SLMs) that solve specific use cases. “[These are] more purpose-built, highly efficient models,” she said.

Hart also thinks the future will hold more specialized generative AI capabilities as organizations transition from individual applications to wider systems. “Generative AI is going the way of operating systems [with] specialized subsystems,” he said. 

Accelerate AI innovation with security fundamentals

“When you’re working on a secure foundation and you’re applying best practices, you can do incredible things,” said Hart.


Elevate your security skills with Pluralsight’s skill development platform. Learn more about our partnership with AWS and start a free trial of Pluralsight Skills.

Pluralsight Content Team


The Pluralsight Content Team delivers the latest industry insights, technical knowledge, and business advice. As tech enthusiasts, we live and breathe the industry and are passionate about sharing our expertise. From programming and cloud computing to cybersecurity and AI, we cover a wide range of topics to keep you up to date and ahead of the curve.
