Simple play icon Course
Skills Expanded

Protecting Online Communities with Azure AI Content Safety

by Janani Ravi

Explore the essentials of Azure AI Content Safety. This course will teach you how to moderate text and image content, and detect harmful or inappropriate material using advanced filtering and prompt shields.

What you'll learn

Managing user-generated content to ensure it is safe and appropriate is a critical challenge for many online platforms. Ensuring that content does not contain harmful, violent, or inappropriate material is essential for maintaining a safe and welcoming environment for all users.

In this course, Protecting Online Communities with Azure AI Content Safety, you’ll learn to effectively moderate and manage content using Azure AI tools. First, you’ll explore how to create and log in to the AI Content Safety Studio Instance.

Next, you’ll discover how to perform text content moderation, including filtering content based on thresholds for hate, violence, sexual content, and self-harm, as well as screening for specific terms using blocklists.

Finally, you’ll learn how to moderate image content and use prompt shields to detect indirect attacks like jailbreaks and prompt injections.

When you’re finished with this course, you’ll have the skills and knowledge of Azure AI Content Safety needed to ensure your platform’s content is safe, compliant, and welcoming for all users.

About the author

Janani has a Masters degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework. After spending years working in tech in the Bay Area, New York, and Singapore at companies such as Microsoft, Google, and Flipkart, Janani finally decided to combine her love for technology with her passion for teaching. She is now the co-founder of Loonycorn, a content studio focused on providing ... more

Ready to upskill? Get started