This course provides the foundational knowledge and skills necessary to secure AI systems. Participants will explore why AI security matters, key concepts at the intersection of AI and cybersecurity, and practical approaches to threat modeling, defending against adversarial attacks, and securing AI models and data. Through lectures, hands-on exercises, and real-world case studies, participants gain the skills to identify, analyze, and mitigate security risks in AI systems.
Prerequisites
To succeed in this course, you will need:
- Familiarity with fundamental AI concepts.
- Experience with a programming language such as Python or R.
- Awareness of basic cybersecurity principles.
- Practical experience in data analysis.
Purpose
| Learn the skills needed to identify, analyze, and mitigate security risks in AI systems. |
Audience
| Engineers and Developers involved with the design and implementation of AI/GenAI applications and systems. |
Role
| Software Engineers | Developers | Data Engineers | Technical Managers |
Skill level
| Beginner |
Style
| Lecture | Hands-on Activities | Case Studies |
Duration
| 3 days |
Related technologies
| Artificial Intelligence | GenAI | Python | R |
Course objectives
- Describe foundational AI concepts, including machine learning, deep learning, and neural networks.
- Discuss the importance of AI security and identify common security challenges in AI systems.
- Apply cybersecurity principles to AI, including threat modeling and defense mechanisms against adversarial attacks.
- Apply best practices for securing AI models and data, including encryption, access control, and secure deployment.
- Discuss strategies for securing generative AI systems, including retrieval-augmented generation (RAG) pipelines and chat agents, using role-based access control.