LAB 367 – Defending Python AI Applications from Sensitive Information Disclosure (NEW)
Course Overview
This lab examines how LLMs can inadvertently leak confidential or personal data through their generated outputs. Learners will test applications for sensitive data exposure and apply strategies that limit access and enforce data protection practices.
After completing this lab, learners will have the knowledge and skill to:
- Detect unintentional information leakage in LLM outputs
- Apply input sanitization and access controls
- Enforce data minimization and privacy-preserving techniques
- Align practices with GDPR and NIST privacy controls
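One common mitigation the objectives above touch on is redacting sensitive data from model outputs before they reach a user. The sketch below illustrates the idea with simple regex-based filtering; the `PII_PATTERNS` names and patterns are illustrative assumptions, not part of the lab, and production systems would use a vetted PII-detection library or service rather than hand-rolled regexes.

```python
import re

# Illustrative patterns for a few common PII shapes (assumed examples,
# not an exhaustive or production-grade list).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII in an LLM response with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# → Contact [REDACTED EMAIL] or [REDACTED PHONE].
```

Running such a filter on every model response supports the data-minimization goal: even if sensitive values slip into the model's context, they are stripped before disclosure.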

Course Details
Course Number: LAB 367
Course Duration: 5 minutes
Course CPE Credits: 0.1
NICE Work Role Category
Available Languages
- English