LAB 369 – Defending Python AI Applications from Data and Model Poisoning (NEW)

Course Overview

This lab focuses on detecting and preventing poisoning attacks, in which malicious training or fine-tuning data is introduced to alter an LLM's behavior. Learners will simulate tampering scenarios and develop robust validation techniques to catch poisoned data before it reaches a model.
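
As a taste of the detection side, the minimal sketch below scans a JSONL fine-tuning dataset for records containing known poisoning trigger phrases. This is an illustration of the technique, not lab material: the file name, the record fields ("prompt", "response"), and the patterns themselves are all assumptions.

    import json
    import re

    # Illustrative trigger phrases an attacker might plant in fine-tuning
    # data; a real deployment would maintain a curated, evolving pattern set.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
        re.compile(r"always (recommend|answer)", re.IGNORECASE),
    ]

    def scan_finetune_records(path):
        """Flag JSONL records whose text matches a known poisoning pattern."""
        flagged = []
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                record = json.loads(line)
                # Assumed schema: each record has "prompt" and "response" fields.
                text = f"{record.get('prompt', '')} {record.get('response', '')}"
                if any(p.search(text) for p in SUSPICIOUS_PATTERNS):
                    flagged.append((lineno, text[:80]))
        return flagged

    if __name__ == "__main__":
        for lineno, snippet in scan_finetune_records("finetune_data.jsonl"):
            print(f"line {lineno}: possible poisoned record: {snippet!r}")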

After completing this lab, learners will have the knowledge and skill to:

  • Detect model behavior tampering via malicious data
  • Enforce dataset provenance and sandboxing of untrusted sources (see the hash-manifest sketch after this list)
  • Validate outputs and implement anomaly monitoring (see the canary-prompt sketch after this list)
  • Apply red teaming and secure model training techniques
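
For the provenance objective, one common technique is a hash manifest: record a SHA-256 digest for every dataset file at ingestion time and refuse to train if any file has since changed. The sketch below assumes a JSON manifest mapping file paths to hex digests; that manifest format is an assumption chosen for illustration.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        """Stream a file through SHA-256 so large datasets never load whole."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_manifest(manifest_path):
        """Return files whose current digest differs from the recorded one."""
        manifest = json.loads(Path(manifest_path).read_text())
        mismatches = {}
        for filename, expected in manifest.items():
            actual = sha256_of(filename)
            if actual != expected:
                mismatches[filename] = {"expected": expected, "actual": actual}
        return mismatches

For output validation and anomaly monitoring, a cheap first line of defense is a set of canary prompts with known-good answers, re-run after every fine-tune: drift on these fixed probes suggests the training data changed the model in unintended ways. The canary set and the generate callable below are placeholders; generate stands in for whatever inference call the application uses.

    # Fixed probes with known-good answers; drift here after a fine-tune is a
    # cheap signal that training data changed behavior unintentionally.
    CANARIES = {
        "What is 2 + 2?": "4",
        "What is the capital of France?": "Paris",
    }

    def run_canaries(generate, canaries=CANARIES):
        """`generate` is any callable mapping a prompt string to a model reply."""
        failures = []
        for prompt, expected in canaries.items():
            reply = generate(prompt)
            if expected.lower() not in reply.lower():
                failures.append((prompt, reply))
        return failures

In practice such checks would run alongside statistical monitors (for example, perplexity or embedding-distance drift on a held-out set), since fixed canaries only catch tampering that happens to touch the probed behaviors.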

Course Details

Course Number: LAB 369
Course Duration: 5 minutes
Course CPE Credits: 0.1

Available Languages

  • English