Security in AI: Defending AI Models from Adversarial Attacks

Safe deployment of ML-based cyber-physical systems requires robust protection against adversarial manipulation. In this demo, we'll show examples of adversarial inputs and how simulation toolkits developed under DARPA's Guaranteeing AI Robustness Against Deception (GARD) program can help developers evaluate the robustness of their models before deploying them in safety- or security-critical environments.
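To make the idea of an adversarial input concrete, here is a minimal sketch of one classic evasion attack, the Fast Gradient Sign Method (FGSM), written in plain PyTorch. It is an illustration of the general technique only, not the GARD toolkits themselves; the function name `fgsm_example` and the `eps` budget are assumptions chosen for this sketch.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.03):
    """Craft an adversarial input with the Fast Gradient Sign Method (FGSM).

    The input is nudged in the direction of the sign of the loss gradient,
    bounded by eps, so the perturbation stays small while pushing the model
    toward a misclassification. (Illustrative sketch; eps is an assumed budget.)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step each input element by eps in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    # Keep the perturbed input in the valid data range (here assumed [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```

Evaluation toolkits of the kind developed under GARD automate this workflow at scale: running batteries of such attacks against a candidate model and reporting how its accuracy degrades as the perturbation budget grows.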