Ensuring AI model resilience against external threats means validating that the model, as configured, can withstand attacks such as adversarial inputs, data poisoning, and misuse. The AAIA™ Study Guide emphasizes configuration testing as a crucial control for simulating threat scenarios and assessing robustness.
“Model configuration testing simulates real-world threat conditions to validate model resilience. This includes testing against adversarial attacks, input manipulation, and exposure of sensitive outputs.”
While access monitoring (C) and anonymization (A) reduce risk, they do not actively validate model behavior under threat conditions. Therefore, model configuration testing (D) is the most effective resilience measure.
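As an illustration of what such configuration testing can involve, the sketch below applies an FGSM-style input perturbation to a toy linear classifier and compares accuracy on clean versus perturbed inputs. The model, data, and perturbation budget are hypothetical examples; the study guide does not prescribe a specific tool or attack technique.

```python
import numpy as np

# --- Hypothetical toy setup: a linear classifier standing in for the model under test ---
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)

# "Trained" weights: for illustration we simply reuse the generating weights.
w = w_true

def predict(X, w):
    """Return class labels (0/1) from the linear score."""
    return (X @ w > 0).astype(int)

def fgsm_perturb(X, y, w, eps):
    """FGSM-style perturbation for a linear model with logistic loss:
    step each input by eps in the sign of the loss gradient w.r.t. the input."""
    scores = X @ w
    probs = 1.0 / (1.0 + np.exp(-scores))
    # Gradient of the logistic loss w.r.t. each input is (p - y) * w.
    grad = (probs - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

clean_acc = (predict(X, w) == y).mean()
X_adv = fgsm_perturb(X, y, w, eps=0.3)
adv_acc = (predict(X_adv, w) == y).mean()

print(f"Clean accuracy:       {clean_acc:.2%}")
print(f"Adversarial accuracy: {adv_acc:.2%}")

# A large accuracy drop under small perturbations indicates the configuration
# lacks robustness and would be flagged during resilience testing.
```

In an actual audit, this kind of test would be run against the deployed model configuration with attack parameters agreed in the test plan, and the observed degradation compared to a documented robustness threshold.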
[Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Governance and Risk Management,” Subsection: “Security and Resilience Testing for AI Models”]