Evasion attacks manipulate inputs to induce misclassification while leaving the model unchanged. AAISM prescribes adversarial robustness controls, with adversarial training as a primary measure: incorporate adversarially perturbed examples into training/validation to harden decision boundaries and improve resilience across threat models (e.g., Lp-bounded perturbations). Monitoring (A) is detective, not preventive. Restricting parameter access (C) protects confidentiality but does not mitigate input-space attacks. Differential privacy (D) addresses training data leakage, not robustness to adversarial inputs.
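In practice, adversarial training is often implemented by mixing perturbed examples (e.g., FGSM- or PGD-generated) into each training batch. Below is a minimal sketch assuming a PyTorch classifier, loss, optimizer, and data loader; the function names and the epsilon bound are illustrative assumptions, not prescribed by AAISM.

```python
# Minimal adversarial-training sketch (FGSM-based), assuming PyTorch.
# `model`, `criterion`, `optimizer`, and `loader` are hypothetical placeholders.
import torch

def fgsm_perturb(model, criterion, x, y, eps=0.03):
    """Generate L-infinity-bounded adversarial examples with one FGSM step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss; clamp to a valid input range.
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_epoch(model, criterion, optimizer, loader, eps=0.03):
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, criterion, x, y, eps)
        optimizer.zero_grad()
        # Mix clean and adversarial examples so the decision boundary is
        # hardened without giving up accuracy on clean inputs.
        loss = 0.5 * criterion(model(x), y) + 0.5 * criterion(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Robust accuracy should then be evaluated against held-out adversarial examples (ideally from a stronger attack than the one used in training) to confirm the hardening generalizes.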
References: AI Security Management™ (AAISM) Body of Knowledge: Adversarial ML (Evasion vs. Poisoning); Robustness and Resilience Controls; Adversarial Training. AAISM Study Guide: Model Hardening Techniques; Evaluation of Robust Accuracy; Security Testing with Adversarial Examples.