AAISM defines data poisoning as the intentional manipulation of training data so that the learned model behaves incorrectly (e.g., skewed lending approvals or denials) while appearing valid. In red-team exercises, the correct simulation is to corrupt or seed the training set with adversarial examples or mislabeled records so that the model learns biased or erroneous decision boundaries. Encrypting inputs (A) is unrelated to poisoning; adding noise to outputs (B) describes perturbation of predictions, not of training data; stealing model weights (C) is model extraction, not poisoning.
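As a rough illustration of the training-time attack described above, the sketch below simulates poisoning by flipping a fraction of training labels and comparing a model trained on clean data against one trained on the corrupted set. The synthetic dataset, the 20% flip fraction, the flip_labels helper, and the use of scikit-learn's LogisticRegression are illustrative assumptions, not part of the AAISM material.

```python
# Minimal sketch of a red-team data-poisoning simulation via label flipping.
# Assumptions (not from AAISM): synthetic "approve/deny" data, a 20% poison
# fraction, and a logistic-regression stand-in for the lending model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for lending features.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def flip_labels(labels, fraction, rng):
    """Flip the labels of a random subset of training records (the poison)."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: same features, but 20% of training labels flipped.
y_poisoned = flip_labels(y_train, fraction=0.20, rng=rng)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned test accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

The drop in test accuracy (or a shift in which records get approved) is the kind of skewed, plausible-looking behavior the exercise is meant to surface; real poisoning campaigns are typically more targeted than uniform label flipping.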
[References: AI Security Management™ (AAISM) Body of Knowledge — Adversarial ML Threats; Data Poisoning and Training-Time Attacks. AAISM Study Guide — Red-Team Methods for AI; Poisoning vs. Evasion vs. Model Extraction; Controls and Testing for Safety-Critical Decisions.]