Model poisoning occurs when an attacker manipulates the training data or the training process of an AI model so that its predictions are deliberately inaccurate or biased. In the SecurityX CAS-005 objectives, this is part of understanding emerging technology threats, specifically AI/ML vulnerabilities. This differs from:
Social engineering, which manipulates humans rather than AI models.
Output handling, which deals with how outputs are processed but doesn’t cause inaccuracy at the model level.
Prompt injections, which manipulate the model at query time, not during training.
Because model poisoning directly corrupts the AI model itself, it is the clearest reason the AI's outputs could be inaccurate; the sketch below makes the effect concrete.
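To illustrate, here is a minimal sketch of training-data poisoning via label flipping, assuming scikit-learn is available. The dataset, model, and 30% flip rate are illustrative assumptions, not part of the CAS-005 material; the point is only that corrupting the training labels degrades the trained model's accuracy on clean test data.

```python
# Minimal sketch: label-flipping poisoning of training data (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Build a clean binary-classification dataset and hold out a test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))

# Poisoning step: the attacker flips the labels of 30% of the training rows
# before training, corrupting the model itself rather than manipulating its
# inputs at query time (as a prompt injection would).
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

The poisoned model's test accuracy drops even though the test data and the query-time inputs are untouched, which is the defining trait of this threat: the inaccuracy originates in the model itself, not in how outputs are handled or how prompts are crafted.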