AAISM materials emphasize that robust input sanitization is the most important preventive safeguard for a new AI product. Preventive controls stop malicious or malformed inputs from reaching the model in the first place, reducing the likelihood of prompt injection, evasion, or poisoning at inference time. The other options do not fit the preventive category: model output monitoring is a detective control, penetration testing is an assessment technique rather than a safeguard, and differential privacy protects data privacy but does not block adversarial input manipulation. Therefore, robust input sanitization is the correct answer.
References:
- AAISM Study Guide – AI Technologies and Controls (Preventive vs. Detective Safeguards)
- ISACA AI Security Management – Input Validation in AI Systems
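To make the idea of a preventive gate concrete, here is a minimal sketch of input sanitization applied before a prompt reaches the model. This is an illustration only, assuming a chat-style AI product that accepts free-form user text; the function name, patterns, and length limit are invented for the example and are not taken from the AAISM materials.

```python
import re

# Hypothetical pre-inference sanitization gate (preventive control).
# Pattern list and threshold are illustrative assumptions, not a real ruleset.

MAX_PROMPT_CHARS = 4000

# Naive signatures of common prompt-injection phrasing; a real deployment
# would use a maintained ruleset or classifier rather than this short list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) (instructions|rules)", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
    re.compile(r"disregard (the )?(above|prior) (instructions|context)", re.IGNORECASE),
]


def sanitize_prompt(raw: str) -> str:
    """Reject or clean user input before it ever reaches the model."""
    # Strip non-printable control characters that can smuggle hidden directives.
    cleaned = "".join(ch for ch in raw if ch.isprintable() or ch in "\n\t")

    # Enforce a length ceiling to limit context-stuffing attempts.
    if len(cleaned) > MAX_PROMPT_CHARS:
        raise ValueError("Prompt exceeds maximum allowed length")

    # Block inputs matching known injection phrasing.
    for pattern in INJECTION_PATTERNS:
        if pattern.search(cleaned):
            raise ValueError("Prompt rejected by input-sanitization policy")

    return cleaned


if __name__ == "__main__":
    print(sanitize_prompt("Summarize this quarter's incident reports."))
```

The point of the sketch is the placement of the control: it runs before inference, so a blocked input never influences the model, which is what distinguishes it from detective controls such as output monitoring.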