AAISM identifies continuous monitoring of AI outputs, especially generative outputs, as the most effective security control: it ensures that policy violations, unsafe responses, and data leakage are detected and corrected as they occur.
A kill switch (C) is a last-resort measure, not a primary control. Exceeding performance benchmarks (A) does not ensure ongoing safety or policy compliance. Validating training data (D) is important but insufficient, because many generative-output risks only emerge at inference time.
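As a rough illustration of what continuous output monitoring can look like in practice, the sketch below scans each generated response against a set of policy patterns before it is released. The pattern names and regexes are illustrative assumptions, not part of AAISM or any specific product:

```python
import re

# Illustrative policy patterns for detecting data leakage in generative output.
# Real deployments would use far richer detectors (classifiers, DLP engines).
POLICY_PATTERNS = {
    "email_leak": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn_leak": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def monitor_output(text: str) -> dict:
    """Scan one generated response and report any policy violations."""
    violations = [name for name, pat in POLICY_PATTERNS.items()
                  if pat.search(text)]
    return {"allowed": not violations, "violations": violations}

print(monitor_output("Your request has been processed."))
print(monitor_output("Contact me at alice@example.com"))
```

Because the check runs on every response, violations are caught and corrected continuously, which is what distinguishes this control from one-time measures such as pre-deployment training-data validation.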
[References: AAISM Study Guide – Generative AI Security Controls; Output Monitoring and Policy Alignment.]