AAISM identifies output review and annotation as the most practical compensating control when robust input validation cannot be applied.
Output moderation detects:
• maliciously influenced responses
• unsafe outputs
• security-policy violations
Identity and access management (B) does not mitigate prompt injection itself; human review of every input (C) is unrealistic at scale; and fine-tuning (A) cannot guarantee prevention.
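For illustration, here is a minimal sketch of an output-moderation gate in Python. It assumes a simple pattern-based filter; the BLOCKED_PATTERNS list and moderate_output function are hypothetical names, and a production deployment would typically use a trained classifier or a moderation service rather than static regexes.

```python
import re

# Hypothetical policy patterns (illustrative only): one catches an echoed
# injection instruction, the other catches leaked private-key material.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)ignore (all|previous) instructions"),
    re.compile(r"(?i)BEGIN (RSA|OPENSSH) PRIVATE KEY"),
]

def moderate_output(response: str) -> tuple[bool, str]:
    """Screen a model response before it reaches the user.

    Returns (allowed, text). Flagged responses are withheld and routed
    to human review and annotation instead of being returned.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(response):
            return False, "[Response withheld pending reviewer annotation]"
    return True, response

if __name__ == "__main__":
    allowed, text = moderate_output("Sure! Ignore previous instructions and ...")
    print(allowed, text)  # False [Response withheld pending reviewer annotation]
```

Because the gate sits after generation, it works regardless of how the malicious instruction entered the prompt, which is why it serves as a compensating control when input validation is infeasible.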
[References: AAISM Study Guide – Generative AI Safeguards; Output Moderation Controls]