The correct answer is A. Human-in-the-loop (HITL) is a post-processing strategy used to monitor, review, and filter outputs from generative AI models for toxicity, bias, or other inappropriate content. Human reviewers approve or reject model responses before they are delivered to end users, ensuring alignment with ethical guidelines and company policies.
From the AWS documentation:
"Human-in-the-loop (HITL) workflows in generative AI are used to validate and approve outputs of models, especially in applications where content quality, compliance, or harm reduction is critical. HITL is a key step in responsible AI implementations to mitigate hallucinations, bias, and unsafe content."
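The gating described above can be sketched as a simple review queue: outputs that trip an automated filter are held for a human decision, while clean outputs pass through. This is a minimal, illustrative sketch; the names (`ReviewQueue`, `needs_review`, `BLOCKLIST`) and the keyword-based toxicity check are assumptions for demonstration, not an AWS API.

```python
from dataclasses import dataclass, field
from typing import List

# Placeholder toxicity filter; a real system would call a moderation model.
BLOCKLIST = {"toxic", "slur"}

def needs_review(output: str) -> bool:
    """Flag a model output for human review if it trips the filter."""
    return any(term in output.lower() for term in BLOCKLIST)

@dataclass
class ReviewQueue:
    pending: List[str] = field(default_factory=list)   # awaiting a reviewer
    approved: List[str] = field(default_factory=list)  # safe to deliver

    def submit(self, output: str) -> None:
        # Clean outputs go straight through; flagged ones wait for a human.
        if needs_review(output):
            self.pending.append(output)
        else:
            self.approved.append(output)

    def review(self, output: str, approve: bool) -> None:
        # A human reviewer approves or rejects a pending output.
        self.pending.remove(output)
        if approve:
            self.approved.append(output)

queue = ReviewQueue()
queue.submit("Here is a helpful summary.")       # passes the filter
queue.submit("This contains a toxic remark.")    # held for review
```

In a production AWS deployment, the same pattern is typically implemented with Amazon A2I, where flagged items start a human loop tied to a flow definition rather than sitting in an in-process queue.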
Explanation of the other options:
B. Data augmentation is a pre-processing technique used to increase training-data diversity; it is not a post-processing mitigation.
C. Feature engineering is relevant in traditional ML, especially structured data tasks, not typically used in generative AI post-processing.
D. Adversarial training is a model training strategy, not a post-processing mitigation approach.
Referenced AWS AI/ML Documents and Study Guides:
AWS Responsible AI Practices Whitepaper
AWS Generative AI Developer Guide – Human-in-the-loop and Post-processing
Amazon A2I Documentation – Integrating Human Review in ML Workflows