The correct answer is C – Test the AI solution to ensure that it does not discriminate against any protected groups. According to AWS Responsible AI principles, fairness and bias mitigation are essential when AI is used for high-impact decisions such as hiring. AWS documentation emphasizes evaluating datasets, model outputs, and performance across demographic groups to ensure that AI systems do not reinforce or reproduce discriminatory patterns. Services such as Amazon SageMaker Clarify support automated bias detection and explainability, helping teams identify and mitigate unwanted correlations in training data or model predictions.

Option A violates AWS guidance, as human-in-the-loop review is required for sensitive decisions. Option B risks amplifying historical bias, because training only on “successful” hires can create feedback loops. Option D contradicts transparency principles, which AWS states are crucial for accountability in regulated or ethical decision-making domains. Therefore, rigorous fairness testing aligns with AWS’s recommended practices for responsible AI in hiring workflows.
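As a rough illustration of what such fairness testing can look like in practice, the sketch below runs a pre-training bias check with the SageMaker Python SDK’s Clarify module. The S3 paths, the “hired” label, the column headers, and the “gender” facet are hypothetical placeholders chosen for this example, not details taken from the question.

```python
# Minimal sketch: pre-training bias check with SageMaker Clarify (Python SDK).
# The bucket paths, "hired" label, headers, and "gender" facet below are
# hypothetical placeholders; substitute the columns of your own hiring dataset.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/hiring/train.csv",   # hypothetical path
    s3_output_path="s3://my-bucket/hiring/clarify-output",  # hypothetical path
    label="hired",
    headers=["hired", "gender", "age", "years_experience", "education_level"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # "hired" treated as the favorable outcome
    facet_name="gender",            # protected attribute to analyze
    facet_values_or_threshold=[0],  # group compared against the rest of the data
)

# Compute pre-training bias metrics such as class imbalance (CI) and
# difference in proportions of labels (DPL) before any model is trained.
clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```

The bias report is written to the configured S3 output path and can also be reviewed in SageMaker Studio; post-training bias metrics and explainability analyses follow the same pattern through the processor’s other run methods.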
Referenced AWS Documentation:
AWS Responsible AI Whitepaper – Fairness and Bias Mitigation
Amazon SageMaker Clarify Documentation