Comprehensive and Detailed Explanation (AWS AI documentation):
AWS Responsible AI best practices emphasize that bias should be detected, measured, mitigated, and monitored throughout the ML lifecycle, especially in sensitive domains such as healthcare. When biased outcomes are observed, AWS guidance recommends addressing bias at the data and model levels, not only at the output level.
Using Amazon SageMaker Clarify aligns directly with AWS Responsible AI principles because it is designed to:
- Detect and quantify bias in datasets and in model predictions across sensitive attributes such as demographic groups
- Provide pre-training and post-training bias metrics, allowing practitioners to identify where bias originates
- Support data-centric mitigation, including improving dataset balance and representativeness (a minimal configuration sketch follows this list)
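To make this concrete, here is a minimal sketch of a pre-training bias check with the SageMaker Python SDK. The dataset, S3 paths, IAM role ARN, label column (readmitted), and facet column (age_group) are illustrative assumptions for a hypothetical hospital-readmission model, not details from the question:

```python
from sagemaker import clarify
import sagemaker

session = sagemaker.Session()

# Processor that runs the Clarify bias-analysis job
clarify_processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and where the bias report should go (hypothetical S3 paths)
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/clinical/train.csv",
    s3_output_path="s3://my-bucket/clarify/bias-report",
    label="readmitted",                                  # hypothetical binary label column
    headers=["age_group", "sex", "bmi", "readmitted"],   # hypothetical columns
    dataset_type="text/csv",
)

# Which outcome counts as "positive" and which demographic group to audit
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="age_group",            # hypothetical sensitive attribute
    facet_values_or_threshold=["65+"],
)

# Pre-training metrics: Class Imbalance (CI) and Difference in Positive
# Proportions in Labels (DPL) reveal whether the dataset itself is skewed
clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```

A companion run_post_training_bias job against a deployed model would then show whether the skew persists in predictions, which is how pre- and post-training metrics together pinpoint where bias originates.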
After identifying bias with SageMaker Clarify, collecting additional balanced training data and retraining the model helps ensure that:
- The model learns from a more representative dataset
- Recommendations that differ by demographic group (disparate treatment) are reduced
- Fairness is improved while maintaining clinical accuracy (a rebalancing sketch follows this list)
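Collecting genuinely new data from under-represented groups is the preferred fix; where that is in progress, oversampling can approximate a balanced training set in the interim. The sketch below is a minimal, assumption-laden example using the same hypothetical age_group column as above:

```python
import pandas as pd

# Hypothetical training set; "age_group" is the sensitive attribute flagged by Clarify
df = pd.read_csv("train.csv")

# Oversample under-represented demographic groups until each group matches
# the size of the largest one, then retrain the model on the balanced frame
target_size = df["age_group"].value_counts().max()
balanced = (
    df.groupby("age_group", group_keys=False)
      .apply(lambda g: g.sample(n=target_size, replace=True, random_state=42))
      .reset_index(drop=True)
)

print(balanced["age_group"].value_counts())  # groups are now equally represented
```

After retraining on the balanced data, rerunning the Clarify bias job closes the loop: detect, mitigate, then verify that the pre- and post-training metrics have improved.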
Why the other options are not sufficient or aligned with AWS best practices:
B. Prompt engineering can influence outputs but does not address the underlying data or model bias, and it is not sufficient for regulated, high-risk domains such as healthcare.
C. Content filtering blocks or rewrites outputs after generation but does not prevent biased decision-making by the model itself.
D. Deploying separate FM endpoints per demographic group increases the risk of reinforcing bias and violates fairness principles rather than mitigating them.
AWS AI Study Guide References:
- AWS Responsible AI principles: Fairness and Governance
- Amazon SageMaker Clarify: bias detection and mitigation
- AWS best practices for ML in high-risk domains such as healthcare