In Microsoft’s Responsible AI principles, Fairness requires that AI systems treat all individuals and groups equitably and avoid biased decisions. The AI-900 study guide explicitly states that fairness is violated when an AI model produces outcomes that systematically favor one group over another, such as preferring a particular gender, race, or age group.
In this scenario, a loan approval system shows gender bias: it approves or rejects applications at different rates depending on the applicant's gender. This directly contradicts the fairness principle, because the system should base its decisions solely on relevant financial attributes (e.g., credit score, income) rather than on personal characteristics.
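One common way to surface this kind of bias is to compare approval rates across groups (the "demographic parity" gap). A minimal sketch, using invented outcome data purely for illustration:

```python
# Hypothetical loan-approval outcomes: (applicant gender, approved?).
# The data below are invented to illustrate a biased system.
outcomes = [
    ("F", 1), ("F", 0), ("F", 0), ("F", 0),   # women: 1 of 4 approved (25%)
    ("M", 1), ("M", 1), ("M", 1), ("M", 0),   # men:   3 of 4 approved (75%)
]

def approval_rate(records, group):
    """Fraction of applications from `group` that were approved."""
    decisions = [approved for g, approved in records if g == group]
    return sum(decisions) / len(decisions)

# Demographic parity difference: the gap in approval rates between groups.
# A gap near 0 suggests equitable treatment; a large gap signals possible bias.
gap = abs(approval_rate(outcomes, "M") - approval_rate(outcomes, "F"))
print(f"approval-rate gap: {gap:.2f}")  # prints "approval-rate gap: 0.50"
```

In practice, Microsoft's Fairlearn library provides richer tooling for this kind of group-metric comparison, but the idea is the same: a large gap in outcomes across a sensitive attribute is evidence the fairness principle is being violated.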
Other principles explained in the AI-900 course include:
Accountability: People remain responsible and provide oversight for AI systems.
Transparency: Users can understand how the system works and how decisions are made.
Reliability and Safety: The system operates consistently, accurately, and safely.
Since gender bias undermines equitable treatment, the principle violated is Fairness.