Statement 1: Human-in-the-loop practices provide accountability for AI-generated decisions. = Yes
Microsoft responsible AI guidance recommends keeping a human in the loop, maintaining human oversight, and ensuring humans have a role in decisions based on model output.
Statement 2: Deploying an AI system to a production environment eliminates the need for ongoing monitoring. = No
Microsoft guidance states that after deployment, an AI-powered product or feature requires ongoing monitoring and improvement.
Statement 3: Disclosing the team that designed and deployed an AI system provides accountability for the system’s output. = Yes
Accountability in responsible AI means people and organizations remain responsible for AI systems and their effects. Identifying the people or team responsible for designing and deploying the system supports accountability and governance.