The correct answer is D, model disgorgement. This technique refers to removing or eliminating the influence of improperly obtained, biased, or unlawfully used data from a trained machine learning model. It is increasingly discussed in AI governance and regulatory enforcement, particularly where models were trained on data collected without proper consent or in violation of legal requirements. Because a trained model retains learned patterns even after the offending data is deleted from the dataset, disgorgement may require retraining the model from scratch on clean data, or discarding it entirely, so that the problematic data no longer influences its outputs. This aligns with accountability and compliance principles in AI governance, under which organizations must ensure lawful data use throughout the AI lifecycle. The other options, such as data cleansing and de-duplication, improve data quality but do not remove learned patterns already embedded in a trained model.
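The distinction can be sketched with a toy example. This is a minimal illustration, not a real unlearning algorithm: the "model" is just the mean of the training values, and the record fields (`value`, `consent`) are invented for the sketch. The point is that deleting tainted rows from the dataset (cleansing) does not change an already-trained model; disgorgement means discarding that model and retraining only on lawfully usable data.

```python
# Toy sketch of model disgorgement via full retraining.
# The record schema and the "model" (a mean) are illustrative assumptions.
from statistics import mean


def train(records):
    """'Train' a toy model: the mean of the feature values.

    Stands in for any learner; every record influences the
    resulting model parameters."""
    return mean(r["value"] for r in records)


def disgorge_and_retrain(records, is_tainted):
    """Discard the old model entirely and retrain from scratch on
    clean data, so tainted records no longer influence the model."""
    clean = [r for r in records if not is_tainted(r)]
    return train(clean)


records = [
    {"value": 10.0, "consent": True},
    {"value": 50.0, "consent": False},  # collected without consent
    {"value": 20.0, "consent": True},
]

original_model = train(records)  # still shaped by the tainted record
clean_model = disgorge_and_retrain(records, lambda r: not r["consent"])
```

Merely deleting the non-consented row afterward would leave `original_model` unchanged; only retraining produces a model free of its influence. Real-world disgorgement faces the same trade-off at scale, which is why "machine unlearning" techniques that approximate retraining are an active research area.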