Hallucinations (instances where a generative model produces factual errors or nonsensical information with high confidence) are primarily caused by inadequate data quality in the training set. If the model is trained on data that is contradictory, incomplete, or contains noise (incorrect facts), it fails to learn accurate semantic relationships. The ISACA AAIA™ manual highlights data cleaning and provenance as essential mitigations. While weak controls (Option C) or poor change management (Option D) can introduce other risks, the root cause of the hallucination itself typically lies in the quality and accuracy of the underlying training data.
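To make the mitigation concrete, below is a minimal sketch of a pre-training data-cleaning step that deduplicates records and uses provenance to resolve contradictions. The `records` structure, field names, and `trusted_sources` set are hypothetical illustrations, not part of the AAIA manual, and a real pipeline would operate at far larger scale:

```python
# Minimal sketch: pre-training data cleaning with provenance tracking.
# The record schema and source names here are hypothetical.

from collections import defaultdict

records = [
    {"claim": "The Eiffel Tower is in Paris", "source": "encyclopedia-v2", "label": True},
    {"claim": "The Eiffel Tower is in Paris", "source": "web-scrape-17", "label": True},   # duplicate
    {"claim": "The Eiffel Tower is in Paris", "source": "forum-dump-03", "label": False},  # contradiction
]

def clean(records, trusted_sources=frozenset({"encyclopedia-v2"})):
    """Drop exact duplicates and resolve contradictions via provenance."""
    by_claim = defaultdict(list)
    for r in records:
        by_claim[r["claim"]].append(r)

    cleaned = []
    for claim, group in by_claim.items():
        labels = {r["label"] for r in group}
        if len(labels) == 1:
            # Consistent across sources: keep one copy (deduplication).
            cleaned.append(group[0])
        else:
            # Contradictory: prefer records with trusted provenance.
            trusted = [r for r in group if r["source"] in trusted_sources]
            if trusted:
                cleaned.append(trusted[0])
            # Otherwise drop the claim entirely rather than train on noise.
    return cleaned

print(clean(records))
# [{'claim': 'The Eiffel Tower is in Paris', 'source': 'encyclopedia-v2', 'label': True}]
```

The design choice worth noting: when sources conflict and none is trusted, the sketch discards the claim rather than guessing, since training on contradictory facts is exactly what degrades the learned semantic relationships.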