The greatest concern is that the generative model may hallucinate, producing incorrect facts or conclusions (option B). In an audit context, hallucinations can create false statements about control effectiveness, misreport risks, or incorrectly summarize evidence.
AAIA stresses that auditors must maintain professional skepticism and validate AI-generated content. Such misstatements are high risk because they undermine audit credibility, regulatory compliance, and organizational decision-making.
Formatting inconsistency (option C) and generic language (option D) are cosmetic issues. Outdated information (option A) is a concern, but it does not inherently produce false conclusions.
Hallucinated misinformation is therefore the most severe issue in AI-generated audit reporting.
References:
AAIA Domain 3: AI in Audit Processes (accuracy of AI outputs, hallucination risks)
AAIA Domain 5: Ethical Responsibilities in AI-Assisted Work