Allowing AI to autonomously generate code without human review introduces significant risks, including security vulnerabilities, logic errors, and noncompliance with organizational development standards. The AAIA™ Study Guide strongly advocates for human-in-the-loop oversight, particularly in automated development contexts.
“AI-assisted development must include manual code reviews to ensure functionality, compliance, and security. Autonomous code generation without validation increases the risk of introducing undetected flaws.”
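The human-in-the-loop requirement above can be enforced mechanically. As a minimal sketch (the field names `ai_generated` and `human_approvals` are hypothetical, not from any specific CI system), a pre-merge gate might refuse AI-generated changes that lack at least one human approval:

```python
# Hypothetical pre-merge gate: block AI-generated changes without human review.
# The commit metadata schema here is illustrative only.

def may_merge(commit: dict) -> bool:
    """Allow a merge only if AI-generated code has at least one human approval."""
    ai_generated = commit.get("ai_generated", False)
    human_approvals = commit.get("human_approvals", 0)
    if ai_generated and human_approvals < 1:
        return False  # human-in-the-loop review is mandatory
    return True

# AI-generated change with no reviewer is blocked; reviewed or
# human-written changes pass.
print(may_merge({"ai_generated": True, "human_approvals": 0}))   # False
print(may_merge({"ai_generated": True, "human_approvals": 2}))   # True
print(may_merge({"ai_generated": False, "human_approvals": 0}))  # True
```

In practice this kind of check would run as a branch-protection rule or CI step, so autonomous code generation can never bypass manual review.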
While options A, B, and C involve operational risks or inefficiencies, only option D constitutes a direct breach of secure development life cycle principles.
[Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Fundamentals and Technologies,” Subsection: “AI in Software Development and Associated Risks”]