Comprehensive and Detailed Explanation From Agentic AI Business Solutions Topics:
The correct answer is C. Implement version control for all the AI system components.
This question is not only about model approval. It is about creating a deployment process that allows the organization to:
review every release before production
compare current and prior versions
evaluate the impact of changes
improve business continuity if a deployment introduces risk
That makes version control for all AI system components the strongest answer.
Why C is correct
The requirement says the security and compliance team must have access to prior versions to determine exposures introduced by each release. That means the organization must be able to track, compare, and potentially roll back not just the model itself, but the broader AI solution over time.
In real enterprise AI deployments, “AI system components” usually include:
models
prompts
orchestration logic
configuration files
policies
connectors
inference code
evaluation assets
deployment definitions
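The component list above can be made concrete. Below is a minimal sketch of pinning every component of a release by content hash, so that any change between releases is detectable; the component names and contents are illustrative assumptions, not taken from the question.

```python
import hashlib
import json

# Hypothetical release: every AI system component is captured, not just the
# model weights. In practice these would be files or registry artifacts.
components = {
    "model":         "weights-v7 binary blob",
    "prompt":        "You are a payments assistant. Never reveal PII.",
    "orchestration": "steps: [validate, route, execute, audit]",
    "config":        "max_tokens=512, temperature=0.2",
    "policy":        "deny: export of card numbers",
    "connector":     "erp-adapter v3.1",
}

def manifest(components: dict) -> dict:
    """Pin each component by a content hash so any change is visible later."""
    return {
        name: hashlib.sha256(body.encode()).hexdigest()[:12]
        for name, body in components.items()
    }

release_v1 = manifest(components)
print(json.dumps(release_v1, indent=2))
```

Storing a manifest like this per release is one simple way to give reviewers a stable record of what each deployment actually contained.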
If only the model is versioned, the team may miss exposure introduced by surrounding components. For example:
a prompt change could create unsafe outputs
a policy/configuration change could expose sensitive data
an orchestration update could alter transaction behavior
a connector change could affect compliance boundaries
That is why full AI system version control is the best answer. It gives security and compliance teams complete visibility into what changed across releases.
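To illustrate how full-system versioning delivers that visibility, here is a hedged sketch of comparing two release manifests to surface exactly which components changed; the manifests and hash values are made up for the example.

```python
def diff_releases(prev: dict, curr: dict) -> dict:
    """Report which components changed, were added, or were removed."""
    return {
        "changed": sorted(k for k in prev.keys() & curr.keys() if prev[k] != curr[k]),
        "added":   sorted(curr.keys() - prev.keys()),
        "removed": sorted(prev.keys() - curr.keys()),
    }

# Hypothetical content hashes: the prompt and a policy changed in this
# release, and a new connector appeared.
v1 = {"model": "a1b2", "prompt": "c3d4", "policy": "e5f6"}
v2 = {"model": "a1b2", "prompt": "ffff", "policy": "0000", "connector": "9e8d"}

print(diff_releases(v1, v2))
# {'changed': ['policy', 'prompt'], 'added': ['connector'], 'removed': []}
```

A diff like this is the kind of evidence a security and compliance review needs: the model hash is unchanged, so a model registry alone would report "no change," yet the prompt, policy, and connector all moved.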
It also enhances business continuity because version control supports:
rollback to known-good versions
change auditing
release comparison
traceability
controlled recovery from faulty deployments
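The rollback capability in that list can be sketched as a simple policy over release history: pick the most recent release that passed post-deployment checks. The Release record and the known_good flag here are illustrative assumptions, not part of the question.

```python
from dataclasses import dataclass

@dataclass
class Release:
    tag: str          # e.g. a version-control tag or registry label
    known_good: bool  # set once the release passes post-deployment checks

def rollback_target(history: list) -> str:
    """Return the tag of the most recent known-good release."""
    for release in reversed(history):
        if release.known_good:
            return release.tag
    raise RuntimeError("no known-good release to roll back to")

history = [
    Release("v1.0", known_good=True),
    Release("v1.1", known_good=True),
    Release("v1.2", known_good=False),  # current faulty deployment
]
print(rollback_target(history))  # v1.1
```

The point of the sketch is that rollback is only possible because every prior release is preserved and identifiable, which is exactly what version control of all components provides.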
From an agentic AI business solutions perspective, this is the most robust governance pattern because AI outcomes are rarely determined by the model alone. They are determined by the entire solution stack.
Why the other options are less appropriate
A. Create a central model registry that uses version history
A model registry is useful, and version history helps, but this option is too narrow. The question asks about evaluating the impact of each deployment and enhancing business continuity. In enterprise AI systems, impact is often caused by more than just the model artifact. A model registry does not necessarily capture all surrounding components that affect production behavior.
B. Establish a promotion process by using a quality gate
A quality gate is valuable for approval workflows, but it does not by itself satisfy the need for deep access to prior versions across the system. It controls promotion, but it does not fully provide historical traceability and rollback coverage for all AI system components.
D. Track model retirement schedules to prevent service disruptions
This may support lifecycle planning, but it does not address the core requirement of comparing releases, reviewing prior versions, and evaluating exposure introduced by each deployment.
Expert reasoning
This question combines three ideas: access to prior versions for security and compliance review, evaluation of the impact of each deployment, and business continuity when a release introduces risk.
When those appear together, the strongest answer is typically the one that provides end-to-end traceability and rollback across the whole solution, not just a single artifact.
That is why version control for all AI system components is the best recommendation.
So the correct choice is:
Answer: C