AWS documentation identifies hallucinations as a known limitation of generative AI models, particularly when used in business and production environments. Hallucinations occur when a model generates outputs that are unrelated, incorrect, fabricated, or unsupported by the input data or provided context. These outputs often appear confident and fluent, which can make them difficult to detect without additional validation.
Generative AI models, including large language models, operate using probabilistic token prediction based on patterns learned during training. AWS explains that these models do not have true reasoning or factual grounding unless explicitly provided with context or external knowledge. As a result, when prompts are ambiguous, incomplete, or outside the model’s training distribution, the model may produce responses that are irrelevant or misleading.
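The token-prediction point can be illustrated with a toy example. The sketch below is purely hypothetical (the vocabulary and scores are invented for illustration, not drawn from any real model): a model samples a fluent-sounding continuation from a probability distribution even when nothing in its context supports the answer, which is exactly how a confident hallucination arises.

```python
import numpy as np

# Toy next-token distribution for the prompt "The capital of Atlantis is" --
# the model has no grounded answer, yet it still assigns probability mass
# to fluent-sounding continuations and will emit one confidently.
vocab = ["Poseidonia", "unknown", "Atlantis City", "not a real place"]
logits = np.array([2.1, 0.3, 1.4, 0.2])  # hypothetical scores, illustration only

probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the toy vocabulary
choice = np.random.choice(vocab, p=probs)      # sample the "most plausible" token

print(dict(zip(vocab, probs.round(2))))
print("Sampled continuation:", choice)
```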
This behavior presents a risk for business use cases such as customer support, reporting, or decision-making systems. AWS highlights hallucinations as a key challenge that must be mitigated through techniques such as Retrieval Augmented Generation (RAG), prompt engineering, human review, and output validation.
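As a rough sketch of the RAG mitigation, the snippet below shows the general pattern only: retrieve relevant passages first, then constrain the model to answer from those passages. The keyword-overlap retriever, the sample knowledge base, and the prompt wording are hypothetical placeholders, not a specific AWS API; a production system would typically use a vector store or a managed knowledge base and send the grounded prompt to a hosted model.

```python
def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    # Hypothetical retriever: rank stored passages by naive keyword overlap.
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(set(query.lower().split()) & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    # Constrain the model to the retrieved context to reduce hallucinations.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below. "
        "If the context does not contain the answer, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

knowledge_base = [
    "Order 1042 shipped on March 3 and is in transit.",
    "Refunds are processed within 5 business days.",
]
query = "When did order 1042 ship?"
prompt = build_grounded_prompt(query, retrieve(query, knowledge_base))
print(prompt)  # this grounded prompt would then be sent to the model
```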
The other options are not correct. Interpretability refers to the ability to understand and explain model decisions, not to incorrect outputs. Data bias relates to skewed or unrepresentative training data. Nondeterminism refers to run-to-run variability in outputs, not to their relevance or correctness.
AWS consistently categorizes hallucinations as a primary disadvantage of generative AI, making this the correct answer.