The scenario describes a model producing plausible-sounding content that is factually wrong, a common generative AI failure mode often referred to as a “hallucination.” Since “hallucination” is not offered in the dropdown, the best matching choice is model inaccuracy, because the core problem is that the model’s output is incorrect even though it appears confident and coherent.
The other options do not fit the behavior described: data leakage is about sensitive information being exposed (for example, proprietary prompts, secrets, or personal data). Prompt injection is an attack technique in which a user tries to override system instructions or cause unsafe actions. Overreliance describes a human and organizational risk (trusting the model’s output too much) rather than the model’s intrinsic behavior of generating incorrect facts. Overreliance can be a consequence of this behavior, but it is not what the behavior itself is called.
In practice, you mitigate this kind of inaccuracy by grounding responses in trusted sources (for example, retrieval-augmented generation, or RAG), constraining prompts with explicit requirements, adding verification steps (citations, cross-checking, tool-based validation), and keeping human review for high-impact use cases. The sketch below illustrates the grounding idea.
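As a rough illustration, here is a minimal Python sketch of that grounding pattern: retrieve trusted passages, constrain the prompt to them, and require citations so unsupported claims are easier to catch. The `search_knowledge_base` and `call_llm` functions are hypothetical stand-ins for your retrieval layer and model client, not real library APIs.

```python
# Minimal sketch of grounding an answer in retrieved sources (RAG-style).
# NOTE: search_knowledge_base and call_llm are hypothetical placeholders.

def search_knowledge_base(query: str) -> list[str]:
    """Hypothetical retriever: return trusted passages relevant to the query."""
    # In practice this would query a vector store or search index.
    return ["Passage A ...", "Passage B ..."]

def call_llm(prompt: str) -> str:
    """Hypothetical model client: send a prompt, return the completion."""
    return "..."

def grounded_answer(question: str) -> str:
    passages = search_knowledge_base(question)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    # Constrain the prompt: answer only from the supplied sources, cite them,
    # and allow an explicit "I don't know" instead of a fabricated answer.
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite sources as [n]. If the sources do not contain the answer, "
        "say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The key design choice is that the prompt gives the model a safe fallback (“say you don’t know”), which reduces the incentive to fill gaps with plausible but incorrect content.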