In Microsoft Azure OpenAI Service and the AI-900/AI-102 study materials, grounding data is the correct term for providing contextual or external information to a generative AI model such as GPT-3.5 or GPT-4 in order to improve the accuracy, relevance, and quality of its responses.
Grounding is a prompt engineering technique where the AI model is supplemented with relevant background data, such as company documents, knowledge bases, or user context, that helps the model generate factually correct and context-aware responses. Microsoft Learn defines grounding as a way to connect the model’s general knowledge to specific, real-world information. For example, if you ask a GPT-3.5 model about your organization’s HR policies, the base model will not know them unless that policy information is provided (grounded) in the prompt. By embedding this contextual data, the AI becomes “grounded” in the facts it needs to respond reliably.
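The HR-policy example above can be sketched in code. This is a minimal illustration, not an official Azure OpenAI sample: the policy text, the helper function name, and the message wording are all hypothetical, and the resulting messages list is what you would pass to a chat completions endpoint (such as an Azure OpenAI deployment).

```python
# Hypothetical grounding document the base model could not know on its own.
HR_POLICY = (
    "Employees accrue 1.5 vacation days per month. "
    "Unused days carry over up to a maximum of 10 days per year."
)

def build_grounded_messages(question: str, grounding_text: str) -> list[dict]:
    """Embed grounding data directly in the prompt so the model
    answers from the supplied facts rather than its general knowledge."""
    return [
        # Instruct the model to stay within the provided context.
        {"role": "system",
         "content": "Answer using ONLY the provided context."},
        # The grounding data travels inside the prompt itself.
        {"role": "user",
         "content": f"Context:\n{grounding_text}\n\nQuestion: {question}"},
    ]

messages = build_grounded_messages(
    "How many unused vacation days carry over?", HR_POLICY
)
print(messages[1]["content"])
```

The key point is that grounding happens at prompt time: the model itself is unchanged, and only the request carries the extra factual context.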
This technique differs from other prompt engineering concepts:
A. Providing examples (few-shot prompting) shows the model sample inputs and outputs to guide formatting or style, not factual context.
B. Fine-tuning involves retraining the model with labeled data to permanently adjust its behavior — it’s not a prompt-based technique.
D. System messages define the model’s role, tone, or style (for example, “You are a helpful assistant”) but do not add factual context.
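The distinctions above can be made concrete by comparing the message lists each technique produces. This is an illustrative sketch with placeholder strings, not official documentation; fine-tuning (B) is omitted because it retrains the model offline rather than changing the prompt.

```python
# A. Few-shot prompting: sample input/output pairs guide format and style,
# not factual content.
few_shot = [
    {"role": "user", "content": "Classify sentiment: 'Great service!'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify sentiment: 'Slow and rude.'"},
]

# D. System message: defines role and tone only; the model still has no
# access to organization-specific facts.
system_only = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is our refund policy?"},
]

# Grounding: the factual reference text is embedded in the prompt itself,
# so the model can answer reliably from it.
grounded = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user",
     "content": "Context: Refunds are issued within 30 days of purchase.\n\n"
                "What is our refund policy?"},
]
```

Only the grounded prompt carries the facts the model needs; the other two shape behavior or format without supplying factual context.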
Therefore, when you provide contextual information (like product details, policy documents, or reference text) within a prompt to enhance the quality and factual reliability of the model’s responses, you are applying the grounding data technique.