According to the Microsoft Azure AI Fundamentals (AI-900) curriculum and Azure OpenAI documentation, the appropriate service for detecting and managing harmful, unsafe, or inappropriate content in text, images, or other generative AI outputs is Azure AI Content Safety.
Azure AI Content Safety is designed to automatically detect potentially harmful material across categories such as hate, violence, self-harm, and sexual content. It helps ensure that generative AI applications like chatbots, image generators, and content creation tools comply with Microsoft’s Responsible AI principles, specifically Reliability & Safety and Accountability.
This service integrates directly with the Azure OpenAI Service: when developers build AI solutions using models like GPT-4 or DALL·E, Content Safety can filter and moderate both input prompts and model outputs, protecting users from unsafe or offensive generated content.
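To make the moderation flow concrete, here is a minimal, hypothetical sketch of how an application might act on the per-category severity scores that Content Safety returns for a prompt or a model output. The service scores categories such as Hate, SelfHarm, Sexual, and Violence on a numeric severity scale; the `moderate` function name and the threshold value below are illustrative assumptions, not part of the Azure SDK.

```python
# Hypothetical moderation sketch (not the Azure SDK): decide whether to
# allow content based on per-category severity scores like those returned
# by Azure AI Content Safety. Category names match the service; the
# threshold and function are illustrative.

CATEGORIES = ("Hate", "SelfHarm", "Sexual", "Violence")

def moderate(severities: dict[str, int], threshold: int = 2) -> dict:
    """Block content when any harm category's severity meets the threshold."""
    flagged = {c: s for c, s in severities.items()
               if c in CATEGORIES and s >= threshold}
    return {"allowed": not flagged, "flagged": flagged}

# Example: severity scores for a model output (values are made up here).
result = moderate({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4})
print(result)  # {'allowed': False, 'flagged': {'Violence': 4}}
```

In a real solution, the severity dictionary would come from a Content Safety analysis call, and the application would choose thresholds per category based on its own safety policy.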
Let’s analyze why the other options are incorrect:
A. Face – The Face service detects and analyzes human faces in images or videos. It is unrelated to moderating harmful textual or generative content.
B. Video Analysis – This service analyzes video streams to detect objects, actions, or events but not inappropriate or harmful text or imagery from AI models.
C. Language – The Azure AI Language service focuses on text understanding tasks like sentiment analysis, entity recognition, and key phrase extraction, not content safety filtering.
Therefore, per Microsoft Learn’s official AI-900 guidance, when identifying or filtering harmful content in a generative AI solution built with Azure OpenAI, the correct and verified service to use is Azure AI Content Safety.