The correct answer is A because each foundation model available on Amazon Bedrock (for example, Claude, Titan, Mistral, and Meta Llama) has its own maximum token limit, which caps the number of tokens the model can accept in the prompt and generate in the response.
From AWS documentation:
"Each model in Amazon Bedrock has its own maximum token limit. Prompts exceeding the limit must be truncated or adjusted depending on the selected model."
Explanation of other options:
B. On-demand inference is a platform-level invocation option that is supported uniformly across models on Bedrock, so it is not a model-specific constraint.
C. All Bedrock LLMs support randomness control through the temperature and top-p inference parameters, so the availability of these controls does not differ by model (see the sketch after this list).
D. Amazon Bedrock Guardrails are designed to work across supported models, though specific behaviors may vary slightly.
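For reference, temperature, top-p, and the per-request cap on generated tokens are passed as inference parameters at invocation time. Below is a minimal sketch using the boto3 Converse API, assuming AWS credentials and model access are already configured; the region, model ID, prompt, and parameter values are examples only.

```python
import boto3

# Minimal sketch of invoking a Bedrock model with randomness controls.
# Assumes credentials and model access are configured; values are examples.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize Amazon Bedrock in one sentence."}]}
    ],
    inferenceConfig={
        "maxTokens": 256,    # cap on generated tokens for this request
        "temperature": 0.5,  # randomness control
        "topP": 0.9,         # nucleus sampling control
    },
)

print(response["output"]["message"]["content"][0]["text"])
```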
Referenced AWS AI/ML Documents and Study Guides:
Amazon Bedrock Model Comparison Guide
AWS Prompt Engineering and LLM Deployment Documentation
AWS ML Specialty Study Guide – Bedrock Model Capabilities