AWS Certified AI Practitioner (AIF-C01) Exam – Topic 1, Question 9 Discussion
Question #: 9
Topic #: 1
A company wants to generate synthetic data responses for multiple prompts from a large volume of data. The company wants to use an API method to generate the responses. The company does not need the responses immediately. Which solution will meet these requirements?
A. Input the prompts into the model. Generate responses by using real-time inference.
B. Use Amazon Bedrock batch inference. Generate responses asynchronously.
C. Use Amazon Bedrock agents. Build an agent system to process the prompts recursively.
D. Use AWS Lambda functions to automate the task. Submit one prompt after another and store each response.
The correct answer is B – Use Amazon Bedrock batch inference. Batch inference generates large volumes of model outputs asynchronously through the API and is intended for workloads that do not require low-latency responses. According to the Amazon Bedrock documentation, it is ideal for high-volume jobs that can tolerate delay, such as bulk content generation or summarization. Unlike real-time inference (option A), which is designed for immediate responses, batch inference processes requests in bulk at lower cost, and AWS handles the queuing, processing, and scaling automatically. Bedrock Agents (option C) are designed for workflow orchestration, not large-scale generation. AWS Lambda (option D) can automate submission but is not optimized for high-volume LLM calls. Batch inference offers the cost efficiency, scalability, and simplicity this delayed, asynchronous use case requires.
Reference: AWS ML Specialty Study Guide – Scalable Inference Options
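For illustration, here is a minimal sketch of how such a batch job might be submitted with boto3. It assumes the prompts have already been written to S3 as a JSONL file of `recordId`/`modelInput` records and that an IAM role with S3 access exists; the bucket names, role ARN, and model ID below are placeholders, not values from the question.

```python
import boto3

# Batch inference reads prompts from S3 and writes responses back to S3.
# Each line of the input JSONL file is a record such as:
#   {"recordId": "001", "modelInput": {"inputText": "Summarize ...",
#                                      "textGenerationConfig": {"maxTokenCount": 512}}}
# (the modelInput shape must match the chosen model's native request format)

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder ARNs/URIs -- substitute real resources before running.
response = bedrock.create_model_invocation_job(
    jobName="synthetic-data-batch-job",
    modelId="amazon.titan-text-express-v1",
    roleArn="arn:aws:iam::123456789012:role/BedrockBatchRole",
    inputDataConfig={
        "s3InputDataConfig": {"s3Uri": "s3://my-bucket/prompts/input.jsonl"}
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://my-bucket/responses/"}
    },
)

job_arn = response["jobArn"]

# The job runs asynchronously; poll its status rather than waiting for a reply.
status = bedrock.get_model_invocation_job(jobIdentifier=job_arn)["status"]
print(job_arn, status)
```

Because the job is asynchronous, the caller receives a job ARN immediately and collects the generated responses from the output S3 location once the status reaches Completed, which matches the requirement that responses are not needed right away.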