Amazon Web Services AWS Certified Machine Learning Engineer - Associate MLA-C01 Question # 11 Topic 2 Discussion

Question #: 11
Topic #: 2

A company has trained an ML model in Amazon SageMaker. The company needs to host the model to provide inferences in a production environment.

The model must be highly available and must respond with minimum latency. The size of each request will be between 1 KB and 3 MB. The model will receive unpredictable bursts of requests during the day. The inferences must adapt proportionally to the changes in demand.

How should the company deploy the model into production to meet these requirements?


A. Create a SageMaker real-time inference endpoint. Configure auto scaling. Configure the endpoint to serve the existing model.


B. Deploy the model on an Amazon Elastic Container Service (Amazon ECS) cluster. Use ECS scheduled scaling based on the CPU utilization of the ECS cluster.


C. Install the SageMaker Operator on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Deploy the model in Amazon EKS. Set up horizontal pod autoscaling to scale replicas based on the memory metric.


D. Use Spot Instances with a Spot Fleet behind an Application Load Balancer (ALB) for inferences. Use the ALBRequestCountPerTarget metric as the metric for auto scaling.
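
For reference while discussing the options, here is a minimal boto3 sketch of the pattern option A describes: a SageMaker real-time inference endpoint with target-tracking auto scaling attached through Application Auto Scaling. This is an illustrative sketch, not an official solution; every name (model, endpoint config, endpoint, instance type, capacities, target value, payload) is a placeholder assumption, not a value from the question.

```python
import boto3

sagemaker = boto3.client("sagemaker")
autoscaling = boto3.client("application-autoscaling")

# 1. Create an endpoint configuration that points at the already-trained model.
#    "demo-model" stands in for the existing SageMaker model name.
sagemaker.create_endpoint_config(
    EndpointConfigName="demo-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "demo-model",
            "InstanceType": "ml.m5.xlarge",   # example instance type
            "InitialInstanceCount": 2,        # more than one instance for high availability
        }
    ],
)

# 2. Create the real-time inference endpoint and wait until it is in service.
sagemaker.create_endpoint(
    EndpointName="demo-endpoint",
    EndpointConfigName="demo-endpoint-config",
)
sagemaker.get_waiter("endpoint_in_service").wait(EndpointName="demo-endpoint")

# 3. Register the endpoint variant's instance count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId="endpoint/demo-endpoint/variant/AllTraffic",
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# 4. Attach a target-tracking policy so capacity follows request volume.
autoscaling.put_scaling_policy(
    PolicyName="demo-invocations-tracking",
    ServiceNamespace="sagemaker",
    ResourceId="endpoint/demo-endpoint/variant/AllTraffic",
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # target invocations per instance per minute; tune per model
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)

# 5. Clients then call the endpoint for low-latency inference; real-time endpoints
#    accept payloads up to 6 MB, which covers the 1 KB - 3 MB requests in the question.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="demo-endpoint",
    ContentType="application/json",
    Body=b'{"features": [1.0, 2.0, 3.0]}',  # placeholder payload
)
```

SageMakerVariantInvocationsPerInstance is the predefined target-tracking metric for SageMaker endpoint variants, so instance count scales proportionally with request volume, which is what handles unpredictable bursts; the minimum capacity of 2 instances is an assumed value chosen so the endpoint stays available if one instance fails.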

