
Amazon Web Services AWS Certified Machine Learning Engineer - Associate (MLA-C01) Questions and Answers from CertsForce

Viewing page 2 of 7 (questions 11-20)
Question # 11:

A company has trained an ML model in Amazon SageMaker. The company needs to host the model to provide inferences in a production environment.

The model must be highly available and must respond with minimum latency. The size of each request will be between 1 KB and 3 MB. The model will receive unpredictable bursts of requests during the day. The inferences must adapt proportionally to the changes in demand.

How should the company deploy the model into production to meet these requirements?

Options:

A.

Create a SageMaker real-time inference endpoint. Configure auto scaling. Configure the endpoint to host the existing model.


B.

Deploy the model on an Amazon Elastic Container Service (Amazon ECS) cluster. Use ECS scheduled scaling that is based on the CPU utilization of the ECS cluster.


C.

Install the SageMaker Operator on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Deploy the model in Amazon EKS. Set horizontal pod autoscaling to scale replicas based on the memory metric.


D.

Use Spot Instances with a Spot Fleet behind an Application Load Balancer (ALB) for inferences. Use the ALBRequestCountPerTarget metric as the metric for auto scaling.


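For reference, a real-time inference endpoint can be paired with target-tracking auto scaling through Application Auto Scaling so that capacity follows the request volume. The boto3 sketch below assumes a hypothetical endpoint name, variant name, and capacity limits.

```python
import boto3

# Hypothetical names; replace with the existing endpoint and its production variant.
ENDPOINT_NAME = "my-model-endpoint"
VARIANT_NAME = "AllTraffic"
RESOURCE_ID = f"endpoint/{ENDPOINT_NAME}/variant/{VARIANT_NAME}"

autoscaling = boto3.client("application-autoscaling")

# Register the endpoint variant as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=RESOURCE_ID,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=2,   # keep at least two instances for high availability
    MaxCapacity=10,  # upper bound for bursts
)

# Target-tracking policy: add or remove instances to hold invocations per instance
# near the target, so capacity adapts to unpredictable bursts of requests.
autoscaling.put_scaling_policy(
    PolicyName="InvocationsPerInstanceTargetTracking",
    ServiceNamespace="sagemaker",
    ResourceId=RESOURCE_ID,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
)
```
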
Question # 12:

An ML engineer develops a neural network model to predict whether customers will continue to subscribe to a service. The model performs well on training data. However, the accuracy of the model decreases significantly on evaluation data.

The ML engineer must resolve the model performance issue.

Which solution will meet this requirement?

Options:

A.

Penalize large weights by using L1 or L2 regularization.


B.

Remove dropout layers from the neural network.


C.

Train the model for longer by increasing the number of epochs.


D.

Capture complex patterns by increasing the number of layers.


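The gap between training accuracy and evaluation accuracy points to overfitting. As a minimal Keras sketch of penalizing large weights, the snippet below adds L2 and L1 regularization to hypothetical layers; the layer sizes, input width, and penalty strengths are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(
        64,
        activation="relu",
        kernel_regularizer=regularizers.l2(1e-3),  # L2 penalty on the weights
        input_shape=(20,),                         # placeholder feature count
    ),
    layers.Dense(
        32,
        activation="relu",
        kernel_regularizer=regularizers.l1(1e-4),  # L1 penalty is the other option
    ),
    layers.Dense(1, activation="sigmoid"),         # subscription churn probability
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```
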
Question # 13:

A company is developing an ML model for a customer. The training data is stored in an Amazon S3 bucket in the customer's AWS account (Account A). The company runs Amazon SageMaker AI training jobs in a separate AWS account (Account B).

The company defines an S3 bucket policy and an IAM policy to allow reads from the S3 bucket.

Which additional steps will meet the cross-account access requirement?

Options:

A.

Create the S3 bucket policy in Account A. Attach the IAM policy to an IAM role that SageMaker AI uses in Account A.


B.

Create the S3 bucket policy in Account A. Attach the IAM policy to an IAM role that SageMaker AI uses in Account B.


C.

Create the S3 bucket policy in Account B. Attach the IAM policy to an IAM role that SageMaker AI uses in Account A.


D.

Create the S3 bucket policy in Account B. Attach the IAM policy to an IAM role that SageMaker AI uses in Account B.


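As a hedged sketch of the cross-account pattern in the options, the snippet below creates a bucket policy in Account A that grants read access to a SageMaker execution role in Account B. The account ID, bucket name, and role name are hypothetical; a matching IAM identity policy allowing the same S3 read actions would be attached to that role in Account B.

```python
import json
import boto3

# Hypothetical identifiers for illustration only.
BUCKET = "customer-training-data"  # bucket in Account A
ACCOUNT_B_ROLE = "arn:aws:iam::222222222222:role/SageMakerExecutionRole"

# Bucket policy created in Account A: lets the SageMaker execution role in
# Account B read the training objects.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": ACCOUNT_B_ROLE},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

# Applied with credentials for Account A (the bucket owner).
s3_account_a = boto3.client("s3")
s3_account_a.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```
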
Question # 14:

A company uses a batch processing solution for daily analytics. The company wants to provide near real-time updates, use open-source technology, and avoid managing or scaling infrastructure.

Which solution will meet these requirements?

Options:

A.

Create Amazon Managed Streaming for Apache Kafka (Amazon MSK) Serverless clusters.


B.

Create Amazon MSK Provisioned clusters.


C.

Create Amazon Kinesis Data Streams with Application Auto Scaling.


D.

Create self-hosted Apache Flink applications on Amazon EC2.


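For context, an MSK Serverless cluster exposes Apache Kafka without any capacity to provision or scale. The boto3 sketch below creates one with create_cluster_v2; the subnet and security group IDs are placeholders, and the exact request fields should be checked against the current MSK API.

```python
import boto3

kafka = boto3.client("kafka")  # Amazon MSK control-plane client

response = kafka.create_cluster_v2(
    ClusterName="analytics-events",
    Serverless={
        "VpcConfigs": [
            {
                "SubnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
                "SecurityGroupIds": ["sg-0123456789abcdef0"],
            }
        ],
        # MSK Serverless clusters use IAM-based SASL authentication.
        "ClientAuthentication": {"Sasl": {"Iam": {"Enabled": True}}},
    },
)
print(response["ClusterArn"])
```
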
Question # 15:

A company wants to predict the success of advertising campaigns by considering the color scheme of each advertisement. An ML engineer is preparing data for a neural network model. The dataset includes color information as categorical data.

Which technique for feature engineering should the ML engineer use for the model?

Options:

A.

Apply label encoding to the color categories, automatically assigning each color a unique integer.


B.

Implement padding to ensure that all color feature vectors have the same length.


C.

Perform dimensionality reduction on the color categories.


D.

One-hot encode the color categories to transform the color scheme feature into a binary matrix.


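As a minimal illustration of one-hot encoding with pandas, the snippet below turns a hypothetical color_scheme column into a binary matrix with one column per color.

```python
import pandas as pd

# Hypothetical advertisement records; color_scheme is the categorical feature.
ads = pd.DataFrame({
    "ad_id": [1, 2, 3],
    "color_scheme": ["red", "blue", "green"],
})

# One-hot encode the color categories into a binary matrix.
encoded = pd.get_dummies(ads, columns=["color_scheme"], dtype=int)
print(encoded)
#    ad_id  color_scheme_blue  color_scheme_green  color_scheme_red
# 0      1                  0                   0                 1
# 1      2                  1                   0                 0
# 2      3                  0                   1                 0
```
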
Question # 16:

An ML engineer is using an Amazon SageMaker Studio notebook to train a neural network by creating an estimator. The estimator runs a Python training script that uses Distributed Data Parallel (DDP) on a single instance that has more than one GPU.

The ML engineer discovers that the training script is underutilizing GPU resources. The ML engineer must identify the point in the training script where resource utilization can be optimized.

Which solution will meet this requirement?

Options:

A.

Use Amazon CloudWatch metrics to create a report that describes GPU utilization over time.


B.

Add SageMaker Profiler annotations to the training script. Run the script and generate a report from the results.


C.

Use AWS CloudTrail to create a report that describes GPU utilization and GPU memory utilization over time.


D.

Create a default monitor in Amazon SageMaker Model Monitor and suggest a baseline. Generate a report based on the constraints and statistics the monitor generates.


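SageMaker Profiler attributes GPU time and idle time to specific regions of the training script by annotating sections of the training loop. The sketch below uses the native PyTorch profiler as an illustrative stand-in for that annotation pattern; it is not the SageMaker Profiler API itself, and the section names and training loop are placeholders.

```python
import torch
from torch.profiler import ProfilerActivity, profile, record_function

def train_one_epoch(model, loader, optimizer, loss_fn, device):
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        for batch, labels in loader:
            with record_function("data_transfer"):      # annotated section
                batch, labels = batch.to(device), labels.to(device)
            with record_function("forward_backward"):   # annotated section
                optimizer.zero_grad()
                loss = loss_fn(model(batch), labels)
                loss.backward()
                optimizer.step()
    # The report shows which annotated sections dominate and where the GPUs sit idle.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```
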
Question # 17:

An ML engineer normalized training data by using min-max normalization in AWS Glue DataBrew. The ML engineer must normalize the production inference data in the same way as the training data before passing it to the model for predictions.

Which solution will meet this requirement?

Options:

A.

Apply statistics from a well-known dataset to normalize the production samples.


B.

Keep the min-max normalization statistics from the training set. Use these values to normalize the production samples.


C.

Calculate a new set of min-max normalization statistics from a batch of production samples. Use these values to normalize all the production samples.


D.

Calculate a new set of min-max normalization statistics from each production sample. Use these values to normalize all the production samples.


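A minimal sketch of reusing the training-time statistics at inference time: the min and max values computed from the training set (hypothetical numbers below) are stored with the model and applied unchanged to every production sample.

```python
# Min-max statistics computed once from the training set; in practice they would
# be persisted alongside the model, for example as JSON in Amazon S3.
training_stats = {
    "amount": {"min": 0.0, "max": 5000.0},
    "age": {"min": 18.0, "max": 95.0},
}

def normalize(sample: dict, stats: dict) -> dict:
    """Apply the training min-max statistics to one production sample."""
    normalized = {}
    for feature, value in sample.items():
        lo, hi = stats[feature]["min"], stats[feature]["max"]
        normalized[feature] = (value - lo) / (hi - lo)
    return normalized

print(normalize({"amount": 1250.0, "age": 40.0}, training_stats))
# {'amount': 0.25, 'age': 0.2857...}
```
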
Question # 18:

An ML engineer is tuning an image classification model that performs poorly on one of two classes. The poorly performing class represents an extremely small fraction of the training dataset.

Which solution will improve the model’s performance?

Options:

A.

Optimize for accuracy. Use image augmentation on the less common images.


B.

Optimize for F1 score. Use image augmentation on the less common images.


C.

Optimize for accuracy. Use SMOTE to generate synthetic images.


D.

Optimize for F1 score. Use SMOTE to generate synthetic images.


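As an illustration of combining augmentation of the under-represented class with an imbalance-aware metric, the sketch below builds a small Keras augmentation pipeline for minority-class images and scores predictions with F1; the image shape, sample data, and augmentation parameters are placeholders.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

# Augmentation pipeline applied to images from the minority class.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

minority_images = np.random.rand(32, 64, 64, 3).astype("float32")  # placeholder batch
augmented = augment(minority_images, training=True).numpy()

# After training, evaluate with F1 score, which reflects minority-class errors
# that plain accuracy can hide.
y_true = np.array([0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 1, 0, 1])
print(f1_score(y_true, y_pred))  # 0.8
```
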
Question # 19:

A company is gathering audio, video, and text data in various languages. The company needs to use a large language model (LLM) to summarize the gathered data that is in Spanish.

Which solution will meet these requirements in the LEAST amount of time?

Options:

A.

Train and deploy a model in Amazon SageMaker to convert the data into English text. Train and deploy an LLM in SageMaker to summarize the text.


B.

Use Amazon Transcribe and Amazon Translate to convert the data into English text. Use Amazon Bedrock with the Jurassic model to summarize the text.


C.

Use Amazon Rekognition and Amazon Translate to convert the data into English text. Use Amazon Bedrock with the Anthropic Claude model to summarize the text.


D.

Use Amazon Comprehend and Amazon Translate to convert the data into English text. Use Amazon Bedrock with the Stable Diffusion model to summarize the text.


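A hedged sketch of the managed-services pattern the options describe: text that has already been extracted (for audio and video, Amazon Transcribe would produce it) is translated from Spanish with Amazon Translate and then summarized through the Amazon Bedrock Converse API. The model ID and input text are examples only.

```python
import boto3

translate = boto3.client("translate")
bedrock = boto3.client("bedrock-runtime")

spanish_text = "Texto de ejemplo en español."  # placeholder Spanish input

# Translate the Spanish text into English.
translated = translate.translate_text(
    Text=spanish_text,
    SourceLanguageCode="es",
    TargetLanguageCode="en",
)["TranslatedText"]

# Summarize the English text with a model hosted on Amazon Bedrock.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": f"Summarize the following text:\n\n{translated}"}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```
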
Question # 20:

Case study

An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3.

The dataset has a class imbalance that affects the learning of the model's algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data.

Before the ML engineer trains the model, the ML engineer must resolve the issue of the imbalanced data.

Which solution will meet this requirement with the LEAST operational effort?

Options:

A.

Use Amazon Athena to identify patterns that contribute to the imbalance. Adjust the dataset accordingly.


B.

Use Amazon SageMaker Studio Classic built-in algorithms to process the imbalanced dataset.


C.

Use AWS Glue DataBrew built-in features to oversample the minority class.


D.

Use the Amazon SageMaker Data Wrangler balance data operation to oversample the minority class.


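The Data Wrangler balance data operation is applied through the visual interface rather than code. Purely to illustrate what oversampling the minority class does to the label distribution, here is a small pandas sketch on hypothetical fraud data.

```python
import pandas as pd

# Hypothetical imbalanced fraud dataset: about 1% positive class.
df = pd.DataFrame({
    "amount": range(1000),
    "is_fraud": [1] * 10 + [0] * 990,
})

majority = df[df["is_fraud"] == 0]
minority = df[df["is_fraud"] == 1]

# Random oversampling: resample the minority class with replacement until it
# matches the majority class.
oversampled_minority = minority.sample(n=len(majority), replace=True, random_state=42)
balanced = pd.concat([majority, oversampled_minority]).sample(frac=1, random_state=42)

print(df["is_fraud"].value_counts().to_dict())        # {0: 990, 1: 10}
print(balanced["is_fraud"].value_counts().to_dict())  # both classes now have 990 rows
```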