
Google Professional Machine Learning Engineer (Professional-Machine-Learning-Engineer) Exam Discussion

Question #: 25
Topic #: 3

You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an “Out of Memory” error. What should you do?


A. Use batch prediction mode instead of online mode.

B. Send the request again with a smaller batch of instances.

C. Use base64 to encode your data before using it for prediction.

D. Apply for a quota increase for the number of prediction requests.



Contribute your Thoughts:


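For reference, option B's approach of splitting a large online prediction payload into smaller per-request batches can be sketched with the Vertex AI Python SDK. This is a minimal sketch, not the official recommended implementation; the project, region, endpoint ID, chunk size, and instance payloads below are hypothetical placeholders.

```python
from google.cloud import aiplatform

# Hypothetical project, region, and endpoint values for illustration only.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")  # ID of the deployed model's endpoint


def predict_in_chunks(instances, chunk_size=10):
    """Send online prediction requests in smaller batches so each request
    stays within the deployed machine's memory (option B's approach)."""
    predictions = []
    for start in range(0, len(instances), chunk_size):
        chunk = instances[start:start + chunk_size]
        response = endpoint.predict(instances=chunk)
        predictions.extend(response.predictions)
    return predictions


# Example: split a large payload into requests of 10 instances each.
all_instances = [{"feature_1": 0.5, "feature_2": 1.2}] * 500  # placeholder data
results = predict_in_chunks(all_instances, chunk_size=10)
```

The chunk size is a tuning knob: smaller chunks reduce per-request memory at the cost of more round trips, so in practice you would pick the largest value that the deployed machine type handles without errors.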