Google Professional Machine Learning Engineer: Topic 6, Question #50 Discussion

Question #: 50
Topic #: 6

You work as an ML researcher at an investment bank and are experimenting with the Gemini large language model (LLM). You plan to deploy the model for an internal use case and need full control of the model’s underlying infrastructure while minimizing inference time. Which serving configuration should you use for this task?


A. Deploy the model on a Vertex AI endpoint using one-click deployment in Model Garden.

B. Deploy the model on a Google Kubernetes Engine (GKE) cluster manually by creating a custom YAML manifest.

C. Deploy the model on a Vertex AI endpoint manually by creating a custom inference container.

D. Deploy the model on a Google Kubernetes Engine (GKE) cluster using the deployment options in Model Garden.
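For context on option B, a manual GKE deployment means writing the Kubernetes manifest yourself. A minimal sketch is below; the container image, GPU request, and port are illustrative placeholders, not values from the question or from any Google-published manifest.

```yaml
# Minimal illustrative Deployment manifest for serving an LLM on GKE.
# Image name, GPU count, and port are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-serving
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-serving
  template:
    metadata:
      labels:
        app: llm-serving
    spec:
      containers:
      - name: inference-server
        # Placeholder image path; a real deployment would point at an
        # inference server image in Artifact Registry.
        image: us-docker.pkg.dev/example-project/llm/inference:latest
        ports:
        - containerPort: 8080
        resources:
          limits:
            nvidia.com/gpu: 1  # request a GPU for low-latency inference
```

Such a manifest would be applied with `kubectl apply -f deployment.yaml` against the cluster. This hands-on approach gives full infrastructure control but requires maintaining the manifest yourself, which is the trade-off the question is probing.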


