Pass the Databricks Generative AI Engineer Associate (Databricks-Generative-AI-Engineer-Associate) exam questions and answers with CertsForce

Viewing page 1 of 2 (questions 1-10)
Question # 1:

A Generative AI Engineer is developing a chatbot designed to assist users with insurance-related queries. The chatbot is built on a large language model (LLM) and is conversational. However, to maintain the chatbot’s focus and to comply with company policy, it must not provide responses to questions about politics. Instead, when presented with political inquiries, the chatbot should respond with a standard message:

“Sorry, I cannot answer that. I am a chatbot that can only answer questions around insurance.”

Which framework type should be implemented to solve this?

Options:

A.

Safety Guardrail


B.

Security Guardrail


C.

Contextual Guardrail


D.

Compliance Guardrail
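Such a topic guardrail can be sketched as a pre-LLM check that intercepts off-topic questions before they reach the model. The keyword list and function names below are illustrative only; production systems typically use a topic classifier or a managed guardrail service rather than hand-picked keywords.

```python
from typing import Optional

# Illustrative keyword list -- a real guardrail would use a classifier.
POLITICS_KEYWORDS = {"election", "senator", "president", "vote", "political party"}

REFUSAL = ("Sorry, I cannot answer that. I am a chatbot that can only "
           "answer questions around insurance.")

def guardrail_check(user_message: str) -> Optional[str]:
    """Return the canned refusal if the message looks political, else None."""
    text = user_message.lower()
    if any(keyword in text for keyword in POLITICS_KEYWORDS):
        return REFUSAL
    return None

def answer(user_message: str, llm_call) -> str:
    """Run the guardrail before handing the message to the LLM."""
    refusal = guardrail_check(user_message)
    if refusal is not None:
        return refusal
    return llm_call(user_message)
```

The key design point is that the check runs before the model is invoked, so a political question never reaches the LLM at all.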


Question # 2:

A Generative AI Engineer has built an LLM-based system that automatically translates user text between two languages. They now want to benchmark multiple LLMs on this task and pick the best one. They have an evaluation set with known high-quality translation examples, and they want to evaluate each LLM against it using a suitable metric.

Which metric should they choose for this evaluation?

Options:

A.

ROUGE metric


B.

BLEU metric


C.

NDCG metric


D.

RECALL metric
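BLEU, one of the listed metrics, scores a candidate translation against reference text via clipped n-gram precision and a brevity penalty. A self-contained toy implementation is sketched below; real benchmarks should use an established library such as sacrebleu or NLTK rather than this simplified version.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference: str, candidate: str, max_n: int = 4) -> float:
    """Toy BLEU: geometric mean of clipped n-gram precisions times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip candidate n-gram counts by their counts in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if clipped == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_prec_sum += math.log(clipped / total)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec_sum / max_n)
```

A perfect match scores 1.0; a candidate sharing no words with the reference scores 0.0.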


Question # 3:

A Generative AI Engineer is working with a retail company that wants to enhance its customer experience by automatically handling common customer inquiries. They are building an LLM-powered AI solution that should improve response times while maintaining personalized interactions, and they need to define the appropriate input and LLM task to do this.

Which input/output pair will do this?

Options:

A.

Input: Customer reviews; Output: Group the reviews by user, aggregate a per-user average rating, then respond


B.

Input: Customer service chat logs; Output: Group the chat logs by user, summarize each user's interactions, then respond


C.

Input: Customer service chat logs; Output: Find the answers to similar questions and respond with a summary


D.

Input: Customer reviews; Output: Classify review sentiment
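Option C's pattern, matching a new inquiry against answers to similar past questions, can be sketched with a toy lookup. The Jaccard word-overlap measure and the `PAST_QA` entries below are hypothetical stand-ins for embedding search over real chat logs.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets -- a crude stand-in for embedding similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical mini knowledge base mined from past chat logs.
PAST_QA = {
    "how do i return an item": "Returns are accepted within 30 days with a receipt.",
    "when will my order arrive": "Standard shipping takes 3-5 business days.",
}

def respond(question: str) -> str:
    """Answer a new inquiry with the answer to the most similar past question."""
    best = max(PAST_QA, key=lambda q: similarity(q, question))
    return PAST_QA[best]
```

Because the answer comes from previously resolved inquiries, response time improves while the reply stays relevant to the customer's actual question.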


Question # 4:

A Generative AI Engineer is interfacing with an LLM that has been trained on customer calls inquiring about product availability. The LLM is designed to output “In Stock” if the product is available, or only the term “Out of Stock” if not.

Which prompt will allow the engineer to obtain the correct call classification labels?

Options:

A.

Respond with “In Stock” if the customer asks for a product.


B.

You will be given a customer call transcript where the customer asks about product availability. The outputs are either “In Stock” or “Out of Stock”. Format the output in JSON, for example: {“call_id”: “123”, “label”: “In Stock”}.


C.

Respond with “Out of Stock” if the customer asks for a product.


D.

You will be given a customer call transcript where the customer inquires about product availability. Respond with “In Stock” if the product is available or “Out of Stock” if not.
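A strict classification prompt of this kind can be assembled and its reply normalized as sketched below; the template wording and the `parse_label` fallback behavior are illustrative choices, not part of any particular product.

```python
# Prompt template that states the task and constrains the output to two labels.
CLASSIFY_PROMPT = (
    "You will be given a customer call transcript where the customer inquires "
    'about product availability. Respond with "In Stock" if the product is '
    'available or "Out of Stock" if not.\n\nTranscript:\n{transcript}'
)

def build_prompt(transcript: str) -> str:
    return CLASSIFY_PROMPT.format(transcript=transcript)

def parse_label(llm_output: str) -> str:
    """Map the raw model reply onto the two allowed labels.

    Anything that does not explicitly say "out of stock" is treated as
    "In Stock" -- a deliberate (and illustrative) fallback choice.
    """
    return "Out of Stock" if "out of stock" in llm_output.lower() else "In Stock"
```

Even with a constrained prompt, normalizing the reply guards against the model adding stray punctuation or casing.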


Question # 5:

A Generative AI Engineer wants their fine-tuned LLMs in their prod Databricks workspace to be available for testing in their dev workspace as well. All of their workspaces are Unity Catalog enabled, and they are currently logging their models into the Model Registry in MLflow.

What is the most cost-effective and secure option for the Generative AI Engineer to accomplish their goal?

Options:

A.

Use an external model registry which can be accessed from all workspaces


B.

Setup a script to export the model from prod and import it to dev.


C.

Setup a duplicate training pipeline in dev, so that an identical model is available in dev.


D.

Use MLflow to log the model directly into Unity Catalog, and enable READ access in the dev workspace to the model.
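Logging a model into Unity Catalog rather than the workspace-local registry can be sketched as below. The MLflow calls are shown as comments because they require a live Databricks workspace, and every catalog, schema, model, and principal name here is hypothetical.

```python
# Hypothetical three-level Unity Catalog name -- replace with your own.
CATALOG, SCHEMA, MODEL = "prod_catalog", "llm_models", "support_chatbot"

def uc_model_name(catalog: str, schema: str, model: str) -> str:
    """Unity Catalog registered models use catalog.schema.model names."""
    return f"{catalog}.{schema}.{model}"

# The calls below need a Databricks workspace, so they are shown as comments:
# import mlflow
# mlflow.set_registry_uri("databricks-uc")   # target Unity Catalog, not the
#                                            # workspace-local Model Registry
# mlflow.pyfunc.log_model(
#     artifact_path="model",
#     python_model=chatbot_model,            # hypothetical model object
#     registered_model_name=uc_model_name(CATALOG, SCHEMA, MODEL),
# )
#
# Access from the dev workspace is then a matter of Unity Catalog grants, e.g.:
# GRANT EXECUTE ON MODEL prod_catalog.llm_models.support_chatbot
#   TO `dev-service-principal`;
```

Because Unity Catalog is shared across workspaces, no copying, exporting, or duplicate training is needed; access control is handled entirely by grants.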


Question # 6:

A Generative AI Engineer is ready to deploy an LLM application written using Foundation Model APIs. They want to follow security best practices for production scenarios.

Which authentication method should they choose?

Options:

A.

Use an access token belonging to service principals


B.

Use a frequently rotated access token belonging to either a workspace user or a service principal


C.

Use OAuth machine-to-machine authentication


D.

Use an access token belonging to any workspace user
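Machine-to-machine OAuth uses a client-credentials grant against the workspace's token endpoint, exchanging a service principal's client ID and secret for a short-lived access token instead of relying on long-lived personal tokens. The sketch below only builds the request body; the workspace host is hypothetical, and the actual POST is left as a comment since it needs real service-principal credentials (verify the endpoint path against your workspace's documentation).

```python
from urllib.parse import urlencode

WORKSPACE = "https://example.cloud.databricks.com"  # hypothetical host
TOKEN_URL = f"{WORKSPACE}/oidc/v1/token"            # verify for your workspace

def token_request_body(client_id: str, client_secret: str) -> str:
    """Form-encoded client-credentials grant body for M2M OAuth."""
    return urlencode({
        "grant_type": "client_credentials",
        "scope": "all-apis",
        "client_id": client_id,
        "client_secret": client_secret,
    })

# In a real deployment you would POST this body to TOKEN_URL with content type
# application/x-www-form-urlencoded and use the returned access_token as a
# short-lived bearer token -- no long-lived personal access tokens involved.
```

The security benefit over options A, B, and D is that tokens are minted on demand and expire quickly, and no human user's identity is embedded in the application.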


Question # 7:

A Generative AI Engineer is building a RAG application that will rely on context retrieved from source documents that are currently in PDF format. These PDFs can contain both text and images. They want to develop a solution using the fewest lines of code.

Which Python package should be used to extract the text from the source documents?

Options:

A.

flask


B.

beautifulsoup


C.

unstructured


D.

numpy
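Among the listed packages, `unstructured` is the one aimed at parsing documents such as PDFs. A minimal sketch follows; the package is third-party, so its calls are shown commented out, and the filename is hypothetical.

```python
# Requires: pip install "unstructured[pdf]"  (third-party, not stdlib)
# from unstructured.partition.pdf import partition_pdf
# elements = partition_pdf(filename="policy_doc.pdf")  # hypothetical file

def join_texts(texts) -> str:
    """Stitch extracted element texts into one context string for the RAG index."""
    return "\n".join(t for t in texts if t)

# context = join_texts(el.text for el in elements)
```

The other options do not fit: flask is a web framework, beautifulsoup targets HTML/XML, and numpy is numerical computing.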


Question # 8:

A Generative AI Engineer is developing an LLM application that users can use to generate personalized birthday poems based on their names.

Which technique would be most effective in safeguarding the application, given the potential for malicious user inputs?

Options:

A.

Implement a safety filter that detects any harmful inputs and asks the LLM to respond that it is unable to assist


B.

Reduce the time that the users can interact with the LLM


C.

Ask the LLM to remind the user that the input is malicious but continue the conversation with the user


D.

Increase the amount of compute that powers the LLM to process input faster
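An input safety filter of the kind described in option A can be sketched as a pre-generation check. The patterns and refusal message below are illustrative only; real deployments typically rely on a moderation model or a guardrail service rather than a hand-written pattern list.

```python
import re

# Illustrative prompt-injection patterns -- not an exhaustive or production list.
HARMFUL_PATTERNS = [
    re.compile(r"ignore (all |your )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]

def is_harmful(user_input: str) -> bool:
    """Flag inputs matching any known-harmful pattern."""
    return any(p.search(user_input) for p in HARMFUL_PATTERNS)

def safe_generate(user_input: str, llm_call) -> str:
    """Refuse flagged inputs; otherwise generate the poem as usual."""
    if is_harmful(user_input):
        return "I'm unable to assist with that request."
    return llm_call(user_input)
```

Unlike options B and D, which change latency rather than safety, this stops a malicious input before it influences generation, and unlike option C it does not continue the conversation after detection.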


Question # 9:

A Generative AI Engineer is using an LLM to classify species of edible mushrooms based on text descriptions of certain features. The model is returning accurate responses in testing, and the engineer is confident they have the correct list of possible labels, but the output frequently contains additional reasoning when the engineer only wants the label returned with no additional text.

Which action should they take to elicit the desired behavior from this LLM?

Options:

A.

Use few-shot prompting to instruct the model on the expected output format


B.

Use zero-shot prompting to instruct the model on the expected output format


C.

Use zero-shot chain-of-thought prompting to prevent a verbose output format


D.

Use a system prompt to instruct the model to be succinct in its answer
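A zero-shot instruction that pins the output to a bare label can look like the sketch below; the label set and prompt wording are hypothetical.

```python
# Hypothetical label set -- in the scenario, the engineer already has the
# correct list of possible species.
LABELS = ["chanterelle", "porcini", "morel"]

def zero_shot_prompt(description: str) -> str:
    """Zero-shot instruction constraining the reply to exactly one label."""
    return (
        "Classify the mushroom described below. Respond with exactly one of "
        f"the following labels and nothing else: {', '.join(LABELS)}.\n\n"
        f"Description: {description}"
    )
```

No examples are supplied, so this is zero-shot: the format constraint lives entirely in the instruction.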


Question # 10:

A Generative AI Engineer is designing a RAG application for answering user questions on technical regulations as they learn a new sport.

What are the steps needed to build this RAG application and deploy it?

Options:

A.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> Evaluate model –> LLM generates a response –> Deploy it using Model Serving


B.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving


C.

Ingest documents from a source –> Index the documents and save to Vector Search –> Evaluate model –> Deploy it using Model Serving


D.

User submits queries against an LLM –> Ingest documents from a source –> Index the documents and save to Vector Search –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving
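The stages named in these options can be made concrete with in-memory stand-ins. Everything below is a toy illustration of the flow (ingest, index, retrieve on a user query, generate), with evaluation and serving noted as comments; a real build would use a document loader, Databricks Vector Search, an LLM endpoint, and Model Serving.

```python
def ingest():
    """1. Ingest documents from a source (here, hard-coded sport regulations)."""
    return ["Offside applies in the attacking half.", "Matches last 90 minutes."]

def index(docs):
    """2. Index the documents and save them to a search backend."""
    return {i: d for i, d in enumerate(docs)}

def retrieve(index_, query):
    """3-4. For a submitted user query, retrieve relevant documents."""
    words = query.lower().split()
    return [d for d in index_.values() if any(w in d.lower() for w in words)]

def generate(query, context):
    """5. Generate a response grounded in the retrieved context."""
    return f"Q: {query} | context: {' '.join(context)}"

docs = ingest()
idx = index(docs)
answer = generate("What is offside?", retrieve(idx, "offside rule"))
# 6. Evaluate the chain on a question set, then 7. deploy via Model Serving.
```

The ordering matters: indexing must precede retrieval, retrieval must precede generation, and evaluation of the complete chain comes before deployment.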

