
Databricks Generative AI Engineer Associate (Databricks-Generative-AI-Engineer-Associate) practice questions and answers from CertsForce

Page 1 of 3 · Questions 1-10
Question # 1:

A Generative AI Engineer is tasked with deploying an application that takes advantage of a custom MLflow Pyfunc model to return some interim results.

How should they configure the endpoint to pass the secrets and credentials?

Options:

A.

Use spark.conf.set()


B.

Pass variables using the Databricks Feature Store API


C.

Add credentials using environment variables


D.

Pass the secrets in plain text


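The environment-variable approach (option C) can be sketched as follows. This is a minimal illustration of a serving-endpoint configuration payload; the scope name `llm_scope`, the key name `openai_key`, and the model names are hypothetical, and the `{{secrets/<scope>/<key>}}` reference syntax is resolved by Databricks at deployment time rather than stored in plain text.

```python
# Sketch: configure a model serving endpoint so the custom Pyfunc model
# reads its credentials from environment variables backed by Databricks
# secrets. All names below are placeholders.

def build_endpoint_config(model_name: str, model_version: str) -> dict:
    """Build a serving-endpoint config whose served model receives
    credentials via environment variables resolved from the secret store."""
    return {
        "served_entities": [
            {
                "entity_name": model_name,
                "entity_version": model_version,
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
                "environment_vars": {
                    # Resolved by Databricks at serving time, never plain text
                    "OPENAI_API_KEY": "{{secrets/llm_scope/openai_key}}",
                },
            }
        ]
    }

config = build_endpoint_config("catalog.schema.my_pyfunc_model", "1")
```

Inside the Pyfunc model, the credential is then read with `os.environ["OPENAI_API_KEY"]`, so the secret never appears in code or logs.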
Question # 2:

A Generative AI Engineer is tasked with improving RAG quality by addressing its inflammatory outputs.

Which action would be most effective in mitigating the problem of offensive text outputs?

Options:

A.

Increase the frequency of upstream data updates


B.

Inform the user of the expected RAG behavior


C.

Restrict access to the data sources to a limited number of users


D.

Properly curate the upstream data, including a manual review, before it is fed into the RAG system


Question # 3:

A Generative AI Engineer is creating an agent-based LLM system for their favorite monster truck team. The system can answer text-based questions about the monster truck team, look up event dates via an API call, or query tables on the team’s latest standings.

How could the Generative AI Engineer best design these capabilities into their system?

Options:

A.

Ingest PDF documents about the monster truck team into a vector store and query it in a RAG architecture.


B.

Write a system prompt for the agent listing available tools and bundle it into an agent system that runs a number of calls to solve a query.


C.

Instruct the LLM to respond with “RAG”, “API”, or “TABLE” depending on the query, then use text parsing and conditional statements to resolve the query.


D.

Build a system prompt containing all possible event dates and table information. Use a RAG architecture to look up generic text questions and otherwise leverage the information in the system prompt.


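The tool-listing agent design (option B) can be sketched as below. The LLM call is stubbed with keyword routing so the dispatch loop is runnable; every tool name and return value is hypothetical, and a real system would call the events API, query the standings table, and hit a RAG retriever instead of returning canned strings.

```python
# Minimal sketch of an agent whose system prompt lists its tools and whose
# loop dispatches the model's chosen tool. Tool names/returns are placeholders.

TOOLS = {
    "lookup_event_date": lambda q: "2024-07-04",         # would call the events API
    "query_standings": lambda q: "1st place",            # would query the Delta table
    "answer_text_question": lambda q: "Founded in 2001", # would hit the RAG retriever
}

SYSTEM_PROMPT = (
    "You are a monster truck team assistant. Available tools: "
    + ", ".join(TOOLS)
    + ". Reply with the name of the tool to use."
)

def fake_llm(system_prompt: str, question: str) -> str:
    """Stand-in for a chat-model call; routes on simple keywords."""
    q = question.lower()
    if "when" in q:
        return "lookup_event_date"
    if "standing" in q:
        return "query_standings"
    return "answer_text_question"

def run_agent(question: str) -> str:
    tool_name = fake_llm(SYSTEM_PROMPT, question)
    return TOOLS[tool_name](question)
```

A real agent framework would also feed each tool result back to the model so it can decide whether further calls are needed before answering.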
Question # 4:

A Generative AI Engineer has successfully ingested unstructured documents and chunked them by document section. They would like to store the chunks in a Vector Search index. The current dataframe has two columns: (i) the original document file name and (ii) an array of text chunks for each document.

What is the most performant way to store this dataframe?

Options:

A.

Split the data into train and test set, create a unique identifier for each document, then save to a Delta table


B.

Flatten the dataframe to one chunk per row, create a unique identifier for each row, and save to a Delta table


C.

First create a unique identifier for each document, then save to a Delta table


D.

Store each chunk as an independent JSON file in Unity Catalog Volume. For each JSON file, the key is the document section name and the value is the array of text chunks for that section


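Flattening to one chunk per row (option B) might look like the following, using pandas for illustration; on Databricks the same shape would typically be produced with `pyspark.sql.functions.explode` plus a generated id, then written to a Delta table. The column names are hypothetical.

```python
import pandas as pd

# Starting shape from the question: one row per document, chunks as an array.
df = pd.DataFrame({
    "file_name": ["doc_a.pdf", "doc_b.pdf"],
    "chunks": [["intro text", "methods text"], ["summary text"]],
})

# Flatten to one chunk per row and give every row a unique id, which a
# Vector Search index requires as its primary key.
flat = df.explode("chunks").rename(columns={"chunks": "chunk_text"})
flat = flat.reset_index(drop=True)
flat["chunk_id"] = flat.index

# `flat` is now ready to be saved as a Delta table and synced into an index.
```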
Question # 5:

A Generative AI Engineer is testing a simple prompt template in LangChain using the code below, but is getting an error:

Python

from langchain.chains import LLMChain
from langchain_community.llms import OpenAI
from langchain_core.prompts import PromptTemplate

prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
# ... (error-prone section)

Assuming the API key was properly defined, what change does the Generative AI Engineer need to make to fix their chain?

Options:

A.

(Incorrect structure)


B.

(Incorrect structure)


C.

prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.generate([{"adjective": "funny"}])


D.

(Incorrect structure)


Question # 6:

A Generative AI Engineer is helping a cinema extend its website's chatbot to respond to questions about specific showtimes for movies currently playing at their local theater. The agent already receives the user's location from location services, and a Delta table is continually updated with the latest showtime information by location. They want to implement this new capability in their RAG application.

Which option will do this with the least effort and in the most performant way?

Options:

A.

Create a Feature Serving Endpoint from a FeatureSpec that references an online store synced from the Delta table. Query the Feature Serving Endpoint as part of the agent logic / tool implementation.


B.

Query the Delta table directly via a SQL query constructed from the user's input using a text-to-SQL LLM in the agent logic / tool implementation.


C.

Write the Delta table contents to a text column, then embed those texts using an embedding model and store them in the vector index. Look up the information based on the embedding as part of the agent logic / tool implementation.


D.

Set up a task in Databricks Workflows to write the information in the Delta table periodically to an external database such as MySQL and query the information from there as part of the agent logic / tool implementation.


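The query step of the Feature Serving approach (option A) could be sketched as below. Only the request payload is built here; the lookup key `location_id` and the endpoint name are hypothetical, and the payload follows the standard serving-endpoint invocation shape.

```python
import json

# Sketch: the agent tool looks up current showtimes by calling a Feature
# Serving endpoint (backed by an online store synced from the showtimes
# Delta table), keyed on the user's location. Names are placeholders.

def build_showtimes_request(location_id: str) -> dict:
    """Payload for a serving-endpoint invocation keyed on location."""
    return {"dataframe_records": [{"location_id": location_id}]}

payload = build_showtimes_request("theater_042")
body = json.dumps(payload)
# In the agent tool, `body` would be POSTed to
# /serving-endpoints/<feature-endpoint>/invocations with a bearer token.
```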
Question # 7:

A Generative AI Engineer is using the code below to test setting up a vector store:

(Code screenshot not reproduced here.)

Assuming they intend to use Databricks managed embeddings with the default embedding model, what should be the next logical function call?

Options:

A.

vsc.get_index()


B.

vsc.create_delta_sync_index()


C.

vsc.create_direct_access_index()


D.

vsc.similarity_search()


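A hypothetical set of arguments for the `create_delta_sync_index()` call (option B) is sketched below. All names are placeholders; the point is that with Databricks-managed embeddings the index is configured with a source text column and an embedding model endpoint, rather than with precomputed vectors.

```python
# Sketch of the arguments a delta-sync index with managed embeddings
# would take. Catalog, schema, endpoint, and column names are placeholders.
index_kwargs = {
    "endpoint_name": "vs_endpoint",
    "index_name": "catalog.schema.docs_index",
    "source_table_name": "catalog.schema.doc_chunks",
    "pipeline_type": "TRIGGERED",
    "primary_key": "chunk_id",
    # Managed embeddings: point at the text column and an embedding endpoint.
    "embedding_source_column": "chunk_text",
    "embedding_model_endpoint_name": "databricks-bge-large-en",
}
# index = vsc.create_delta_sync_index(**index_kwargs)  # requires a workspace
```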
Question # 8:

A Generative AI Engineer just deployed an LLM application at a digital marketing company that assists with answering customer service inquiries.

Which metric should they monitor for their customer service LLM application in production?

Options:

A.

Number of customer inquiries processed per unit of time


B.

Energy usage per query


C.

Final perplexity scores for the training of the model


D.

HuggingFace Leaderboard values for the base LLM


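Throughput (option A) can be computed from request timestamps, as in this toy sketch; in production the numbers would come from the endpoint's metrics or inference tables, and the timestamps here are synthetic.

```python
from datetime import datetime, timedelta

# Synthetic request log: one inquiry every 10 seconds.
requests = [
    datetime(2024, 1, 1, 12, 0, 0) + timedelta(seconds=10 * i)
    for i in range(30)
]

# Inquiries processed per minute over the observed window.
window_seconds = (requests[-1] - requests[0]).total_seconds()
throughput_per_min = len(requests) / (window_seconds / 60)
```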
Question # 9:

A small and cost-conscious startup in the cancer research field wants to build a RAG application using Foundation Model APIs.

Which strategy would allow the startup to build a good-quality RAG application while being cost-conscious and able to cater to customer needs?

Options:

A.

Limit the number of relevant documents available for the RAG application to retrieve from


B.

Pick a smaller LLM that is domain-specific


C.

Limit the number of queries a customer can send per day


D.

Use the largest LLM possible because that gives the best performance for any general queries


Question # 10:

A Generative AI Engineer has been asked to design an LLM-based application that accomplishes the following business objective: answer employee HR questions using HR PDF documentation.

Which set of high level tasks should the Generative AI Engineer's system perform?

Options:

A.

Calculate averaged embeddings for each HR document, compare embeddings to user query to find the best document. Pass the best document with the user query into an LLM with a large context window to generate a response to the employee.


B.

Use an LLM to summarize HR documentation. Provide summaries of documentation and user query into an LLM with a large context window to generate a response to the user.


C.

Create an interaction matrix of historical employee questions and HR documentation. Use ALS to factorize the matrix and create embeddings. Calculate the embeddings of new queries and use them to find the best HR documentation. Use an LLM to generate a response to the employee question based upon the documentation retrieved.


D.

Split HR documentation into chunks and embed into a vector store. Use the employee question to retrieve best matched chunks of documentation, and use the LLM to generate a response to the employee based upon the documentation retrieved.


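The chunk-embed-retrieve-generate flow of option D can be illustrated with a toy, self-contained sketch; word overlap stands in for a real embedding model and the "LLM" is a stub, so only the pipeline shape is meaningful.

```python
# Toy RAG pipeline: embed chunks, retrieve the best match for a question,
# and pass it to a (stubbed) LLM. Documents and answers are placeholders.

def embed(text: str) -> set:
    """Stand-in for an embedding model: a bag of lowercase words."""
    return set(text.lower().split())

def retrieve(question: str, chunks: list) -> str:
    """Return the chunk with the greatest word overlap with the question."""
    q = embed(question)
    return max(chunks, key=lambda c: len(q & embed(c)))

def generate(question: str, context: str) -> str:
    """Stand-in for the LLM call that answers from the retrieved context."""
    return f"Based on our documentation: {context}"

hr_chunks = [
    "Employees accrue 20 vacation days per year.",
    "Health insurance enrollment opens each November.",
]

question = "How many vacation days do I get per year?"
answer = generate(question, retrieve(question, hr_chunks))
```

In the real system the chunks would live in a vector store, retrieval would be a similarity search over embeddings, and generation would be an LLM call with the retrieved chunks in the prompt.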