
Pass the Databricks Generative AI Engineer Databricks-Generative-AI-Engineer-Associate Questions and answers with CertsForce

Viewing page 2 out of 3 pages
Viewing questions 11-20
Question # 11:

A Generative AI Engineer is designing a RAG application for answering user questions on technical regulations as they learn a new sport.

What are the steps needed to build this RAG application and deploy it?

Options:

A.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> Evaluate model –> LLM generates a response –> Deploy it using Model Serving


B.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving


C.

Ingest documents from a source –> Index the documents and save to Vector Search –> Evaluate model –> Deploy it using Model Serving


D.

User submits queries against an LLM –> Ingest documents from a source –> Index the documents and save to Vector Search –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving
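The build-and-deploy ordering the question is testing (ingest, index, query, retrieve, generate, evaluate, then deploy) can be sketched as a minimal pipeline. All function names below are hypothetical stand-ins, not Databricks APIs:

```python
# Minimal sketch of a RAG build-and-deploy flow.
# Every function here is a stub; real systems would use an embedding
# model, a vector index, and an LLM endpoint.

def ingest_documents(source):
    """1. Load raw documents from a source (stubbed)."""
    return [f"doc from {source}"]

def index_documents(docs):
    """2. Embed and save documents to a vector index (stubbed)."""
    return {i: d for i, d in enumerate(docs)}

def retrieve(index, query):
    """3. Return documents relevant to the query (stubbed: all docs)."""
    return list(index.values())

def generate(query, context):
    """4. LLM response grounded in the retrieved context (stubbed)."""
    return f"Answer to '{query}' using {len(context)} document(s)"

def evaluate(response):
    """5. Score the response before deployment (stubbed)."""
    return len(response) > 0

def rag_pipeline(source, query):
    docs = ingest_documents(source)
    index = index_documents(docs)
    context = retrieve(index, query)
    response = generate(query, context)
    assert evaluate(response)  # evaluate before deploying
    return response            # 6. deploy via Model Serving

print(rag_pipeline("regulations.pdf", "What is offside?"))
```

Note that evaluation happens only after a full response exists but before deployment, which is what separates the correct ordering from the distractors.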


Question # 12:

A Generative AI Engineer has a provisioned throughput model serving endpoint as part of a RAG application and would like to monitor the serving endpoint’s incoming requests and outgoing responses. The current approach is to include a micro-service in between the endpoint and the user interface to write logs to a remote server.

Which Databricks feature should they use instead to perform the same task?

Options:

A.

Vector Search


B.

Lakeview


C.

DBSQL


D.

Inference Tables
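Inference tables replace the custom logging micro-service: the serving endpoint itself captures each request and response to a Delta table. A hedged sketch of the endpoint configuration is below; the field names follow the Databricks serving API's auto-capture section, but the catalog and schema names are placeholders and the exact schema should be checked against current documentation:

```python
import json

# Hedged sketch: enabling inference tables on a model serving endpoint
# via the auto-capture section of the endpoint configuration.
# "main" / "rag_logs" and the entity name are placeholders.
endpoint_config = {
    "served_entities": [
        {"entity_name": "my_rag_model", "entity_version": "1"}
    ],
    "auto_capture_config": {
        "catalog_name": "main",       # Unity Catalog catalog (placeholder)
        "schema_name": "rag_logs",    # schema that will hold the inference table
        "enabled": True,              # capture requests and responses
    },
}

print(json.dumps(endpoint_config, indent=2))
```

With this enabled, incoming requests and outgoing responses land in a governed table that can be queried directly, with no intermediary service to maintain.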


Question # 13:

An AI developer team wants to fine-tune an open-weight model for exceptional performance on a code generation use case. They are trying to choose the best model to start with, want to minimize model hosting costs, and are using Hugging Face model cards and spaces to explore models.

Which TWO model attributes and metrics should the team focus on to make their selection?

Options:

A.

Big Code Models Leaderboard


B.

Number of model parameters


C.

MTEB Leaderboard


D.

Chatbot Arena Leaderboard


E.

Number of model downloads last month


Question # 14:

A Generative AI Engineer would like an LLM to generate formatted JSON from emails. This will require parsing and extracting the following information: order ID, date, and sender email. Here’s a sample email:

(Sample email image omitted.)

They will need to write a prompt that will extract the relevant information in JSON format with the highest level of output accuracy.

Which prompt will do that?

Options:

A.

You will receive customer emails and need to extract date, sender email, and order ID. You should return the date, sender email, and order ID information in JSON format.


B.

You will receive customer emails and need to extract date, sender email, and order ID. Return the extracted information in JSON format.

Here’s an example: {“date”: “April 16, 2024”, “sender_email”: “sarah.lee925@gmail.com”, “order_id”: “RE987D”}


C.

You will receive customer emails and need to extract date, sender email, and order ID. Return the extracted information in a human-readable format.


D.

You will receive customer emails and need to extract date, sender email, and order ID. Return the extracted information in JSON format.
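The option that includes a worked example is one-shot prompting: showing the model the exact JSON shape expected typically yields the most consistent output. A minimal sketch of assembling such a prompt, reusing the example record from the option text (the helper name and email placeholder are hypothetical):

```python
import json

# One-shot prompt construction: the instruction plus a single example
# record that fixes the output schema (keys, casing, formats).
example = {
    "date": "April 16, 2024",
    "sender_email": "sarah.lee925@gmail.com",
    "order_id": "RE987D",
}

def build_prompt(email_text):
    return (
        "You will receive customer emails and need to extract date, "
        "sender email, and order ID. Return the extracted information "
        "in JSON format.\n"
        f"Here's an example: {json.dumps(example)}\n\n"
        f"Email:\n{email_text}"
    )

print(build_prompt("Hi, I'd like to check on my order..."))
```

The example pins down key names like `order_id` and the date format, ambiguities that an instruction-only prompt leaves to the model.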


Question # 15:

A Generative AI Engineer is building an LLM-based application that has an important transcription (speech-to-text) task. Speed is essential for the success of the application.

Which open Generative AI models should be used?

Options:

A.

Llama-2-70b-chat-hf


B.

MPT-30B-Instruct


C.

DBRX


D.

whisper-large-v3 (1.6B)


Question # 16:

A Generative AI Engineer is designing a chatbot for a gaming company that aims to engage users on its platform while its users play online video games.

Which metric would help them increase user engagement and retention for their platform?

Options:

A.

Randomness


B.

Diversity of responses


C.

Lack of relevance


D.

Repetition of responses


Question # 17:

A Generative AI Engineer is designing an LLM-powered live sports commentary platform. The platform provides real-time updates and LLM-generated analyses for any users who would like to have live summaries, rather than reading a series of potentially outdated news articles.

Which tool below will give the platform access to real-time data for generating game analyses based on the latest game scores?

Options:

A.

DatabricksIQ


B.

Foundation Model APIs


C.

Feature Serving


D.

AutoML


Question # 18:

A Generative AI Engineer developed an LLM application using the provisioned throughput Foundation Model API. Now that the application is ready to be deployed, they realize their volume of requests is not high enough to warrant creating their own provisioned throughput endpoint. They want to choose the strategy that ensures the best cost-effectiveness for their application.

What strategy should the Generative AI Engineer use?

Options:

A.

Switch to using External Models instead


B.

Deploy the model using pay-per-token throughput as it comes with cost guarantees


C.

Change to a model with fewer parameters in order to reduce hardware constraint issues


D.

Throttle the incoming batch of requests manually to avoid rate limiting issues


Question # 19:

A Generative AI Engineer is deciding between using LSH (Locality Sensitive Hashing) and HNSW (Hierarchical Navigable Small World) for indexing their vector database. Their top priority is semantic accuracy.

Which approach should the Generative Al Engineer use to evaluate these two techniques?

Options:

A.

Compare the cosine similarities of the embeddings of returned results against those of a representative sample of test inputs


B.

Compare the Bilingual Evaluation Understudy (BLEU) scores of returned results for a representative sample of test inputs


C.

Compare the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores of returned results for a representative sample of test inputs


D.

Compare the Levenshtein distances of returned results against a representative sample of test inputs
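The cosine-similarity evaluation can be sketched in a few lines: embed a representative set of test queries, embed each index's top returned results, and compare mean similarity. The toy vectors and the two result sets below are illustrative only, not real LSH/HNSW output:

```python
import math

# Evaluate two vector indexes by cosine similarity between test-query
# embeddings and the embeddings of each index's returned results.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def mean_retrieval_similarity(query_embs, result_embs):
    """Average cosine similarity between each query and its top result."""
    sims = [cosine_similarity(q, r) for q, r in zip(query_embs, result_embs)]
    return sum(sims) / len(sims)

# Toy 2-D embeddings; in practice these come from the embedding model.
queries = [[1.0, 0.0], [0.0, 1.0]]
hnsw_results = [[0.9, 0.1], [0.1, 0.9]]   # hypothetical HNSW top hits
lsh_results = [[0.6, 0.4], [0.5, 0.5]]    # hypothetical LSH top hits

print(mean_retrieval_similarity(queries, hnsw_results))
print(mean_retrieval_similarity(queries, lsh_results))
```

Whichever technique yields the higher mean similarity on the test sample is preserving semantics better; BLEU, ROUGE, and Levenshtein distance measure surface text overlap, not embedding-space closeness.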


Question # 20:

A Generative AI Engineer is developing a RAG application and would like to experiment with different embedding models to improve the application performance.

Which strategy for picking an embedding model should they choose?

Options:

A.

Pick an embedding model trained on related domain knowledge


B.

Pick the most recent and most performant open LLM released at the time


C.

Pick the embedding model ranked highest on the Massive Text Embedding Benchmark (MTEB) leaderboard hosted by Hugging Face


D.

Pick an embedding model with multilingual support to support potential multilingual user questions

