Pass the Oracle Cloud Infrastructure 1z0-1127-25 Questions and Answers with CertsForce

Viewing page 3 out of 3 pages
Viewing questions 21-30
Question # 21:

How do Dot Product and Cosine Distance differ in their application to comparing text embeddings in natural language processing?

Options:

A.

Dot Product assesses the overall similarity in content, whereas Cosine Distance measures topical relevance.


B.

Dot Product is used for semantic analysis, whereas Cosine Distance is used for syntactic comparisons.


C.

Dot Product measures the magnitude and direction of vectors, whereas Cosine Distance focuses on the orientation regardless of magnitude.


D.

Dot Product calculates the literal overlap of words, whereas Cosine Distance evaluates the stylistic similarity.


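The distinction in option C can be checked directly. Below is a minimal sketch in plain Python (the vectors are made-up illustrations, not real embeddings): two vectors with the same orientation but different magnitudes produce different dot products, yet a cosine distance of zero.

```python
import math

def dot_product(a, b):
    # Sensitive to both direction and magnitude of the vectors.
    return sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    # 1 - cosine similarity: depends only on orientation, not magnitude.
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot_product(a, b) / (norm_a * norm_b)

# Same direction, but b has twice the magnitude of a:
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]

print(dot_product(a, b))      # 28.0 - grows with magnitude
print(cosine_distance(a, b))  # ~0.0 - identical orientation
```

Because cosine distance normalizes away vector length, it is often preferred when embedding magnitudes vary (e.g., with documents of different lengths).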
Question # 22:

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

Options:

A.

When the LLM already understands the topics necessary for text generation


B.

When the LLM does not perform well on a task and the data for prompt engineering is too large


C.

When the LLM requires access to the latest data for generating outputs


D.

When you want to optimize the model without any instructions


Question # 23:

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 days?

Options:

A.

480 unit hours


B.

240 unit hours


C.

744 unit hours


D.

20 unit hours


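The arithmetic behind this question is unit hours = (units per cluster) x (hours active). Assuming a fine-tuning dedicated AI cluster uses 2 units (an assumption about OCI cluster sizing, not stated in the question itself), 10 days of activity works out as follows:

```python
days = 10
hours_per_day = 24
units_per_cluster = 2  # assumption: OCI fine-tuning clusters require 2 units

cluster_hours = days * hours_per_day          # 240 hours of activity
unit_hours = cluster_hours * units_per_cluster

print(unit_hours)  # 480
```

Under that assumption the total is 480 unit hours; with a single-unit cluster the same 10 days would instead yield 240.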
Question # 24:

How does a presence penalty function in language model generation when using OCI Generative AI service?

Options:

A.

It penalizes all tokens equally, regardless of how often they have appeared.


B.

It only penalizes tokens that have never appeared in the text before.


C.

It applies a penalty only if the token has appeared more than twice.


D.

It penalizes a token each time it appears after the first occurrence.


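As a rough sketch of the behavior described in option D: a presence penalty subtracts a flat amount from any token that has already appeared in the generated text, regardless of how many times it appeared. The dictionary-of-logits representation below is a simplification for illustration, not the OCI API.

```python
def apply_presence_penalty(logits, generated_tokens, penalty=1.0):
    # Flat penalty: every token seen at least once is penalized by the
    # same amount, no matter how often it occurred (unlike a frequency
    # penalty, which scales with the count).
    seen = set(generated_tokens)
    return {tok: (logit - penalty if tok in seen else logit)
            for tok, logit in logits.items()}

logits = {"cat": 2.0, "dog": 1.5, "fish": 1.0}
adjusted = apply_presence_penalty(logits, ["cat", "cat", "dog"])
# "cat" is penalized once despite appearing twice; "fish" is untouched.
print(adjusted)
```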
Question # 25:

What does the RAG Sequence model do in the context of generating a response?

Options:

A.

It retrieves a single relevant document for the entire input query and generates a response based on that alone.


B.

For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response.


C.

It retrieves relevant documents only for the initial part of the query and ignores the rest.


D.

It modifies the input query before retrieving relevant documents to ensure a diverse response.


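A schematic of the behavior described in option B, with hypothetical `retriever` and `generator` stand-ins (toy functions for illustration, not the OCI Generative AI API): one retrieval is performed for the entire query, and the retrieved documents are passed to the generator together.

```python
def rag_sequence_answer(query, retriever, generator, k=3):
    docs = retriever(query, k)      # one retrieval for the whole input query
    context = "\n\n".join(docs)     # the documents are considered together
    return generator(query, context)

# Toy stand-ins so the sketch runs end to end:
corpus = [
    "OCI offers a Generative AI service.",
    "RAG grounds model answers in retrieved documents.",
]
retriever = lambda q, k: corpus[:k]
generator = lambda q, ctx: f"Answer to {q!r}, grounded in {len(ctx.split())} context words."

print(rag_sequence_answer("What is RAG?", retriever, generator, k=2))
```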
Question # 26:

Which technique involves prompting the Large Language Model (LLM) to emit intermediate reasoning steps as part of its response?

Options:

A.

Step-Back Prompting


B.

Chain-of-Thought


C.

Least-to-Most Prompting


D.

In-Context Learning


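A toy Chain-of-Thought prompt illustrating the idea: the example answer walks through intermediate reasoning steps, prompting the model to do the same. The wording is a common pattern, not OCI-specific.

```python
# Few-shot Chain-of-Thought prompt: the worked example shows explicit
# intermediate reasoning, encouraging the model to emit its own steps.
cot_prompt = (
    "Q: A cluster runs for 10 days. How many hours is that?\n"
    "A: Let's think step by step. There are 24 hours in a day. "
    "10 days x 24 hours/day = 240 hours. The answer is 240.\n\n"
    "Q: A job runs for 3 weeks. How many days is that?\n"
    "A: Let's think step by step."
)
print(cot_prompt)
```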