Pass the Oracle Cloud Infrastructure 1z0-1127-25 Questions and Answers with CertsForce

Question #1:

Which is NOT a typical use case for LangSmith Evaluators?

Options:

A. Measuring coherence of generated text
B. Aligning code readability
C. Evaluating factual accuracy of outputs
D. Detecting bias or toxicity


Question #2:

What does in-context learning in Large Language Models involve?

Options:

A. Pretraining the model on a specific domain
B. Training the model using reinforcement learning
C. Conditioning the model with task-specific instructions or demonstrations
D. Adding more layers to the model
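
In-context learning works entirely at inference time: the task is conveyed through instructions or demonstrations placed in the prompt, and no model weights are updated. A minimal sketch (the prompt wording is illustrative):

```python
# In-context learning: no gradient updates; the demonstrations in the
# prompt alone condition the model to perform the task.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It broke after a week." -> Negative
Review: "Setup was effortless and fast." ->"""

# Sending this string to any text-generation endpoint should yield
# "Positive" as the continuation, with the model weights untouched.
print(few_shot_prompt)
```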


Question #3:

What does a cosine distance of 0 indicate about the relationship between two embeddings?

Options:

A. They are completely dissimilar
B. They are unrelated
C. They are similar in direction
D. They have the same magnitude
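
For reference, cosine distance is 1 minus cosine similarity, so a distance of 0 means the similarity is 1 and the vectors point in the same direction, regardless of magnitude. A minimal sketch with NumPy:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity."""
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - sim

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 4.0, 6.0])   # same direction, twice the magnitude

print(cosine_distance(v1, v2))   # ~0.0 -> similar in direction
print(cosine_distance(v1, -v1))  # 2.0  -> opposite directions
```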


Question #4:

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

Options:

A. The model's ability to generate imaginative and creative content
B. A technique used to enhance the model's performance on specific tasks
C. The process by which the model visualizes and describes images in detail
D. The phenomenon where the model generates factually incorrect information or unrelated content as if it were true


Question #5:

What is the purpose of frequency penalties in language model outputs?

Options:

A. To ensure that tokens that appear frequently are used more often
B. To penalize tokens that have already appeared, based on the number of times they have been used
C. To reward the tokens that have never appeared in the text
D. To randomly penalize some tokens to increase the diversity of the text
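
A frequency penalty typically subtracts from a token's logit in proportion to how many times that token has already been generated, making repetition less likely. A minimal sketch of that mechanism (a generic illustration, not OCI's exact implementation):

```python
import numpy as np

def apply_frequency_penalty(logits, generated_ids, penalty=0.5):
    """Lower each token's logit by penalty * (times already generated)."""
    penalized = np.asarray(logits, dtype=float).copy()
    ids, counts = np.unique(generated_ids, return_counts=True)
    penalized[ids] -= penalty * counts
    return penalized

vocab_logits = [2.0, 1.5, 0.5, 0.1]
history = [0, 0, 1]                       # token 0 used twice, token 1 once
print(apply_frequency_penalty(vocab_logits, history))
# [1.  1.  0.5 0.1] -> repeated tokens become less likely to be sampled again
```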


Question #6:

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A. PEFT involves only a few or new parameters and uses labeled, task-specific data.
B. PEFT modifies all parameters and is typically used when no training data exists.
C. PEFT does not modify any parameters but uses soft prompting with unlabeled data.
D. PEFT modifies all parameters and uses unlabeled, task-agnostic data.
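
PEFT methods such as LoRA freeze the pretrained weights and train only a small number of new parameters on labeled, task-specific data, whereas classic fine-tuning updates all weights. A minimal PyTorch sketch of the idea (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

# Stand-in for one pretrained weight matrix; in classic fine-tuning ALL
# such weights would be updated. Here they are frozen.
base = nn.Linear(768, 768)
for p in base.parameters():
    p.requires_grad = False

class LoRAAdapter(nn.Module):
    """Adds a trainable low-rank update (B @ A) on top of the frozen layer."""
    def __init__(self, base, rank=8):
        super().__init__()
        self.base = base
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LoRAAdapter(base)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,}")  # ~2% of the parameters
```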


Question #7:

What is the purpose of memory in the LangChain framework?

Options:

A. To retrieve user input and provide real-time output only
B. To store various types of data and provide algorithms for summarizing past interactions
C. To perform complex calculations unrelated to user interaction
D. To act as a static database for storing permanent records
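
LangChain's memory classes store past inputs and outputs so that later prompts can be conditioned on, or summaries built from, earlier turns. A minimal sketch using the classic ConversationBufferMemory API (module paths and class names vary across LangChain versions):

```python
# Classic LangChain memory API (this follows the pre-1.0 layout).
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# Store one interaction: memory keeps past inputs/outputs so that later
# prompts can include (or summarize) the conversation so far.
memory.save_context({"input": "Hi, I'm Alice."},
                    {"output": "Hello Alice! How can I help?"})

# Retrieve the stored history for the next model call.
print(memory.load_memory_variables({}))
# {'history': "Human: Hi, I'm Alice.\nAI: Hello Alice! How can I help?"}
```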


Question #8:

In which scenario is soft prompting especially appropriate compared to other training styles?

Options:

A. When there is a significant amount of labeled, task-specific data available.
B. When the model needs to be adapted to perform well in a different domain it was not originally trained on.
C. When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training.
D. When the model requires continued pre-training on unlabeled data.
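
Soft prompting prepends a small set of learnable embedding vectors to the input while the model itself stays frozen, adding trainable parameters without touching the model's weights. A minimal PyTorch sketch (dimensions and token ids are illustrative):

```python
import torch
import torch.nn as nn

embed_dim, prompt_len = 768, 10

# Frozen stand-in for the LLM's token embedding table.
token_embedding = nn.Embedding(50_000, embed_dim)
for p in token_embedding.parameters():
    p.requires_grad = False

# The soft prompt is the ONLY learnable tensor; model weights are untouched.
soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

input_ids = torch.tensor([[101, 2023, 2003, 102]])   # hypothetical token ids
tok_emb = token_embedding(input_ids)                  # (1, 4, 768)
prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
model_input = torch.cat([prompt, tok_emb], dim=1)     # (1, 14, 768)
print(model_input.shape)
```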


Question #9:

What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?

Options:

A. Controls the randomness of the model's output, affecting its creativity
B. Specifies a string that tells the model to stop generating more content
C. Assigns a penalty to tokens that have already appeared in the preceding text
D. Determines the maximum number of tokens the model can generate per response
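
Temperature divides the logits before the softmax: low values concentrate probability on the top token (near-deterministic output), while high values flatten the distribution (more random, more "creative" output). A minimal sketch:

```python
import numpy as np

def temperature_softmax(logits, temperature):
    """Scale logits by 1/temperature before softmax: low T -> near-greedy,
    high T -> more uniform (more random) sampling."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.2, 1.0, 2.0):
    print(f"T={t}: {np.round(temperature_softmax(logits, t), 2)}")
# T=0.2 puts ~99% of the mass on the top token (deterministic);
# T=2.0 spreads the mass more evenly across tokens (diverse output).

rng = np.random.default_rng(0)
token = rng.choice(len(logits), p=temperature_softmax(logits, 1.0))
```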


Question #10:

What is the characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

Options:

A. It updates all the weights of the model uniformly.
B. It selectively updates only a fraction of weights to reduce the number of parameters.
C. It selectively updates only a fraction of weights to reduce computational load and avoid overfitting.
D. It increases the training time as compared to Vanilla fine-tuning.
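
T-Few builds on the (IA)^3 method: the pretrained weights stay frozen and only small learned rescaling vectors are trained, a tiny fraction of the total parameters, which cuts computational load and helps avoid overfitting. A minimal sketch of the rescaling idea (sizes are illustrative):

```python
import torch
import torch.nn as nn

# Frozen pretrained layer: none of its weights are updated.
frozen = nn.Linear(768, 768)
for p in frozen.parameters():
    p.requires_grad = False

# (IA)^3-style rescaling vector: the only trainable parameters.
scale = nn.Parameter(torch.ones(768))

def ia3_forward(x):
    return frozen(x) * scale          # elementwise rescaling of activations

x = torch.randn(1, 768)
print(ia3_forward(x).shape)           # torch.Size([1, 768])
print(scale.numel(), "trainable vs",
      sum(p.numel() for p in frozen.parameters()), "frozen")
# 768 trainable vs 590592 frozen -> a tiny fraction of the weights is updated
```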

