
Pass the Microsoft Certified: Fabric Data Engineer Associate (DP-700) questions and answers with CertsForce

Viewing page 2 of 3: questions 11-20
Question # 11:

You need to populate the MAR1 data in the bronze layer.

Which two types of activities should you include in the pipeline? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Options:

A.

ForEach


B.

Copy data


C.

WebHook


D.

Stored procedure


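For context, the usual way to land multiple source objects in a bronze layer is a ForEach activity that iterates the source list and runs a Copy data activity for each item. A minimal annotated sketch of the relevant pipeline JSON (names, parameter wiring, and connector type strings are illustrative assumptions; // comments are annotations only and are not valid in strict JSON):

{
  "name": "IngestMAR1Bronze",
  "activities": [
    {
      "name": "ForEachSourceFile",
      "type": "ForEach",
      "typeProperties": {
        // The list of objects to ingest, e.g. from a parameter or a Lookup.
        "items": {
          "value": "@pipeline().parameters.sourceFiles",
          "type": "Expression"
        },
        "activities": [
          {
            // One Copy data activity runs per item and lands it in bronze.
            "name": "CopyToBronze",
            "type": "Copy",
            "typeProperties": {
              "source": { "type": "DelimitedTextSource" },
              "sink": { "type": "LakehouseTableSink" }
            }
          }
        ]
      }
    }
  ]
}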
Question # 12:

You need to ensure that the data engineers are notified if any step in populating the lakehouses fails. The solution must meet the technical requirements and minimize development effort.

What should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

[Exhibit: answer area]


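For context, one low-development-effort pattern for failure notifications inside a pipeline is an Office 365 Outlook (or Teams) activity that depends on the load step with a Failed dependency condition, so it runs only when that step fails. A hedged sketch (the activity names and the Office365Outlook type string are assumptions, // comments are annotations only, and the graded answer area may expect a different mechanism):

{
  "name": "NotifyDataEngineers",
  "type": "Office365Outlook",
  "dependsOn": [
    {
      // Runs only when the lakehouse load activity fails.
      "activity": "LoadLakehouses",
      "dependencyConditions": [ "Failed" ]
    }
  ]
}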
Question # 13:

You need to implement the solution for the book reviews.

What should you do?

Options:

A.

Create a Dataflow Gen2 dataflow.


B.

Create a shortcut.


C.

Enable external data sharing.


D.

Create a data pipeline.


Question # 14:

You have an Azure subscription that contains a blob storage account named sa1. The sa1 account contains two files named File1.csv and File2.csv.

You have a Fabric tenant that contains the items shown in the following table.

[Exhibit: table of Fabric items]

You need to configure Pipeline1 to perform the following actions:

• At 2 PM each day, process File1.csv and load the file into flh1.

• At 5 PM each day, process File2.csv and load the file into flh1.

The solution must minimize development effort. What should you use?

Options:

A.

a job definition


B.

a data pipeline schedule


C.

a data pipeline trigger


D.

an activator


Question # 15:

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a KQL database that contains two tables named Stream and Reference. Stream contains streaming data in the following format.

[Exhibit: Stream table format]

Reference contains reference data in the following format.

[Exhibit: Reference table format]

Both tables contain millions of rows.

You have the following KQL queryset.

[Exhibit: KQL queryset]

You need to reduce how long it takes to run the KQL queryset.

Solution: You change project to extend.

Does this meet the goal?

Options:

A.

Yes


B.

No


Question # 16:

You have a Fabric workspace that contains a warehouse named DW1. DW1 is loaded by using a notebook named Notebook1.

You need to identify which version of Delta was used when Notebook1 was executed.

What should you use?

Options:

A.

Real-Time hub


B.

OneLake data hub


C.

the Admin monitoring workspace


D.

Fabric Monitor


E.

the Microsoft Fabric Capacity Metrics app


Question # 17:

You have a Fabric workspace that contains a lakehouse named Lakehouse1.

In an external data source, you have data files that are 500 GB each. A new file is added every day.

You need to ingest the data into Lakehouse1 without applying any transformations. The solution must meet the following requirements:

• Trigger the process when a new file is added.

• Provide the highest throughput.

Which type of item should you use to ingest the data?

Options:

A.

Event stream


B.

Dataflow Gen2


C.

Streaming dataset


D.

Data pipeline


Question # 18:

You have a Fabric warehouse named DW1 that contains four staging tables named ProductCategory, ProductSubcategory, Product, and SalesOrder. ProductCategory, ProductSubcategory, and Product are used often in analytical queries.

You need to implement a star schema for DW1. The solution must minimize development effort.

Which design approach should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

[Exhibit: answer area]


Question # 19:

You have a Fabric workspace that contains a lakehouse named Lakehouse1.

You plan to create a data pipeline named Pipeline1 to ingest data into Lakehouse1. You will use a parameter named param1 to pass an external value into Pipeline1. The param1 parameter has a data type of int.

You need to ensure that the pipeline expression returns param1 as an int value.

How should you specify the parameter value?

Options:

A.

"@pipeline(). parameters. paraml"


B.

"@{pipeline().parameters.paraml}"


C.

"@{pipeline().parameters.[paraml]}"


D.

"@{pipeline().parameters.paraml}-


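For context, in the pipeline expression language, @pipeline().parameters.param1 evaluates to the parameter's declared type, so an int parameter is returned as an int, while the @{...} interpolation form always produces a string. A minimal annotated sketch (the typedValue and stringValue property names are hypothetical, and // comments are annotations only, not valid strict JSON):

{
  // Pipeline parameter declared as int.
  "parameters": {
    "param1": { "type": "int" }
  },

  // Evaluates to the declared type, so this yields an int (e.g. 42):
  "typedValue": "@pipeline().parameters.param1",

  // String interpolation always yields a string (e.g. "42"):
  "stringValue": "@{pipeline().parameters.param1}"
}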
Question # 20:

You are building a data orchestration pattern by using a Fabric data pipeline named Dynamic Data Copy as shown in the exhibit. (Click the Exhibit tab.)

[Exhibit: Dynamic Data Copy pipeline]

Dynamic Data Copy does NOT use parametrization.

You need to configure the ForEach activity to receive the list of tables to be copied.

How should you complete the pipeline expression? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

[Exhibit: answer area]


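For context, when a pipeline does not use parametrization, the table list for a ForEach activity typically comes from a preceding Lookup activity and is referenced through that activity's output. A sketch of the common expression pattern (activity names are hypothetical, // comments are annotations only, and this shows the general shape rather than the graded answer):

{
  "name": "ForEachTable",
  "type": "ForEach",
  "dependsOn": [
    // Wait for the lookup that returns the list of tables.
    { "activity": "GetTableList", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    // A Lookup activity's result rows are exposed under .output.value.
    "items": {
      "value": "@activity('GetTableList').output.value",
      "type": "Expression"
    }
  }
}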