Databricks Certified Data Engineer Professional Exam Databricks-Certified-Professional-Data-Engineer Question # 12 Topic 2 Discussion


A Spark job is taking longer than expected. Using the Spark UI, a data engineer notes that for tasks in a particular stage, the minimum and median task durations are roughly the same, but the maximum duration is roughly 100 times as long as the minimum.

Which situation is causing increased duration of the overall job?

A. Task queueing resulting from improper thread pool assignment.

B. Spill resulting from attached volume storage being too small.

C. Network latency due to some cluster nodes being in different regions from the source data.

D. Skew caused by more data being assigned to a subset of Spark partitions.

E. Credential validation errors while pulling data from an external system.
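The symptom in the question points to skew: when one partition holds far more data than the rest, its task dominates the stage. A minimal sketch of the duration summary the Spark UI would show (the numbers here are hypothetical, assuming task time scales with partition size):

```python
import statistics

# Hypothetical per-task durations (seconds) for one stage: 199 evenly
# sized partitions, plus one "hot" partition that received ~100x the
# data -- the skew scenario described in answer D.
task_durations = [1.0] * 199 + [100.0]

t_min = min(task_durations)
t_med = statistics.median(task_durations)
t_max = max(task_durations)

# The skew signature from the Spark UI stage summary: min and median
# are roughly equal, while max is orders of magnitude larger. The
# stage (and the job) cannot finish until that one slow task does.
print(f"min={t_min}, median={t_med}, max={t_max}")
print(f"max/min ratio = {t_max / t_min:.0f}")
```

Spill (B) or network latency (C) would tend to slow many tasks rather than a single outlier, which is why the min-equals-median, max-far-larger pattern specifically indicates skew.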

