Databricks Certified Associate Developer for Apache Spark 3.5-Python Databricks-Certified-Associate-Developer-for-Apache-Spark-3.5 Question # 18 Topic 2 Discussion

Question #: 18
Topic #: 2

A data engineer uses a broadcast variable to share a DataFrame containing millions of rows across executors for lookup purposes. What will be the outcome?


A. The job may fail if the memory on each executor is not large enough to accommodate the DataFrame being broadcast.

B. The job may fail if the executors do not have enough CPU cores to process the broadcast dataset.

C. The job will hang indefinitely as Spark struggles to distribute and serialize such a large broadcast variable to all executors.

D. The job may fail because the driver does not have enough CPU cores to serialize the large DataFrame.

