Databricks Certified Associate Developer for Apache Spark 3.0 Exam Databricks-Certified-Associate-Developer-for-Apache-Spark-3.0 Question # 46 Topic 5 Discussion

Which of the following describes how Spark achieves fault tolerance?


A.

Spark helps fast recovery of data in case of a worker fault by providing the MEMORY_AND_DISK storage level option.


B.

If an executor on a worker node fails while calculating an RDD, that RDD can be recomputed by another executor using the lineage.


C.

Spark builds a fault-tolerant layer on top of the legacy RDD data system, which by itself is not fault tolerant.


D.

Due to the mutability of DataFrames after transformations, Spark reproduces them using observed lineage in case of worker node failure.


E.

Spark is only fault-tolerant if this feature is specifically enabled via the spark.fault_recovery.enabled property.


Chosen Answer: B

Spark tracks each RDD's lineage, the graph of transformations used to build it. If an executor on a worker node fails while computing an RDD, the lost partitions can be recomputed by another executor by replaying that lineage, so no replication of intermediate data is required.