Pass the Databricks Certification Databricks-Certified-Associate-Developer-for-Apache-Spark-3.0 questions and answers with CertsForce

Viewing page 2 out of 6 pages
Viewing questions 11-20
Question #11:

Which of the following code blocks stores a part of the data in DataFrame itemsDf on executors?

Options:

A.

itemsDf.cache().count()


B.

itemsDf.cache(eager=True)


C.

cache(itemsDf)


D.

itemsDf.cache().filter()


E.

itemsDf.rdd.storeCopy()


Expert Solution
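For context on this item: DataFrame.cache() in PySpark takes no arguments and is lazy, so nothing is stored on the executors until an action materializes the DataFrame. A minimal, hedged sketch (the stand-in data and variable names are illustrative, not taken from the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
itemsDf = spark.range(10).withColumnRenamed("id", "itemId")  # stand-in data, not the exam's itemsDf

itemsDf.cache()              # lazy: only marks the DataFrame for caching
itemsDf.count()              # action: runs a job that actually populates the cache on executors
print(itemsDf.storageLevel)  # shows the storage level now in effect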
Question #12:

The code block shown below should return the number of columns in the CSV file stored at location filePath. From the CSV file, only lines that do not start with a # character should be read. Choose the answer that correctly fills the blanks in the code block to accomplish this.

Code block:

__1__(__2__.__3__.csv(filePath, __4__).__5__)

Options:

A.

1. size

2. spark

3. read()

4. escape='#'

5. columns


B.

1. DataFrame

2. spark

3. read()

4. escape='#'

5. shape[0]


C.

1. len

2. pyspark

3. DataFrameReader

4. comment='#'

5. columns


D.

1. size

2. pyspark

3. DataFrameReader

4. comment='#'

5. columns


E.

1. len

2. spark

3. read

4. comment='#'

5. columns


Expert Solution
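As background for this item: the DataFrameReader's csv() method accepts a comment option that skips lines beginning with the given character, spark.read is a property (not a method call), and the columns attribute of a DataFrame is a plain Python list. A hedged sketch, assuming filePath points at some CSV file:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
filePath = "/tmp/example.csv"  # assumption: any CSV file, possibly with lines starting with '#'

# comment='#' drops lines that start with '#'; .columns is a list, so len() gives the column count.
numColumns = len(spark.read.csv(filePath, comment='#').columns)
print(numColumns)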
Question #13:

Which of the following code blocks returns a DataFrame where columns predError and productId are removed from DataFrame transactionsDf?

Sample of DataFrame transactionsDf:

+-------------+---------+-----+-------+---------+----+
|transactionId|predError|value|storeId|productId|   f|
+-------------+---------+-----+-------+---------+----+
|            1|        3|    4|     25|        1|null|
|            2|        6|    7|      2|        2|null|
|            3|        3| null|     25|        3|null|
+-------------+---------+-----+-------+---------+----+

Options:

A.

transactionsDf.withColumnRemoved("predError", "productId")


B.

transactionsDf.drop(["predError", "productId", "associateId"])


C.

transactionsDf.drop("predError", "productId", "associateId")


D.

transactionsDf.dropColumns("predError", "productId", "associateId")


E.

transactionsDf.drop(col("predError", "productId"))


Expert Solution
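For reference, DataFrame.drop() takes column names (or Column objects) as separate positional arguments and silently ignores string names that are not present in the DataFrame. A small hedged sketch with made-up data shaped like the sample above:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
transactionsDf = spark.createDataFrame(
    [(1, 3, 4, 25, 1, None), (2, 6, 7, 2, 2, None)],
    schema="transactionId INT, predError INT, value INT, storeId INT, productId INT, f STRING",
)

# drop() accepts column names as separate arguments; an unknown name such as
# "associateId" is ignored rather than raising an error.
reducedDf = transactionsDf.drop("predError", "productId", "associateId")
print(reducedDf.columns)  # ['transactionId', 'value', 'storeId', 'f']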
Question #14:

The code block shown below should return a one-column DataFrame where the column storeId is converted to string type. Choose the answer that correctly fills the blanks in the code block to accomplish this.

transactionsDf.__1__(__2__.__3__(__4__))

Options:

A.

1. select

2. col("storeId")

3. cast

4. StringType


B.

1. select

2. col("storeId")

3. as

4. StringType


C.

1. cast

2. "storeId"

3. as

4. StringType()


D.

1. select

2. col("storeId")

3. cast

4. StringType()


E.

1. select

2. storeId

3. cast

4. StringType()


Expert Solution
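For context, casting inside a select typically combines col() with Column.cast(), and cast() expects either a DataType instance such as StringType() or a type-name string such as "string"; passing the bare StringType class without parentheses is not valid. A hedged sketch with toy data:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()
transactionsDf = spark.createDataFrame([(25,), (2,)], ["storeId"])  # toy single-column data

# Either a DataType instance or a type-name string works as the cast target.
storeIdAsString = transactionsDf.select(col("storeId").cast(StringType()))
storeIdAsString.printSchema()  # storeId: string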
Question #15:

Which of the following code blocks stores DataFrame itemsDf in executor memory and, if insufficient memory is available, serializes it and saves it to disk?

Options:

A.

itemsDf.persist(StorageLevel.MEMORY_ONLY)


B.

itemsDf.cache(StorageLevel.MEMORY_AND_DISK)


C.

itemsDf.store()


D.

itemsDf.cache()


E.

itemsDf.write.option('destination', 'memory').save()


Expert Solution
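Background on the APIs involved: DataFrame.cache() takes no arguments and uses the DataFrame default storage level, while persist() optionally accepts an explicit pyspark.StorageLevel. A hedged sketch showing both calls (stand-in data, not the exam's itemsDf):

from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
itemsDf = spark.range(100)  # stand-in DataFrame

# cache() takes no arguments and applies the default DataFrame storage level.
itemsDf.cache()
print(itemsDf.storageLevel)

# persist() lets you choose a level explicitly; MEMORY_AND_DISK spills to disk
# when memory is insufficient. (Matching a level to the question is left to the reader.)
itemsDf.unpersist()
itemsDf.persist(StorageLevel.MEMORY_AND_DISK)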
Question #16:

The code block displayed below contains an error. The code block should arrange the rows of DataFrame transactionsDf using information from two columns in an ordered fashion, arranging first by column value, showing smaller numbers at the top and greater numbers at the bottom, and then by column predError, for which all values should be arranged in the inverse way of the order of items in column value. Find the error.

Code block:

transactionsDf.orderBy('value', asc_nulls_first(col('predError')))

Options:

A.

Two orderBy statements with calls to the individual columns should be chained, instead of having both columns in one orderBy statement.


B.

Column value should be wrapped by the col() operator.


C.

Column predError should be sorted in a descending way, putting nulls last.


D.

Column predError should be sorted by desc_nulls_first() instead.


E.

Instead of orderBy, sort should be used.


Expert Solution
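For reference, orderBy() sorts ascending by default, and a per-column direction can be set with asc()/desc() or their null-handling variants such as desc_nulls_first(). A hedged sketch of mixing sort directions on toy data (illustrative only, not presented as the corrected exam code):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, asc, desc_nulls_first

spark = SparkSession.builder.getOrCreate()
transactionsDf = spark.createDataFrame(
    [(4, 3), (7, 6), (None, 3)],
    schema="value INT, predError INT",
)

# Ascending on one column, descending with nulls first on another; a plain column
# name defaults to ascending, so strings and sort expressions can be mixed.
sortedDf = transactionsDf.orderBy(asc("value"), desc_nulls_first(col("predError")))
sortedDf.show()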
Question #17:

Which of the following code blocks creates a new one-column, two-row DataFrame dfDates with column date of type timestamp?

Options:

A.

1.dfDates = spark.createDataFrame(["23/01/2022 11:28:12","24/01/2022 10:58:34"], ["date"])

2.dfDates = dfDates.withColumn("date", to_timestamp("dd/MM/yyyy HH:mm:ss", "date"))


B.

1.dfDates = spark.createDataFrame([("23/01/2022 11:28:12",),("24/01/2022 10:58:34",)], ["date"])

2.dfDates = dfDates.withColumnRenamed("date", to_timestamp("date", "yyyy-MM-dd HH:mm:ss"))


C.

1.dfDates = spark.createDataFrame([("23/01/2022 11:28:12",),("24/01/2022 10:58:34",)], ["date"])

2.dfDates = dfDates.withColumn("date", to_timestamp("date", "dd/MM/yyyy HH:mm:ss"))


D.

1.dfDates = spark.createDataFrame(["23/01/2022 11:28:12","24/01/2022 10:58:34"], ["date"])

2.dfDates = dfDates.withColumnRenamed("date", to_datetime("date", "yyyy-MM-dd HH:mm:ss"))


E.

1.dfDates = spark.createDataFrame([("23/01/2022 11:28:12",),("24/01/2022 10:58:34",)], ["date"])


Expert Solution
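As background, spark.createDataFrame() needs row-like objects (for a single-column DataFrame, one-element tuples; a bare list of strings cannot be turned into a column), and to_timestamp(column, format) parses strings with a pattern that must match how the data is written. A hedged sketch:

from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp

spark = SparkSession.builder.getOrCreate()

# Rows are one-element tuples, yielding a single column named "date".
dfDates = spark.createDataFrame(
    [("23/01/2022 11:28:12",), ("24/01/2022 10:58:34",)], ["date"]
)

# to_timestamp takes the column first and a pattern that matches the stored strings.
dfDates = dfDates.withColumn("date", to_timestamp("date", "dd/MM/yyyy HH:mm:ss"))
dfDates.printSchema()  # date: timestamp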
Question #18:

Which of the following describes characteristics of the Dataset API?

Options:

A.

The Dataset API does not support unstructured data.


B.

In Python, the Dataset API mainly resembles Pandas' DataFrame API.


C.

In Python, the Dataset API's schema is constructed via type hints.


D.

The Dataset API is available in Scala, but it is not available in Python.


E.

The Dataset API does not provide compile-time type safety.


Expert Solution
Question #19:

In which order should the code blocks shown below be run in order to create a DataFrame that shows the mean of column predError of DataFrame transactionsDf per column storeId and productId, where productId should be either 2 or 3 and the returned DataFrame should be sorted in ascending order by column storeId, leaving out any nulls in that column?

DataFrame transactionsDf:

+-------------+---------+-----+-------+---------+----+
|transactionId|predError|value|storeId|productId|   f|
+-------------+---------+-----+-------+---------+----+
|            1|        3|    4|     25|        1|null|
|            2|        6|    7|      2|        2|null|
|            3|        3| null|     25|        3|null|
|            4|     null| null|      3|        2|null|
|            5|     null| null|   null|        2|null|
|            6|        3|    2|     25|        2|null|
+-------------+---------+-----+-------+---------+----+

1. .mean("predError")

2. .groupBy("storeId")

3. .orderBy("storeId")

4. transactionsDf.filter(transactionsDf.storeId.isNotNull())

5. .pivot("productId", [2, 3])

Options:

A.

4, 5, 2, 3, 1


B.

4, 2, 1


C.

4, 1, 5, 2, 3


D.

4, 2, 5, 1, 3


E.

4, 3, 2, 5, 1


Expert Solution
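For context on how these pieces compose in general: filter() and orderBy() operate on DataFrames, pivot() is only available on the GroupedData object returned by groupBy(), and an aggregation such as mean() turns the grouped and pivoted data back into a DataFrame. A generic hedged sketch of that shape on toy data, not presented as the question's answer:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
transactionsDf = spark.createDataFrame(
    [(1, 3, 25, 1), (2, 6, 2, 2), (6, 3, 25, 2)],
    schema="transactionId INT, predError INT, storeId INT, productId INT",
)

# filter -> DataFrame, groupBy -> GroupedData, pivot -> GroupedData,
# mean -> DataFrame, orderBy -> DataFrame.
result = (transactionsDf
          .filter(transactionsDf.storeId.isNotNull())
          .groupBy("storeId")
          .pivot("productId", [2, 3])
          .mean("predError")
          .orderBy("storeId"))
result.show()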
Question #20:

Which of the following describes the conversion of a computational query into an execution plan in Spark?

Options:

A.

Spark uses the catalog to resolve the optimized logical plan.


B.

The catalog assigns specific resources to the optimized memory plan.


C.

The executed physical plan depends on a cost optimization from a previous stage.


D.

Depending on whether DataFrame API or SQL API are used, the physical plan may differ.


E.

The catalog assigns specific resources to the physical plan.


Expert Solution
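Background for this item: Catalyst turns a query into an unresolved logical plan, resolves it against the catalog, optimizes the logical plan, and then chooses among candidate physical plans based on cost before execution. The explain() method lets you inspect these stages; a hedged sketch:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(1000).filter("id % 2 = 0").groupBy().count()

# mode="extended" prints the parsed and analyzed logical plans, the optimized
# logical plan, and the physical plan selected for this query.
df.explain(mode="extended")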