Pass the Databricks Certification Databricks-Certified-Professional-Data-Engineer Questions and Answers with CertsForce

Viewing page 1 out of 4 pages
Viewing questions 1-10 out of questions
Question # 1:

The data engineering team is migrating an enterprise system with thousands of tables and views into the Lakehouse. They plan to implement the target architecture using a series of bronze, silver, and gold tables. Bronze tables will almost exclusively be used by production data engineering workloads, while silver tables will be used to support both data engineering and machine learning workloads. Gold tables will largely serve business intelligence and reporting purposes. While personally identifiable information (PII) exists in all tiers of data, pseudonymization and anonymization rules are in place for all data at the silver and gold levels.

The organization is interested in reducing security concerns while maximizing the ability to collaborate across diverse teams.

Which statement exemplifies best practices for implementing this system?

Options:

A.

Isolating tables in separate databases based on data quality tiers allows for easy permissions management through database ACLs and allows physical separation of default storage locations for managed tables.


B.

Because databases on Databricks are merely a logical construct, choices around database organization do not impact security or discoverability in the Lakehouse.


C.

Storing all production tables in a single database provides a unified view of all data assets available throughout the Lakehouse, simplifying discoverability by granting all users view privileges on this database.


D.

Working in the default Databricks database provides the greatest security when working with managed tables, as these will be created in the DBFS root.


E.

Because all tables must live in the same storage containers used for the database they're created in, organizations should be prepared to create between dozens and thousands of databases depending on their data isolation requirements.


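For context, a minimal Databricks SQL sketch of the tier-per-database pattern this question describes; the database names, storage paths, and group names below are illustrative assumptions.

-- Hypothetical layout: one database per quality tier, each with its own
-- default storage location for managed tables.
CREATE DATABASE IF NOT EXISTS bronze LOCATION 'dbfs:/mnt/lakehouse/bronze';
CREATE DATABASE IF NOT EXISTS silver LOCATION 'dbfs:/mnt/lakehouse/silver';
CREATE DATABASE IF NOT EXISTS gold LOCATION 'dbfs:/mnt/lakehouse/gold';

-- Database-level ACLs grant each team access only to the tiers it needs,
-- keeping raw PII in bronze restricted to production pipelines.
GRANT USAGE, SELECT ON DATABASE silver TO `data-engineers`;
GRANT USAGE, SELECT ON DATABASE silver TO `ml-team`;
GRANT USAGE, SELECT ON DATABASE gold TO `bi-analysts`;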
Question # 2:

A data architect has heard about Delta Lake's built-in versioning and time travel capabilities. For auditing purposes, they have a requirement to maintain a full record of all valid street addresses as they appear in the customers table.

The architect is interested in implementing a Type 1 table, overwriting existing records with new values and relying on Delta Lake time travel to support long-term auditing. A data engineer on the project feels that a Type 2 table will provide better performance and scalability.

Which piece of information is critical to this decision?

Options:

A.

Delta Lake time travel does not scale well in cost or latency to provide a long-term versioning solution.


B.

Delta Lake time travel cannot be used to query previous versions of these tables because Type 1 changes modify data files in place.


C.

Shallow clones can be combined with Type 1 tables to accelerate historic queries for long-term versioning.


D.

Data corruption can occur if a query fails in a partially completed state because Type 2 tables require setting multiple fields in a single update.


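For reference, a minimal sketch of the Type 1 pattern under discussion, assuming a hypothetical customers table with customer_id and street_address columns; the values and version number are illustrative.

-- Type 1: overwrite the current value in place...
UPDATE customers SET street_address = '123 New St' WHERE customer_id = 42;

-- ...and rely on Delta Lake time travel to audit older values.
SELECT street_address FROM customers VERSION AS OF 10 WHERE customer_id = 42;

How far back VERSION AS OF can reach depends on which data files are still retained, which is governed by the table's retention settings and VACUUM schedule.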
Question # 3:

Where in the Spark UI can one diagnose a performance problem induced by not leveraging predicate push-down?

Options:

A.

In the Executor's log file, by grepping for "predicate push-down"


B.

In the Stage's Detail screen, in the Completed Stages table, by noting the size of data read from the Input column


C.

In the Storage Detail screen, by noting which RDDs are not stored on disk


D.

In the Delta Lake transaction log, by noting the column statistics


E.

In the Query Detail screen, by interpreting the Physical Plan


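As a complement to the Spark UI, the physical plan can also be inspected with EXPLAIN; a minimal sketch against a hypothetical events table and filter column:

EXPLAIN FORMATTED
SELECT * FROM events WHERE event_date = '2024-06-01';
-- In the physical plan, a predicate that is pushed down appears on the scan
-- node (for example as PushedFilters); if it is missing, the filter is only
-- applied after the full input has been read.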
Question # 4:

A Delta Lake table was created with the below query:

(Image: CREATE TABLE statement for prod.sales_by_store; not reproduced here)

Consider the following query:

DROP TABLE prod.sales_by_store

If this statement is executed by a workspace admin, which result will occur?

Options:

A.

Nothing will occur until a COMMIT command is executed.


B.

The table will be removed from the catalog but the data will remain in storage.


C.

The table will be removed from the catalog and the data will be deleted.


D.

An error will occur because Delta Lake prevents the deletion of production data.


E.

Data will be marked as deleted but still recoverable with Time Travel.


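Whether DROP TABLE also deletes the underlying files depends on how the table was defined (managed vs. external); a minimal sketch for checking this before dropping, using the table name from the question:

-- Inspect the table metadata first.
DESCRIBE TABLE EXTENDED prod.sales_by_store;
-- The output's Type row reports MANAGED or EXTERNAL and the Location row shows
-- where the data files live; dropping an external table removes only the
-- catalog entry and leaves the files in storage.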
Question # 5:

A small company based in the United States has recently contracted a consulting firm in India to implement several new data engineering pipelines to power artificial intelligence applications. All the company's data is stored in regional cloud storage in the United States.

The workspace administrator at the company is uncertain about where the Databricks workspace used by the contractors should be deployed.

Assuming that all data governance considerations are accounted for, which statement accurately informs this decision?

Options:

A.

Databricks runs HDFS on cloud volume storage; as such, cloud virtual machines must be deployed in the region where the data is stored.


B.

Databricks workspaces do not rely on any regional infrastructure; as such, the decision should be made based upon what is most convenient for the workspace administrator.


C.

Cross-region reads and writes can incur significant costs and latency; whenever possible, compute should be deployed in the same region the data is stored.


D.

Databricks leverages user workstations as the driver during interactive development; as such, users should always use a workspace deployed in a region they are physically near.


E.

Databricks notebooks send all executable code from the user's browser to virtual machines over the open internet; whenever possible, choosing a workspace region near the end users is the most secure.


Question # 6:

A junior data engineer seeks to leverage Delta Lake's Change Data Feed functionality to create a Type 1 table representing all of the values that have ever been valid for all rows in a bronze table created with the property delta.enableChangeDataFeed = true. They plan to execute the following code as a daily job:

(Image: daily job code; not reproduced here)

Which statement describes the execution and results of running the above query multiple times?

Options:

A.

Each time the job is executed, newly updated records will be merged into the target table, overwriting previous values with the same primary keys.


B.

Each time the job is executed, the entire available history of inserted or updated records will be appended to the target table, resulting in many duplicate entries.


C.

Each time the job is executed, the target table will be overwritten using the entire history of inserted or updated records, giving the desired result.


D.

Each time the job is executed, the differences between the original and current versions are calculated; this may result in duplicate entries for some records.


E.

Each time the job is executed, only those records that have been inserted or updated since the last execution will be appended to the target table giving the desired result.


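For context, a minimal Databricks SQL sketch of reading the Change Data Feed from a bronze table like the one described above; the table name and the starting version of 0 are assumptions for illustration.

-- Return every insert/update/delete recorded since version 0.
SELECT * FROM table_changes('bronze', 0);
-- Each change row carries _change_type, _commit_version, and _commit_timestamp
-- metadata columns; a job that always starts from version 0 reads the entire
-- recorded history on every run rather than only the changes since its last run.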
Question # 7:

An hourly batch job is configured to ingest data files from a cloud object storage container where each batch represents all records produced by the source system in a given hour. The batch job to process these records into the Lakehouse is sufficiently delayed to ensure no late-arriving data is missed. The user_id field represents a unique key for the data, which has the following schema:

user_id BIGINT, username STRING, user_utc STRING, user_region STRING, last_login BIGINT, auto_pay BOOLEAN, last_updated BIGINT

New records are all ingested into a table named account_history, which maintains a full record of all data in the same schema as the source. The next table in the system is named account_current and is implemented as a Type 1 table representing the most recent value for each unique user_id.

Assuming there are millions of user accounts and tens of thousands of records processed hourly, which implementation can be used to efficiently update the described account_current table as part of each hourly batch job?

Options:

A.

Use Auto Loader to subscribe to new files in the account history directory; configure a Structured Streaming trigger-once job to batch update newly detected files into the account current table.


B.

Overwrite the account current table with each batch using the results of a query against the account history table grouping by user id and filtering for the max value of last updated.


C.

Filter records in account history using the last updated field and the most recent hour processed, as well as the max last login by user id; write a merge statement to update or insert the most recent value for each user id.


D.

Use Delta Lake version history to get the difference between the latest version of account history and one version prior, then write these records to account current.


E.

Filter records in account history using the last updated field and the most recent hour processed, making sure to deduplicate on username; write a merge statement to update or insert the most recent value for each username.


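For reference, a minimal sketch of the merge pattern referenced in the options, using the schema from the question; the hour boundary used in the last_updated filter is an illustrative assumption.

MERGE INTO account_current AS t
USING (
  -- Most recent record per user_id from the latest processed hour.
  SELECT user_id, username, user_utc, user_region, last_login, auto_pay, last_updated
  FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY last_updated DESC) AS rn
    FROM account_history
    WHERE last_updated >= unix_timestamp('2024-06-01 10:00:00')
  ) ranked
  WHERE rn = 1
) AS s
ON t.user_id = s.user_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

Restricting the source to the newest hour keeps the merge incremental instead of rescanning all of account_history on every run.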
Question # 8:

When evaluating the Ganglia Metrics for a given cluster with 3 executor nodes, which indicator would signal proper utilization of the VMs' resources?

Options:

A.

The five Minute Load Average remains consistent/flat


B.

Bytes Received never exceeds 80 million bytes per second


C.

Network I/O never spikes


D.

Total Disk Space remains constant


E.

CPU Utilization is around 75%


Question # 9:

The Databricks CLI is used to trigger a run of an existing job by passing the job_id parameter. The response indicating that the job run request has been submitted successfully includes a field named run_id.

Which statement describes what the number alongside this field represents?

Options:

A.

The job_id is returned in this field.


B.

The job_id and the number of times the job has been run are concatenated and returned.


C.

The number of times the job definition has been run in the workspace.


D.

The globally unique ID of the newly triggered run.


Question # 10:

The business reporting team requires that data for their dashboards be updated every hour. The total processing time for the pipeline that extracts, transforms, and loads the data for their dashboards is 10 minutes.

Assuming normal operating conditions, which configuration will meet their service-level agreement requirements with the lowest cost?

Options:

A.

Schedule a job to execute the pipeline once an hour on a dedicated interactive cluster.


B.

Schedule a Structured Streaming job with a trigger interval of 60 minutes.


C.

Schedule a job to execute the pipeline once per hour on a new job cluster.


D.

Configure a job that executes every time new data lands in a given directory.

