Google Cloud Associate Data Practitioner (ADP Exam) Associate-Data-Practitioner Question # 22 Topic 3 Discussion


Your organization has a petabyte of application logs stored as Parquet files in Cloud Storage. You need to quickly perform a one-time SQL-based analysis of the files and join them to data that already resides in BigQuery. What should you do?


A. Create a Dataproc cluster, and write a PySpark job to join the data from BigQuery to the files in Cloud Storage.

B. Launch a Cloud Data Fusion environment, use plugins to connect to BigQuery and Cloud Storage, and use the SQL join operation to analyze the data.

C. Create external tables over the files in Cloud Storage, and perform SQL joins to tables in BigQuery to analyze the data.

D. Use the bq load command to load the Parquet files into BigQuery, and perform SQL joins to analyze the data.
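For option C, a minimal sketch of what the workflow could look like in BigQuery SQL. The dataset name (`analytics`), bucket path (`gs://my-bucket/logs/`), table names, and column names (`user_id`, `severity`) are all hypothetical placeholders, not from the question:

```sql
-- Define an external table over the Parquet files in Cloud Storage.
-- No data is loaded or copied; BigQuery reads the files in place,
-- which suits a one-time analysis of a petabyte of logs.
CREATE EXTERNAL TABLE analytics.app_logs
OPTIONS (
  format = 'PARQUET',
  uris = ['gs://my-bucket/logs/*.parquet']  -- hypothetical bucket path
);

-- Join the external table to a native BigQuery table in ordinary SQL.
-- Column names here are illustrative only.
SELECT
  u.user_id,
  COUNT(*) AS error_count
FROM analytics.app_logs AS l
JOIN analytics.users AS u
  ON l.user_id = u.user_id
WHERE l.severity = 'ERROR'
GROUP BY u.user_id;
```

This avoids both the cluster setup of option A and the load time and storage cost of option D, since the Parquet files never leave Cloud Storage.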
