Google Cloud Associate Data Practitioner (ADP Exam) Associate-Data-Practitioner Question # 24 Topic 3 Discussion
Associate-Data-Practitioner Exam Topic 3 Question 24 Discussion:
Question #: 24
Topic #: 3

Your organization needs to implement near real-time analytics for thousands of events arriving each second in Pub/Sub. The incoming messages require transformations. You need to configure a pipeline that processes, transforms, and loads the data into BigQuery while minimizing development time. What should you do?


A. Use a Google-provided Dataflow template to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

B. Create a Cloud Data Fusion instance and configure Pub/Sub as a source. Use Data Fusion to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

C. Load the data from Pub/Sub into Cloud Storage using a Cloud Storage subscription. Create a Dataproc cluster, use PySpark to perform transformations in Cloud Storage, and write the results to BigQuery.

D. Use Cloud Run functions to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.
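For context on option A: Google provides a prebuilt streaming Dataflow template that reads from a Pub/Sub subscription and writes to BigQuery, so no pipeline code has to be written. A minimal launch sketch is below; all resource names (project, subscription, dataset, table, bucket) are placeholders, and the optional JavaScript UDF parameters assume you have uploaded a transform function to Cloud Storage.

```shell
# Launch the Google-provided "Pub/Sub Subscription to BigQuery"
# streaming template. Placeholders: my-project, events-sub,
# analytics.events, my-bucket.
gcloud dataflow jobs run pubsub-to-bq-job \
  --region=us-central1 \
  --gcs-location=gs://dataflow-templates-us-central1/latest/PubSub_Subscription_to_BigQuery \
  --parameters=inputSubscription=projects/my-project/subscriptions/events-sub,outputTableSpec=my-project:analytics.events,javascriptTextTransformGcsPath=gs://my-bucket/transform.js,javascriptTextTransformFunctionName=transform
```

The `javascriptTextTransform*` parameters let the template apply a user-defined function to each message, which covers the required transformations without custom pipeline development.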
