
Google Cloud Associate Data Practitioner (ADP Exam) Associate-Data-Practitioner Question # 31 Topic 4 Discussion


Associate-Data-Practitioner Exam Topic 4 Question 31 Discussion:
Question #: 31
Topic #: 4

You work for a gaming company that collects real-time player activity data. This data is streamed into Pub/Sub and needs to be processed and loaded into BigQuery for analysis. The processing involves filtering, enriching, and aggregating the data before loading it into partitioned BigQuery tables. You need to design a pipeline that ensures low latency and high throughput while following a Google-recommended approach. What should you do?


A. Use Cloud Composer to orchestrate a workflow that reads the data from Pub/Sub, processes the data using a Python script, and writes it to BigQuery.

B. Use Dataproc to create an Apache Spark streaming job that reads the data from Pub/Sub, processes the data, and writes it to BigQuery.

C. Use Dataflow to create a streaming pipeline that reads the data from Pub/Sub, processes the data, and writes it to BigQuery using the streaming API.

D. Use Cloud Run functions to subscribe to the Pub/Sub topic, process the data, and write it to BigQuery using the streaming API.
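
Option C describes a Dataflow streaming pipeline, which is the pattern Google generally recommends for low-latency, high-throughput Pub/Sub-to-BigQuery processing. As an illustration only, a minimal Apache Beam (Python) sketch of such a pipeline might look like the following; the project, topic, table, and field names are hypothetical placeholders, and the partitioned destination table is assumed to already exist.

```python
# Illustrative Apache Beam (Python) streaming pipeline for the design in option C.
# All project, topic, dataset, table, and field names are hypothetical placeholders.
# Run on Dataflow with: --runner=DataflowRunner (plus project/region/temp_location flags).
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows


def enrich(event):
    # Example enrichment step: derive an extra attribute from the raw event.
    event["is_high_score"] = event.get("score", 0) > 1000
    return event


def run():
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as p:
        (
            p
            # Read the raw player-activity messages from the Pub/Sub topic.
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/player-activity")
            | "ParseJson" >> beam.Map(json.loads)
            # Filter out malformed events.
            | "FilterValid" >> beam.Filter(lambda e: "player_id" in e)
            | "Enrich" >> beam.Map(enrich)
            # Aggregate per player over one-minute fixed windows.
            | "Window" >> beam.WindowInto(FixedWindows(60))
            | "KeyByPlayer" >> beam.Map(lambda e: (e["player_id"], e.get("score", 0)))
            | "SumScores" >> beam.CombinePerKey(sum)
            | "ToRow" >> beam.Map(lambda kv: {"player_id": kv[0], "total_score": kv[1]})
            # Stream the aggregated rows into an existing partitioned BigQuery table.
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:gaming.player_activity_agg",
                method=beam.io.WriteToBigQuery.Method.STREAMING_INSERTS,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

In this sketch, pre-aggregating in Dataflow keeps the BigQuery ingestion volume low, and the fixed windows bound the latency of the aggregation step while the streaming insert method makes results queryable almost immediately.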


