Google Professional Data Engineer (Professional-Data-Engineer) Exam: Topic 3, Question #29 Discussion

Question #: 29
Topic #: 3

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?


A. Store the common data in BigQuery as partitioned tables.

B. Store the common data in BigQuery and expose authorized views.

C. Store the common data encoded as Avro in Google Cloud Storage.

D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.
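For readers comparing the options, the sketch below illustrates how option C would work in practice: the same Avro files in a Cloud Storage bucket can be registered as an external table in BigQuery and read directly by Spark on Dataproc, since Avro is a self-describing format both systems understand. This is a minimal, hypothetical sketch, not part of the question: the project, bucket, dataset, and table names are made up, and it assumes the google-cloud-bigquery Python client plus a Dataproc image that ships the Spark Avro connector.

    # Minimal sketch (hypothetical names): share Avro files in Cloud Storage
    # between BigQuery and Spark on Dataproc.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Expose the Avro files to BigQuery as an external table.
    external_config = bigquery.ExternalConfig("AVRO")
    external_config.source_uris = ["gs://flowlogistic-shared-data/shipments/*.avro"]

    table = bigquery.Table("my-project.analytics.shared_shipments")
    table.external_data_configuration = external_config
    client.create_table(table, exists_ok=True)

    # The same files can be read by a Spark job on Dataproc, e.g.:
    #
    #   df = spark.read.format("avro") \
    #       .load("gs://flowlogistic-shared-data/shipments/*.avro")
    #   df.groupBy("status").count().show()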

