
Pass the Google Cloud Certified Professional Data Engineer questions and answers with CertsForce

Viewing page 8 out of 8 pages
Viewing questions 71-80
Questions # 71:

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

Options:

A.

Ensure all the tables are included in global dataset.


B.

Ensure each table is included in a dataset for a region.


C.

Adjust the settings for each table to allow a related region-based security group view access.


D.

Adjust the settings for each view to allow a related region-based security group view access.


E.

Adjust the settings for each dataset to allow a related region-based security group view access.


Expert Solution
Questions # 72:

Which of the following IAM roles does your Compute Engine account require to be able to run pipeline jobs?

Options:

A.

dataflow.worker


B.

dataflow.compute


C.

dataflow.developer


D.

dataflow.viewer


Expert Solution
Questions # 73:

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?

Options:

A.

Use federated data sources, and check data in the SQL query.


B.

Enable BigQuery monitoring in Google Stackdriver and create an alert.


C.

Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.


D.

Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.


Expert Solution
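The dead-letter pattern in option D can be sketched in plain Python. This is only an illustration of the idea, not a real Apache Beam / Cloud Dataflow pipeline; the column count and function names are assumptions.

```python
# Minimal sketch of the dead-letter pattern: well-formed CSV rows go to the
# main output, malformed rows are kept in a separate "dead-letter" collection
# for later analysis instead of being dropped. Illustrative only.
import csv
import io

EXPECTED_COLUMNS = 3  # assumption: the daily dump has three columns


def split_rows(raw_csv):
    """Route valid rows to `good` and malformed rows to `dead_letter`."""
    good, dead_letter = [], []
    for row in csv.reader(io.StringIO(raw_csv)):
        if len(row) == EXPECTED_COLUMNS and all(field.strip() for field in row):
            good.append(row)
        else:
            dead_letter.append(row)  # preserved for debugging, not discarded
    return good, dead_letter


good, bad = split_rows("1,alice,10\n2,bob\n3,carol,30\n")
```

In a real Dataflow job the two outputs would be written to the main BigQuery table and a dead-letter table respectively, which is what makes this approach more robust than rejecting the whole load.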
Questions # 74:

You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?

Options:

A.

Update the current pipeline and use the drain flag.


B.

Update the current pipeline and provide the transform mapping JSON object.


C.

Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.


D.

Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.


Expert Solution
Questions # 75:

You are building a new real-time data warehouse for your company and will use Google BigQuery streaming inserts. There is no guarantee that data will be sent only once, but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data. Which query type should you use?

Options:

A.

Include ORDER BY DESC on the timestamp column and LIMIT to 1.


B.

Use GROUP BY on the unique ID column and timestamp column and SUM on the values.


C.

Use the LAG window function with PARTITION by unique ID along with WHERE LAG IS NOT NULL.


D.

Use the ROW_NUMBER window function with PARTITION by unique ID along with WHERE row equals 1.


Expert Solution
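The deduplication in option D (ROW_NUMBER() OVER (PARTITION BY unique_id ORDER BY timestamp DESC) ... WHERE row_num = 1) can be mimicked in plain Python. The field names below are illustrative assumptions.

```python
# Sketch of "keep one row per unique ID, preferring the newest timestamp",
# the same effect as ROW_NUMBER() ... PARTITION BY ... WHERE row = 1.
def dedup_latest(rows):
    """Keep only the newest event per unique_id."""
    latest = {}
    for row in rows:
        uid, ts = row["unique_id"], row["event_ts"]
        if uid not in latest or ts > latest[uid]["event_ts"]:
            latest[uid] = row
    return sorted(latest.values(), key=lambda r: r["unique_id"])


rows = [
    {"unique_id": 1, "event_ts": 100, "value": "a"},
    {"unique_id": 1, "event_ts": 100, "value": "a"},  # duplicate delivery
    {"unique_id": 2, "event_ts": 50, "value": "b"},
]
deduped = dedup_latest(rows)
```

This is why a unique ID plus an event timestamp is enough: duplicates from streaming-insert retries collapse into a single row at query time.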
Questions # 76:

You have enabled the free integration between Firebase Analytics and Google BigQuery. Firebase now automatically creates a new table daily in BigQuery in the format app_events_YYYYMMDD. You want to query all of the tables for the past 30 days in legacy SQL. What should you do?

Options:

A.

Use the TABLE_DATE_RANGE function


B.

Use the WHERE _PARTITIONTIME pseudo column


C.

Use WHERE date BETWEEN YYYY-MM-DD AND YYYY-MM-DD


D.

Use SELECT IF(date >= YYYY-MM-DD AND date <= YYYY-MM-DD)


Expert Solution
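A legacy SQL query along the lines of option A might look like the string below. The dataset name `firebase` and the selected columns are assumptions; the `app_events_` prefix comes from the question.

```python
# Sketch of a legacy SQL query using TABLE_DATE_RANGE to span the daily
# app_events_YYYYMMDD tables for the last 30 days. Dataset name is illustrative.
query = """
SELECT event_name, COUNT(*) AS events
FROM TABLE_DATE_RANGE([firebase.app_events_],
                      DATE_ADD(CURRENT_TIMESTAMP(), -30, 'DAY'),
                      CURRENT_TIMESTAMP())
GROUP BY event_name
"""
```

TABLE_DATE_RANGE takes a table prefix plus a start and end timestamp, so it matches date-sharded tables without listing them individually.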
Questions # 77:

You manage your company's BigQuery data warehouse. You need to implement a solution that enables the data science team to modify data for experiments without affecting the original tables, while minimizing additional storage costs. What should you do?

Options:

A.

Set up authorized views in a shared dataset that reference the original tables.


B.

Create snapshots of all the tables and restore them for the data science team to use.


C.

Create table clones of all the tables for the data science team to use.


D.

Create a separate dataset with full copies of all the tables for each member of the data science team.


Expert Solution
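Option C's table clones are created with a single DDL statement; a clone is a writable, zero-copy duplicate, so storage is billed only for data that later diverges from the base table. The dataset and table names below are illustrative.

```python
# Sketch of the BigQuery DDL for creating a table clone (names are made up).
# A clone starts at zero incremental storage cost and can be modified freely
# without touching the original table.
ddl = (
    "CREATE TABLE experiments.sales_clone "
    "CLONE warehouse.sales"
)
```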
Questions # 78:

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?

Options:

A.

Create a Google Cloud Dataflow job to process the data.


B.

Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.


C.

Create a Hadoop cluster on Google Compute Engine that uses persistent disks.


D.

Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.


E.

Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.


Expert Solution
Questions # 79:

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. You initially designed the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design?

Options:

A.

Re-write the application to load accumulated data every 2 minutes.


B.

Convert the streaming insert code to batch load for individual messages.


C.

Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.


D.

Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.


Expert Solution
Questions # 80:

You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages?

Options:

A.

The message body for the sensor event is too large.


B.

Your custom endpoint has an out-of-date SSL certificate.


C.

The Cloud Pub/Sub topic has too many messages published to it.


D.

Your custom endpoint is not acknowledging messages within the acknowledgement deadline.


Expert Solution
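The mechanism behind option D can be shown with a toy simulation: Pub/Sub redelivers any pushed message that is not acknowledged within the acknowledgement deadline, so a slow endpoint sees duplicates. The deadline value and function names below are illustrative, not the real Pub/Sub API.

```python
# Toy model of Pub/Sub push redelivery: if the endpoint takes longer than the
# ack deadline to respond, the message is delivered again. Illustrative only.
ACK_DEADLINE_S = 10  # assumption: a 10-second acknowledgement deadline


def deliver(messages, endpoint_latency_s):
    """Count deliveries per message over one redelivery attempt."""
    deliveries = {}
    for msg in messages:
        deliveries[msg] = 1
        if endpoint_latency_s > ACK_DEADLINE_S:  # ack arrived too late
            deliveries[msg] += 1  # Pub/Sub redelivers the unacked message
    return deliveries


slow = deliver(["sensor-1"], endpoint_latency_s=30)
fast = deliver(["sensor-1"], endpoint_latency_s=2)
```

The fix is to return a success response within the deadline (or raise the subscription's ack deadline), not to shrink the message body or rotate certificates.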