Pass the Google Cloud Certified Professional-Data-Engineer Questions and Answers with CertsForce

Question # 1:

MJTelco is building a custom interface to share data. They have these requirements:

They need to do aggregations over their petabyte-scale datasets.

They need to scan specific time range rows with a very fast response time (milliseconds).

Which combination of Google Cloud Platform products should you recommend?

Options:

A.

Cloud Datastore and Cloud Bigtable


B.

Cloud Bigtable and Cloud SQL


C.

BigQuery and Cloud Bigtable


D.

BigQuery and Cloud Storage


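For illustration only, here is a minimal sketch of how the two access patterns map onto the products named in option C, assuming a hypothetical project my-project, a BigQuery table telemetry.events, and a Bigtable table keyed by a time prefix (all names and the row-key layout are placeholders, not part of the question):

```python
# Illustrative sketch only: contrasts a petabyte-scale aggregation (BigQuery)
# with a millisecond row-range scan (Bigtable). All resource names and the
# row-key layout below are hypothetical.
from google.cloud import bigquery, bigtable

# Analytical aggregation over the full dataset.
bq = bigquery.Client(project="my-project")
for r in bq.query(
    "SELECT device_id, AVG(latency_ms) AS avg_latency "
    "FROM `my-project.telemetry.events` "
    "GROUP BY device_id"
).result():
    print(r.device_id, r.avg_latency)

# Low-latency scan of a specific time range, using a time-prefixed row key.
bt = bigtable.Client(project="my-project")
table = bt.instance("telemetry-instance").table("events")
for row in table.read_rows(start_key=b"20240101#", end_key=b"20240102#"):
    print(row.row_key)
```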
Question # 2:

You need to compose visualizations for operations teams with the following requirements:

Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute)

The report must not be more than 3 hours delayed from live data.

The actionable report should only show suboptimal links.

Most suboptimal links should be sorted to the top.

Suboptimal links can be grouped and filtered by regional geography.

User response time to load the report must be <5 seconds.

You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month. What should you do?

Options:

A.

Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.


B.

Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.


C.

Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs.


D.

Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.


Question # 3:

Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day’s events. They also want to use streaming ingestion. What should you do?

Options:

A.

Create a table called tracking_table and include a DATE column.


B.

Create a partitioned table called tracking_table and include a TIMESTAMP column.


C.

Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.


D.

Create a table called tracking_table with a TIMESTAMP column to represent the day.


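For context, a hedged sketch of what a single partitioned table with a TIMESTAMP column (option B's design) looks like when created with the BigQuery Python client; the project, dataset, and column names are assumptions made for the example:

```python
# Sketch: one day-partitioned table so that daily queries prune to a single
# partition instead of scanning everything. All names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("device_id", "STRING"),
    bigquery.SchemaField("payload", "STRING"),
]

table = bigquery.Table("my-project.telemetry.tracking_table", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",  # partition on the event timestamp
)
client.create_table(table)

# A day's analysis then scans only one partition, e.g.:
#   SELECT * FROM `my-project.telemetry.tracking_table`
#   WHERE DATE(event_ts) = "2024-01-01"
```

Streamed rows are eventually placed in the partition matching their timestamp value, so a fine-grained per-day query scans roughly one day of data rather than the whole table.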
Question # 4:

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

Options:

A.

Rowkey: date#device_id
Column data: data_point


B.

Rowkey: date
Column data: device_id, data_point


C.

Rowkey: device_id
Column data: date, data_point


D.

Rowkey: data_point
Column data: device_id, date


E.

Rowkey: date#data_point
Column data: device_id


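As an illustration of how a composite row key such as option A's date#device_id serves the most common query ("all data for a given device for a given day"), here is a rough sketch; the instance, table, column family, and key values are hypothetical:

```python
# Sketch: with a date#device_id row key, "all data for one device for one day"
# is a single-row lookup, and "all devices for one day" is a prefix scan.
# Instance, table, column family, and key values are hypothetical.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("telemetry-instance").table("device_records")

# One device, one day: the whole day's samples live in one row.
row = table.read_row(b"20240101#device-42")
if row is not None:
    for cell in row.cells["data"][b"data_point"]:  # one cell per 15-minute sample
        print(cell.timestamp, cell.value)

# All devices for one day: a contiguous scan over the "20240101#" prefix.
for day_row in table.read_rows(start_key=b"20240101#", end_key=b"20240102#"):
    print(day_row.row_key)
```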
Question # 5:

MJTelco’s Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?

Options:

A.

The zone


B.

The number of workers


C.

The disk size per worker


D.

The maximum number of workers


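For reference, a hedged sketch of where the worker-scaling settings live when an Apache Beam pipeline is submitted to Dataflow from Python; the project, region, bucket, and the ceiling of 50 workers are placeholder values:

```python
# Sketch: with Dataflow autoscaling, the service grows the worker pool up to
# max_num_workers as load increases; raising that ceiling is what lets the
# job scale further. Project, region, bucket, and 50 are hypothetical values.
import apache_beam as beam
from apache_beam.options.pipeline_options import (
    GoogleCloudOptions,
    PipelineOptions,
    WorkerOptions,
)

options = PipelineOptions(runner="DataflowRunner")
options.view_as(GoogleCloudOptions).project = "my-project"
options.view_as(GoogleCloudOptions).region = "us-central1"
options.view_as(GoogleCloudOptions).temp_location = "gs://my-bucket/tmp"
options.view_as(WorkerOptions).max_num_workers = 50  # autoscaling ceiling

with beam.Pipeline(options=options) as pipeline:
    # The pipeline graph itself is unchanged; only the options differ.
    _ = pipeline | beam.Create(["placeholder"]) | beam.Map(lambda record: record)
```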
Question # 6:

You need to compose visualizations for operations teams with the following requirements:

Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute).

The report must not be more than 3 hours delayed from live data.

The actionable report should only show suboptimal links.

Most suboptimal links should be sorted to the top.

Suboptimal links can be grouped and filtered by regional geography.

User response time to load the report must be <5 seconds.

Which approach meets the requirements?

Options:

A.

Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.


B.

Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.


C.

Load the data into Google Cloud Datastore tables, write a Google App Engine application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google Charts and visualization API.


D.

Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.


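As a rough sketch of the kind of BigQuery query a report like the one option D describes could be built on (derive a link-quality metric, keep only suboptimal rows, worst first), with hypothetical dataset, column, and threshold values:

```python
# Sketch: derive a quality metric per link over the last 6 weeks, keep only
# suboptimal links, and sort the worst to the top. Table, columns, and the
# 0.95 threshold are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

SUBOPTIMAL_LINKS_SQL = """
SELECT
  region,
  link_id,
  AVG(throughput_mbps / expected_mbps) AS quality_ratio
FROM `my-project.telemetry.link_samples`
WHERE sample_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 42 DAY)
GROUP BY region, link_id
HAVING quality_ratio < 0.95   -- keep only suboptimal links
ORDER BY quality_ratio ASC    -- most suboptimal first
"""

for row in client.query(SUBOPTIMAL_LINKS_SQL).result():
    print(row.region, row.link_id, row.quality_ratio)
```

A reporting tool connected to the same table could apply the equivalent metric, filter, and sort interactively rather than through client code.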
Question # 7:

You work for a shipping company that uses handheld scanners to read shipping labels. Your company has strict data privacy standards that require scanners to only transmit recipients’ personally identifiable information (PII) to analytics systems, which violates user privacy rules. You want to quickly build a scalable solution using cloud-native managed services to prevent exposure of PII to the analytics systems. What should you do?

Options:

A.

Create an authorized view in BigQuery to restrict access to tables with sensitive data.


B.

Install a third-party data validation tool on Compute Engine virtual machines to check the incoming data for sensitive information.


C.

Use Stackdriver logging to analyze the data passed through the total pipeline to identify transactions that may contain sensitive information.


D.

Build a Cloud Function that reads the topics and makes a call to the Cloud Data Loss Prevention API. Use the tagging and confidence levels to either pass or quarantine the data in a bucket for review.


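For illustration, a hedged sketch of the pattern option D describes: a Pub/Sub-triggered Cloud Function that calls the Cloud Data Loss Prevention API and quarantines records that produce findings. The bucket names, info types, and likelihood threshold are assumptions:

```python
# Sketch of a Pub/Sub-triggered Cloud Function (1st gen, Python) that inspects
# each message with the Cloud DLP API and routes it to a quarantine or clean
# bucket. Project, bucket names, info types, and threshold are hypothetical.
import base64

from google.cloud import dlp_v2, storage

PROJECT = "my-project"
dlp = dlp_v2.DlpServiceClient()
gcs = storage.Client()


def inspect_scan(event, context):
    """Entry point for the Pub/Sub-triggered function."""
    text = base64.b64decode(event["data"]).decode("utf-8")

    response = dlp.inspect_content(
        request={
            "parent": f"projects/{PROJECT}",
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
                "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
            },
            "item": {"value": text},
        }
    )

    # Quarantine anything with findings; pass the rest through for analytics.
    bucket_name = "quarantine-bucket" if response.result.findings else "clean-bucket"
    gcs.bucket(bucket_name).blob(context.event_id).upload_from_string(text)
```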
Question # 8:

You are developing a model to identify the factors that lead to sales conversions for your customers. You have completed processing your data. You want to continue through the model development lifecycle. What should you do next?

Options:

A.

Use your model to run predictions on fresh customer input data.


B.

Test and evaluate your model on your curated data to determine how well the model performs.


C.

Monitor your model performance, and make any adjustments needed.


D.

Delineate what data will be used for testing and what will be used for training the model.


Question # 9:

You use a dataset in BigQuery for analysis. You want to provide third-party companies with access to the same dataset. You need to keep the costs of data sharing low and ensure that the data is current. Which solution should you choose?

Options:

A.

Create an authorized view on the BigQuery table to control data access, and provide third-party companies with access to that view.


B.

Use Cloud Scheduler to export the data on a regular basis to Cloud Storage, and provide third-party companies with access to the bucket.


C.

Create a separate dataset in BigQuery that contains the relevant data to share, and provide third-party companies with access to the new dataset.


D.

Create a Cloud Dataflow job that reads the data in frequent time intervals, and writes it to the relevant BigQuery dataset or Cloud Storage bucket for third-party companies to use.


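As an illustration of the pattern option A describes, here is a sketch that creates a view and authorizes it against the source dataset with the BigQuery Python client, so third parties always query live data without an exported copy; project, dataset, and table names are hypothetical:

```python
# Sketch: create a view in a dataset shared with third parties, then add the
# view to the source dataset's access list so it can read the underlying
# table. All resource names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# 1. Define the view in a dataset the third-party companies can query.
view = bigquery.Table("my-project.shared_views.customer_metrics")
view.view_query = """
    SELECT customer_id, order_count, total_spend
    FROM `my-project.analytics.customers`
"""
view = client.create_table(view)

# 2. Authorize the view against the source dataset.
source = client.get_dataset("my-project.analytics")
entries = list(source.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source.access_entries = entries
client.update_dataset(source, ["access_entries"])
```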
Question # 10:

You have a BigQuery dataset named "customers". All tables will be tagged by using a Data Catalog tag template named "gdpr". The template contains one mandatory field, "has_sensitive_data", with a boolean value. All employees must be able to do a simple search and find tables in the dataset that have either true or false in the "has_sensitive_data" field. However, only the Human Resources (HR) group should be able to see the data inside the tables for which "has_sensitive_data" is true. You give the all employees group the bigquery.metadataViewer and bigquery.connectionUser roles on the dataset. You want to minimize configuration overhead. What should you do next?

Options:

A.

Create the "gdpr" tag template with private visibility. Assign the bigquery -dataViewer role to the HR group on the tables that contain sensitive data.


B.

Create the "gdpr" tag template with private visibility. Assign the datacatalog.tagTemplateViewer role on this tag to the all employees group, and assign the bigquery.dataViewer role to the HR group on the tables that contain sensitive data.


C.

Create the "gdpr" tag template with public visibility. Assign the bigquery. dataViewer role to the HR group on the tables that containsensitive data.


D.

Create the "gdpr" tag template with public visibility. Assign the datacatalog. tagTemplateViewer role on this tag to the all employees.group, and assign the bijquery.dataViewer role to the HR group on the tables that contain sensitive data.


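For illustration, a hedged sketch of creating a "gdpr" tag template with a required boolean field through the Data Catalog Python client; the project, location, field name, and the use of is_publicly_readable to control template visibility are assumptions for the example, and the IAM grants to the HR and all-employees groups would be applied separately:

```python
# Sketch: a tag template named "gdpr" with one required boolean field. Whether
# the template is publicly readable (public vs. private visibility) is set on
# the template itself. Project, location, and field name are hypothetical.
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()

template = datacatalog_v1.TagTemplate()
template.display_name = "gdpr"
template.is_publicly_readable = False  # private visibility; True would make it public

template.fields["has_sensitive_data"] = datacatalog_v1.TagTemplateField()
template.fields["has_sensitive_data"].display_name = "has sensitive data"
template.fields["has_sensitive_data"].is_required = True
template.fields["has_sensitive_data"].type_.primitive_type = (
    datacatalog_v1.FieldType.PrimitiveType.BOOL
)

client.create_tag_template(
    parent="projects/my-project/locations/us-central1",
    tag_template_id="gdpr",
    tag_template=template,
)
```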