
Pass the Google Cloud Certified Professional-Data-Engineer questions and answers with CertsForce

Viewing page 6 out of 6 pages
Viewing questions 51-60
Question # 51:

You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy. What can you do?

Options:

A.

Eliminate features that are highly correlated to the output labels.


B.

Combine highly co-dependent features into one representative feature.


C.

Instead of feeding in each feature individually, average their values in batches of 3.


D.

Remove the features that have null values for more than 50% of the training records.
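To make option B concrete: a minimal sketch of finding highly co-dependent feature pairs by Pearson correlation and keeping one representative per pair. The feature names, sample values, and the 0.9 threshold are illustrative assumptions, not part of the question.

```python
# Hypothetical sketch: drop one feature from every highly correlated pair,
# keeping a single representative (option B's idea).
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

def prune_correlated(features, threshold=0.9):
    """Keep a feature only if it is not strongly correlated with one already kept."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

features = {
    "humidity":     [0.70, 0.80, 0.90, 0.60, 0.75],
    "humidity_pct": [70.0, 80.0, 90.0, 60.0, 75.0],  # same signal, rescaled
    "pressure":     [1012, 1003, 1010, 1008, 1015],
}
```

Because `humidity_pct` is just `humidity` rescaled, it carries no extra information and pruning it speeds up training without hurting accuracy.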


Expert Solution
Question # 52:

Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google Cloud Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the logs, and all messages are being published to Cloud Pub/Sub successfully. What should you do next?

Options:

A.

Check the dashboard application to see if it is not displaying correctly.


B.

Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.


C.

Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.


D.

Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.
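A cloud-free way to picture option B: replay a fixed batch of messages through the same parsing logic the pipeline applies, and count what survives. The message schema here is an illustrative assumption; the point is that a fixed dataset makes silent drops visible.

```python
# Hypothetical sketch: run a fixed dataset through the pipeline's parse step
# and analyze the output, instead of debugging live traffic.
import json

def parse_message(raw):
    """Parse one JSON message; return None for records the pipeline would drop."""
    try:
        msg = json.loads(raw)
        return {"amount": float(msg["amount"]), "region": msg["region"]}
    except (ValueError, KeyError):
        return None

fixed_dataset = [
    '{"amount": 120.5, "region": "EMEA"}',
    '{"amount": "n/a", "region": "APAC"}',  # bad value: silently dropped
    '{"region": "AMER"}',                   # missing field: silently dropped
]
survivors = [m for m in fixed_dataset if parse_message(m) is not None]
```

Comparing `survivors` against the input immediately shows which records the transform loses, which is exactly the kind of gap a live dashboard hides.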


Expert Solution
Question # 53:

You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?

Options:

A.

Update the current pipeline and use the drain flag.


B.

Update the current pipeline and provide the transform mapping JSON object.


C.

Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.


D.

Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.
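Since several options hinge on how a running job is stopped, here is a sketch of what draining looks like in practice: in-flight data is processed to completion rather than dropped. The job ID and region are hypothetical placeholders, and nothing is executed here.

```python
# Illustrative only: the gcloud command one might issue to drain the running
# Dataflow job before launching the incompatible replacement pipeline.
old_job_id = "2019-01-01_00_00_00-1234567890"  # hypothetical job ID
region = "us-central1"

drain_cmd = [
    "gcloud", "dataflow", "jobs", "drain", old_job_id,
    f"--region={region}",
]
# After the drain completes, the replacement pipeline is started against the
# Cloud Pub/Sub source so that no buffered messages are lost.
```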


Expert Solution
Question # 54:

You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store, and analyze these very large datasets in real time. What should you do?

Options:

A.

Send the data to Google Cloud Datastore and then export to BigQuery.


B.

Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.


C.

Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.


D.

Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.
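To ground the ingestion path in option B: a sketch of the kind of JSON payload each device might publish to Cloud Pub/Sub, to be streamed through Cloud Dataflow into BigQuery. The field names are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical device-side payload for the Pub/Sub -> Dataflow -> BigQuery path.
import json
import time

def make_reading(device_id, celsius, warehouse):
    """Serialize one temperature reading as a JSON Pub/Sub message body."""
    return json.dumps({
        "device_id": device_id,
        "warehouse": warehouse,
        "temp_c": celsius,
        "ts": int(time.time()),  # epoch seconds at time of reading
    })

payload = make_reading("sensor-0042", 21.5, "fra-01")
record = json.loads(payload)  # what the Dataflow pipeline would decode
```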


Expert Solution
Question # 55:

Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day’s events. They also want to use streaming ingestion. What should you do?

Options:

A.

Create a table called tracking_table and include a DATE column.


B.

Create a partitioned table called tracking_table and include a TIMESTAMP column.


C.

Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.


D.

Create a table called tracking_table with a TIMESTAMP column to represent the day.
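To illustrate the partitioned-table pattern from option B: DDL for a single partitioned `tracking_table`, plus a daily query that scans only one partition, which is what keeps per-day query cost low. The dataset name and columns are assumptions for the sketch.

```python
# Hedged sketch: a time-partitioned BigQuery table and a partition-pruned
# daily query, held as strings (nothing is executed against BigQuery here).
ddl = """
CREATE TABLE mydataset.tracking_table (
  event_ts TIMESTAMP,
  payload  STRING
)
PARTITION BY DATE(event_ts)
"""

daily_query = """
SELECT payload
FROM mydataset.tracking_table
WHERE DATE(event_ts) = '2024-06-01'
"""
```

Filtering on the partitioning column lets BigQuery prune all other days' partitions, so each fine-grained daily analysis is billed for roughly one day of data rather than the whole table.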


Expert Solution
Question # 56:

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other’s data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)

Options:

A.

Load data into different partitions.


B.

Load data into a different dataset for each client.


C.

Put each client’s BigQuery dataset into a different table.


D.

Restrict a client’s dataset to approved users.


E.

Only allow a service account to access the datasets.


F.

Use the appropriate identity and access management (IAM) roles for each client’s users.
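A toy model of the isolation the options describe: one dataset per client, with query access restricted to that client's approved users. The client names and emails are illustrative; real enforcement would use IAM roles on each dataset.

```python
# Hypothetical sketch: per-client datasets with per-dataset approved users.
datasets = {
    "client_a_ds": {"alice@client-a.example"},
    "client_b_ds": {"bob@client-b.example"},
}

def can_query(user, dataset):
    """True only if the user is on that dataset's approved-access list."""
    return user in datasets.get(dataset, set())
```

Because each client's data lives in its own dataset, granting a user access to one dataset reveals nothing about any other client's tables.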


Expert Solution
Question # 57:

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?

Options:

A.

Create a Google Cloud Dataflow job to process the data.


B.

Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.


C.

Create a Hadoop cluster on Google Compute Engine that uses persistent disks.


D.

Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.


E.

Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.


Expert Solution
Question # 58:

Your company is using wildcard tables to query data across multiple tables with similar names. The SQL statement is currently failing with the following error:

# Syntax error: Expected end of statement but got "-" at [4:11]

SELECT age
FROM
  bigquery-public-data.noaa_gsod.gsod
WHERE
  age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY
  age DESC

Which table name will make the SQL statement work correctly?

Options:

A.

'bigquery-public-data.noaa_gsod.gsod'


B.

bigquery-public-data.noaa_gsod.gsod*


C.

'bigquery-public-data.noaa_gsod.gsod'*


D.

`bigquery-public-data.noaa_gsod.gsod*`
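For reference, a form of the statement that parses under BigQuery Standard SQL, held as a string: a wildcard table name must be wrapped in backticks (the reported error comes from the unquoted dashes in the project name), and the suffix pseudo-column is `_TABLE_SUFFIX`.

```python
# The query rewritten with a backtick-quoted wildcard table reference.
corrected = """
SELECT age
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY age DESC
"""
```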


Expert Solution
Question # 59:

You are a retailer that wants to integrate your online sales capabilities with different in-home assistants, such as Google Home. You need to interpret customer voice commands and issue an order to the backend systems. Which solution should you choose?

Options:

A.

Cloud Speech-to-Text API


B.

Cloud Natural Language API


C.

Dialogflow Enterprise Edition


D.

Cloud AutoML Natural Language


Expert Solution
Question # 60:

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?

Options:

A.

Run a local version of Jupyter on the laptop.


B.

Grant the user access to Google Cloud Shell.


C.

Host a visualization tool on a VM on Google Compute Engine.


D.

Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.


Expert Solution