
Google Cloud Certified Professional-Data-Engineer practice questions and answers with CertsForce

Viewing page 5 of 8 (questions 41-50)
Question #41:

Which Java SDK class can you use to run your Dataflow programs locally?

Options:

A. LocalRunner
B. DirectPipelineRunner
C. MachineRunner
D. LocalPipelineRunner


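For context, local execution of a Dataflow program is selected through the pipeline's runner option. The sketch below uses the current Apache Beam Java SDK, where the in-process runner class is DirectRunner; the older Dataflow 1.x Java SDK exposed the same idea under the name DirectPipelineRunner. The class name and sample data here are illustrative only.

```java
import org.apache.beam.runners.direct.DirectRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

public class LocalRunnerExample {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    // Run the pipeline in-process instead of on the Cloud Dataflow service.
    options.setRunner(DirectRunner.class);

    Pipeline pipeline = Pipeline.create(options);
    pipeline.apply("CreateSample", Create.of("a", "b", "c"));
    pipeline.run().waitUntilFinish();
  }
}
```
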
Question #42:

The CUSTOM tier for Cloud Machine Learning Engine allows you to specify the number of which types of cluster nodes?

Options:

A. Workers
B. Masters, workers, and parameter servers
C. Workers and parameter servers
D. Parameter servers


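For context, the CUSTOM scale tier is configured through the job's trainingInput. The sketch below only assembles the relevant request fields in plain Java to show which node types take explicit machine types and counts; the field names follow the ML Engine training-input schema, and the machine-type values are placeholders.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CustomTierConfig {
  public static void main(String[] args) {
    // Shape of the trainingInput section of a jobs.create request when
    // scaleTier is CUSTOM: machine types and node counts are set explicitly.
    Map<String, Object> trainingInput = new LinkedHashMap<>();
    trainingInput.put("scaleTier", "CUSTOM");
    trainingInput.put("masterType", "complex_model_m");      // placeholder machine type
    trainingInput.put("workerType", "complex_model_m");
    trainingInput.put("workerCount", 4);
    trainingInput.put("parameterServerType", "large_model"); // placeholder machine type
    trainingInput.put("parameterServerCount", 2);

    trainingInput.forEach((k, v) -> System.out.println(k + " = " + v));
  }
}
```
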
Question #43:

You want to encrypt the customer data stored in BigQuery. You need to implement per-user crypto-deletion on data stored in your tables. You want to adopt native features in Google Cloud to avoid custom solutions. What should you do?

Options:

A.

Create a customer-managed encryption key (CMEK) in Cloud KMS. Associate the key to the table while creating the table.


B.

Create a customer-managed encryption key (CMEK) in Cloud KMS. Use the key to encrypt data before storing in BigQuery.


C.

Implement Authenticated Encryption with Associated Data (AEAD) BigQuery functions while storing your data in BigQuery.


D.

Encrypt your data during ingestion by using a cryptographic library supported by your ETL pipeline.


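As background on the CMEK mechanism that options A and B refer to (shown only to illustrate the mechanics, not as an answer key), attaching a Cloud KMS key at table-creation time with the google-cloud-bigquery Java client looks roughly like this sketch; the dataset, table, schema, and key resource names are placeholders.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.EncryptionConfiguration;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

public class CreateCmekTable {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Placeholder resource names.
    TableId tableId = TableId.of("my_dataset", "customer_data");
    String kmsKeyName =
        "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key";

    Schema schema = Schema.of(
        Field.of("customer_id", StandardSQLTypeName.STRING),
        Field.of("payload", StandardSQLTypeName.STRING));

    // Attach the customer-managed key so BigQuery encrypts the table with it.
    EncryptionConfiguration encryption = EncryptionConfiguration.newBuilder()
        .setKmsKeyName(kmsKeyName)
        .build();

    TableInfo tableInfo = TableInfo.newBuilder(tableId, StandardTableDefinition.of(schema))
        .setEncryptionConfiguration(encryption)
        .build();

    bigquery.create(tableInfo);
  }
}
```
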
Question #44:

You have an Oracle database deployed in a VM as part of a Virtual Private Cloud (VPC) network. You want to replicate and continuously synchronize 50 tables to BigQuery. You want to minimize the need to manage infrastructure. What should you do?

Options:

A.

Create a Datastream service from Oracle to BigQuery, use a private connectivity configuration to the same VPC network, and a connection profile to BigQuery.


B.

Create a Pub/Sub subscription to write to BigQuery directly Deploy the Debezium Oracle connector to capture changes in the Oracle database, and sink to the Pub/Sub topic.


C.

Deploy Apache Kafka in the same VPC network, use Kafka Connect Oracle Change Data Capture (CDC), and Dataflow to stream the Kafka topic to BigQuery.


D.

Deploy Apache Kafka in the same VPC network, use Kafka Connect Oracle change data capture (CDC), and the Kafka Connect Google BigQuery Sink Connector.


Question #45:

You are migrating a table to BigQuery and are deciding on the data model. Your table stores information related to purchases made across several store locations, and includes information like the time of the transaction, the items purchased, the store ID, and the city and state in which the store is located. You frequently query this table to see how many of each item were sold over the past 30 days and to look at purchasing trends by state, city, and individual store. You want to model this table to minimize query time and cost. What should you do?

Options:

A. Partition by transaction time; cluster by state first, then city, then store ID.
B. Partition by transaction time; cluster by store ID first, then city, then state.
C. Top-level cluster by state first, then city, then store ID.
D. Top-level cluster by store ID first, then city, then state.


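For background, time partitioning and clustering are both table properties in BigQuery, and with the Java client they are set on the table definition as in the sketch below. The column names are placeholders, and the clustering order shown is illustrative only, not an answer key.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Clustering;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;
import com.google.cloud.bigquery.TimePartitioning;
import java.util.Arrays;

public class CreatePartitionedClusteredTable {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    Schema schema = Schema.of(
        Field.of("transaction_time", StandardSQLTypeName.TIMESTAMP),
        Field.of("item", StandardSQLTypeName.STRING),
        Field.of("store_id", StandardSQLTypeName.STRING),
        Field.of("city", StandardSQLTypeName.STRING),
        Field.of("state", StandardSQLTypeName.STRING));

    StandardTableDefinition definition = StandardTableDefinition.newBuilder()
        .setSchema(schema)
        // Daily partitions on the transaction timestamp let queries over the
        // past 30 days scan only the matching partitions.
        .setTimePartitioning(
            TimePartitioning.newBuilder(TimePartitioning.Type.DAY)
                .setField("transaction_time")
                .build())
        // Clustering column order here is illustrative only.
        .setClustering(
            Clustering.newBuilder()
                .setFields(Arrays.asList("state", "city", "store_id"))
                .build())
        .build();

    bigquery.create(TableInfo.of(TableId.of("my_dataset", "purchases"), definition));
  }
}
```
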
Question #46:

What are two of the characteristics of using online prediction rather than batch prediction?

Options:

A. It is optimized to handle a high volume of data instances in a job and to run more complex models.
B. Predictions are returned in the response message.
C. Predictions are written to output files in a Cloud Storage location that you specify.
D. It is optimized to minimize the latency of serving predictions.


Question #47:

Which of the following statements about the Wide & Deep Learning model are true? (Select 2 answers.)

Options:

A. The wide model is used for memorization, while the deep model is used for generalization.
B. A good use for the wide and deep model is a recommender system.
C. The wide model is used for generalization, while the deep model is used for memorization.
D. A good use for the wide and deep model is a small-scale linear regression problem.


Question #48:

Which action can a Cloud Dataproc Viewer perform?

Options:

A. Submit a job.
B. Create a cluster.
C. Delete a cluster.
D. List the jobs.


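As background, roles/dataproc.viewer carries read-only permissions (for example dataproc.jobs.get and dataproc.jobs.list), so a read-only call such as listing jobs is the kind of action it allows, while submitting jobs or creating and deleting clusters is not. The sketch below shows such a read-only call with the Dataproc Java client; the project ID and region are placeholders, and the regional endpoint setting is an assumption for non-global regions.

```java
import com.google.cloud.dataproc.v1.Job;
import com.google.cloud.dataproc.v1.JobControllerClient;
import com.google.cloud.dataproc.v1.JobControllerSettings;

public class ListDataprocJobs {
  public static void main(String[] args) throws Exception {
    String projectId = "my-project";   // placeholder
    String region = "us-central1";     // placeholder

    // Dataproc uses regional endpoints for non-global regions (assumption noted above).
    JobControllerSettings settings = JobControllerSettings.newBuilder()
        .setEndpoint(region + "-dataproc.googleapis.com:443")
        .build();

    try (JobControllerClient client = JobControllerClient.create(settings)) {
      // A read-only listing call; no job submission or cluster changes occur.
      for (Job job : client.listJobs(projectId, region).iterateAll()) {
        System.out.println(job.getReference().getJobId());
      }
    }
  }
}
```
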
Question #49:

When a Cloud Bigtable node fails, ____ is lost.

Options:

A. all data
B. no data
C. the last transaction
D. the time dimension


Question #50:

Google Cloud Bigtable indexes a single value in each row. This value is called the _______.

Options:

A. primary key
B. unique key
C. row key
D. master key


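As background, the Cloud Bigtable client addresses every read and write by this single indexed value. The sketch below uses the Bigtable Java data client; the project, instance, table, column family, and key values are placeholders.

```java
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Row;
import com.google.cloud.bigtable.data.v2.models.RowCell;
import com.google.cloud.bigtable.data.v2.models.RowMutation;

public class RowKeyExample {
  public static void main(String[] args) throws Exception {
    // Placeholder project and instance identifiers.
    try (BigtableDataClient client =
        BigtableDataClient.create("my-project", "my-instance")) {

      String rowKey = "store123#20240101#tx0042";  // the single indexed value per row

      // Writes and reads are both addressed by the row key.
      client.mutateRow(
          RowMutation.create("purchases", rowKey)
              .setCell("stats", "amount", "42"));

      Row row = client.readRow("purchases", rowKey);
      for (RowCell cell : row.getCells()) {
        System.out.printf("%s:%s = %s%n",
            cell.getFamily(),
            cell.getQualifier().toStringUtf8(),
            cell.getValue().toStringUtf8());
      }
    }
  }
}
```
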