Pass the Google Cloud Certified Professional Data Engineer questions and answers with CertsForce

Question # 41:

Which of the following statements about the Wide & Deep Learning model are true? (Select 2 answers.)

Options:

A.

The wide model is used for memorization, while the deep model is used for generalization.


B.

A good use for the wide and deep model is a recommender system.


C.

The wide model is used for generalization, while the deep model is used for memorization.


D.

A good use for the wide and deep model is a small-scale linear regression problem.


Expert Solution
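
The distinction in options A and C can be made concrete in code. Below is a minimal wide-and-deep sketch in Keras; the feature widths and layer sizes are illustrative assumptions, not part of the question. The wide linear path memorizes sparse feature crosses, while the deep path generalizes through hidden layers, which is why the combination suits recommender systems.

```python
import tensorflow as tf

# Illustrative feature widths; a real model derives these from the data.
WIDE_DIM, DEEP_DIM = 100, 20

# Wide path: sparse crossed features into a linear combination (memorization).
wide_in = tf.keras.Input(shape=(WIDE_DIM,), name="wide_features")

# Deep path: dense features through hidden layers (generalization).
deep_in = tf.keras.Input(shape=(DEEP_DIM,), name="deep_features")
deep = tf.keras.layers.Dense(64, activation="relu")(deep_in)
deep = tf.keras.layers.Dense(32, activation="relu")(deep)

# Combine both paths for a single prediction, e.g. a recommender's
# click-through probability.
merged = tf.keras.layers.concatenate([wide_in, deep])
output = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs=[wide_in, deep_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy")
```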
Question # 42:

If a dataset contains rows with individual people and columns for year of birth, country, and income, how many of the columns are continuous and how many are categorical?

Options:

A.

1 continuous and 2 categorical


B.

3 categorical


C.

3 continuous


D.

2 continuous and 1 categorical


Expert Solution
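
One quick way to reason about the split is to inspect column dtypes with pandas. The toy rows below are invented to mirror the question's schema: numeric columns are usually modeled as continuous and string columns as categorical, though a numeric field such as year of birth can also be bucketed into categories depending on the model.

```python
import pandas as pd

# Invented rows mirroring the question's schema.
df = pd.DataFrame({
    "year_of_birth": [1980, 1992, 1975],
    "country": ["US", "BR", "JP"],
    "income": [52000.0, 61000.0, 48000.0],
})

# Heuristic: numeric dtypes are candidates for continuous features,
# non-numeric dtypes are typically categorical.
continuous = df.select_dtypes(include="number").columns.tolist()
categorical = df.select_dtypes(exclude="number").columns.tolist()
print("continuous:", continuous)    # ['year_of_birth', 'income']
print("categorical:", categorical)  # ['country']
```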
Question # 43:

Which of the following is NOT a valid use case to select HDD (hard disk drives) as the storage for Google Cloud Bigtable?

Options:

A.

You expect to store at least 10 TB of data.


B.

You will mostly run batch workloads with scans and writes, rather than frequently executing random reads of a small number of rows.


C.

You need to integrate with Google BigQuery.


D.

You will not use the data to back a user-facing or latency-sensitive application.


Expert Solution
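
Bigtable's storage type is set per cluster when the instance is created. As a reference, here is a hedged sketch using the google-cloud-bigtable Python client; the project ID, instance ID, and zone are hypothetical placeholders.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

# Hypothetical project; admin=True is required for instance management.
client = bigtable.Client(project="my-project", admin=True)

instance = client.instance("hdd-instance", display_name="HDD batch instance")

# HDD suits large, batch-oriented, latency-insensitive workloads.
cluster = instance.cluster(
    "hdd-cluster-1",
    location_id="us-central1-b",
    serve_nodes=3,
    default_storage_type=enums.StorageType.HDD,
)

operation = instance.create(clusters=[cluster])
operation.result(timeout=300)  # block until creation finishes
```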
Question # 44:

Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

Options:

A.

Put the data into Google Cloud Storage.


B.

Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.


C.

Tune the Cloud Dataproc cluster so that there is just enough disk for all data.


D.

Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.


Expert Solution
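
The saving in option A comes from pointing jobs at Cloud Storage instead of Persistent Disk-backed HDFS; Dataproc ships with the Cloud Storage connector, so for most Spark and Hadoop jobs this is just a path change. A minimal PySpark sketch, with a hypothetical bucket and layout:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gcs-instead-of-hdfs").getOrCreate()

# On Dataproc, the Cloud Storage connector lets jobs read and write
# gs:// paths directly, replacing hdfs:// paths backed by Persistent Disk.
df = spark.read.parquet("gs://my-bucket/data/events/")
df.groupBy("event_type").count().write.parquet("gs://my-bucket/output/counts/")
```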
Question # 45:

What is the recommended way to switch between SSD and HDD storage for your Google Cloud Bigtable instance?

Options:

A.

Create a third instance and sync the data between the two storage types via batch jobs.


B.

Export the data from the existing instance and import the data into a new instance.


C.

Run parallel instances where one is HDD and the other is SSD.


D.

The selection is final and you must continue using the same storage type.


Expert Solution
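
A cluster's storage type cannot be changed in place, hence the export/import answer; for large tables Google documents Dataflow export/import templates for this. Purely to illustrate the idea, here is a toy row-copy between two instances with the Python client; the instance and table IDs are hypothetical, and this loop is not meant for production-scale tables.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
src = client.instance("ssd-instance").table("events")
dst = client.instance("hdd-instance").table("events")  # pre-created on HDD

batch = []
for row in src.read_rows():
    new_row = dst.direct_row(row.row_key)
    # Copy every cell, preserving family, qualifier, and timestamp.
    for family, columns in row.cells.items():
        for qualifier, cells in columns.items():
            for cell in cells:
                new_row.set_cell(family, qualifier, cell.value,
                                 timestamp=cell.timestamp)
    batch.append(new_row)
    if len(batch) >= 100:
        dst.mutate_rows(batch)
        batch = []
if batch:
    dst.mutate_rows(batch)
```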
Question # 46:

Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of information required to do their jobs. You want to enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose three.)

Options:

A.

Disable writes to certain tables.


B.

Restrict access to tables by role.


C.

Ensure that the data is encrypted at all times.


D.

Restrict BigQuery API access to approved users.


E.

Segregate data across multiple tables or databases.


F.

Use Google Stackdriver Audit Logging to determine policy violations.


Expert Solution
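
Role-based restriction (option B) is applied through IAM roles or dataset-level access entries. A hedged sketch with the google-cloud-bigquery client, granting one user read-only access to a single dataset; the project, dataset, and email are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
dataset = client.get_dataset("regulated_data")

# Append a read-only grant for a single user to the dataset's ACL.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="userByEmail",
        entity_id="analyst@example.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```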
Question # 47:

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?

Options:

A.

Run a local version of Jupyter on the laptop.


B.

Grant the user access to Google Cloud Shell.


C.

Host a visualization tool on a VM on Google Compute Engine.


D.

Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.


Expert Solution
Question # 48:

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?

Options:

A.

Assign globally unique identifiers (GUIDs) to each data entry.


B.

Compute the hash value of each data entry, and compare it with all historical data.


C.

Store each data entry as the primary key in a separate database and apply an index.


D.

Maintain a database table to store the hash value and other metadata for each data entry.


Expert Solution
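
The efficiency gap between options B and D is that a metadata table of hashes turns deduplication into a single lookup instead of a scan of all historical data. Below is a minimal sketch of option D's idea; the field names are invented, and an in-memory dict stands in for the real database table.

```python
import hashlib
import json

# Stand-in for a database table keyed by payload hash.
seen_hashes = {}

def is_duplicate(entry):
    # Hash only the payload fields, not the transmission timestamp:
    # a re-transmission carries the same payload with a new timestamp.
    payload = {k: v for k, v in entry.items() if k != "transmitted_at"}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes[digest] = entry.get("transmitted_at")
    return False

first = {"sku": "A-100", "qty": 7, "transmitted_at": "2024-01-01T00:00:00Z"}
retry = {"sku": "A-100", "qty": 7, "transmitted_at": "2024-01-01T06:00:00Z"}
print(is_duplicate(first))  # False
print(is_duplicate(retry))  # True (same payload, later timestamp)
```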
Question # 49:

Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?

Options:

A.

Use Google Stackdriver Audit Logs to review data access.


B.

Get the Identity and Access Management (IAM) policy of each table.


C.

Use Stackdriver Monitoring to see the usage of BigQuery query slots.


D.

Use the Google Cloud Billing API to see what account the warehouse is being billed to.


Expert Solution
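
Audit logs (option A) are the natural first stop because BigQuery writes data-access entries automatically. A hedged sketch of pulling them with the Cloud Logging Python client; the project ID is hypothetical and the filter may need adapting to your environment.

```python
from google.cloud import logging

client = logging.Client(project="my-project")

# BigQuery data-access audit entries live in the data_access log.
filter_str = (
    'resource.type="bigquery_resource" '
    'logName="projects/my-project/logs/'
    'cloudaudit.googleapis.com%2Fdata_access"'
)
for entry in client.list_entries(filter_=filter_str, page_size=50):
    print(entry.timestamp, entry.payload)
```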
Question # 50:

You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling.

Which Google database service should you use?

Options:

A.

Cloud SQL


B.

BigQuery


C.

Cloud Bigtable


D.

Cloud Datastore


Expert Solution
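
Cloud Datastore fits this scenario because it offers transactional writes and scales automatically with no infrastructure to manage. As a reference, a hedged sketch of a transactional payment write with the google-cloud-datastore client; the project, kind, and fields are hypothetical.

```python
from google.cloud import datastore

client = datastore.Client(project="my-project")

# Record a point-of-sale payment atomically; Datastore scales the
# underlying infrastructure automatically.
with client.transaction():
    key = client.key("Payment", "txn-0001")
    payment = datastore.Entity(key=key)
    payment.update({
        "amount_cents": 1299,
        "currency": "USD",
        "status": "captured",
    })
    client.put(payment)
```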