Pass the Google Cloud Platform Associate-Data-Practitioner questions and answers with CertsForce

Viewing page 1 of 4 (questions 1-10)
Question # 1:

Your team uses the Google Ads platform to visualize metrics. You want to export the data to BigQuery to get more granular insights. You need to execute a one-time transfer of historical data and automatically update data daily. You want a solution that is low-code, serverless, and requires minimal maintenance. What should you do?

Options:

A.

Export the historical data to BigQuery by using BigQuery Data Transfer Service. Use Cloud Composer for daily automation.


B.

Export the historical data to Cloud Storage by using Storage Transfer Service. Use Pub/Sub to trigger a Dataflow template that loads data for daily automation.


C.

Export the historical data as a CSV file. Import the file into BigQuery for analysis. Use Cloud Composer for daily automation.


D.

Export the historical data to BigQuery by using BigQuery Data Transfer Service. Use BigQuery Data Transfer Service for daily automation.
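
For context, here is a minimal sketch of how option D's approach could be wired up with the BigQuery Data Transfer Service Python client. The project, dataset, and Google Ads customer IDs are placeholders, and the "google_ads" data source ID should be verified against your environment.

```python
# Sketch: schedule a daily Google Ads transfer into BigQuery.
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()

project_id = "my-project"       # placeholder project
dataset_id = "ads_reporting"    # placeholder destination dataset
customer_id = "1234567890"      # placeholder Google Ads customer ID

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id=dataset_id,
    display_name="Google Ads daily transfer",
    data_source_id="google_ads",          # assumed data source ID
    params={"customer_id": customer_id},
    schedule="every 24 hours",            # daily automation
)

transfer_config = client.create_transfer_config(
    parent=client.common_project_path(project_id),
    transfer_config=transfer_config,
)
print(f"Created transfer config: {transfer_config.name}")
```

The one-time load of historical data can then be requested against the same transfer config with the client's start_manual_transfer_runs (backfill) call, so no separate orchestration tool is needed.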


Question # 2:

Your company has an on-premises file server with 5 TB of data that needs to be migrated to Google Cloud. The network operations team has mandated that you can only use up to 250 Mbps of the total available bandwidth for the migration. You need to perform an online migration to Cloud Storage. What should you do?

Options:

A.

Use Storage Transfer Service to configure an agent-based transfer. Set the appropriate bandwidth limit for the agent pool.


B.

Use the gcloud storage cp command to copy all files from on-premises to Cloud Storage using the --daisy-chain option.


C.

Request a Transfer Appliance, copy the data to the appliance, and ship it back to Google Cloud.


D.

Use the gcloud storage cp command to copy all files from on-premises to Cloud Storage using the --no-clobber option.
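
To make option A concrete, here is a rough sketch using the Storage Transfer Service Python client to create an agent pool with a bandwidth cap. The project and pool names are placeholders, and the unit of the bandwidth field should be confirmed in the Storage Transfer Service documentation (it is described as MB/s, so 250 Mbps is roughly 31 MB/s).

```python
# Sketch: agent pool with a bandwidth cap for an agent-based transfer.
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

project_id = "my-project"           # placeholder
pool_id = "on-prem-migration-pool"  # placeholder

agent_pool = storage_transfer.AgentPool(
    name=f"projects/{project_id}/agentPools/{pool_id}",
    display_name="On-prem file server migration",
    bandwidth_limit=storage_transfer.AgentPool.BandwidthLimit(
        limit_mbps=31,  # ~250 Mbps if the field is in MB/s; verify the unit
    ),
)

created_pool = client.create_agent_pool(
    storage_transfer.CreateAgentPoolRequest(
        project_id=project_id,
        agent_pool=agent_pool,
        agent_pool_id=pool_id,
    )
)
print(created_pool.name)
```

The transfer agents installed next to the on-premises file server are then registered to this pool, and the agent-based transfer job references it, so the cap applies to all agents combined.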


Question # 3:

You created a customer support application that sends several forms of data to Google Cloud. Your application is sending:

1. Audio files from phone interactions with support agents that will be accessed during trainings.

2. CSV files of users’ personally identifiable information (PII) that will be analyzed with SQL.

3. A large volume of small document files that will power other applications.

You need to select the appropriate tool for each data type given the required use case, while following Google-recommended practices. Which should you choose?

Options:

A.

1. Cloud Storage

2. Cloud SQL for PostgreSQL

3. Bigtable


B.

1. Filestore

2. Cloud SQL for PostgreSQL

3. Datastore


C.

1. Cloud Storage

2. BigQuery

3. Firestore


D.

1. Filestore

2. Bigtable

3. BigQuery
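
As an illustration of option C's mapping, a short sketch that routes each payload to the suggested service; all bucket, table, and collection names are placeholders.

```python
# Sketch: audio to Cloud Storage, PII CSV to BigQuery, small docs to Firestore.
from google.cloud import bigquery, firestore, storage

# 1. Audio files: object storage, retrieved later for training.
storage.Client().bucket("support-audio").blob(
    "calls/2024-05-01/call-001.wav"
).upload_from_filename("call-001.wav")

# 2. PII CSV files: load into BigQuery so analysts can query them with SQL.
bq = bigquery.Client()
load_job = bq.load_table_from_uri(
    "gs://support-ingest/users/pii_users.csv",
    "my-project.support.users_pii",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()

# 3. Small documents: Firestore serves them to other applications at low latency.
firestore.Client().collection("support_documents").document("doc-001").set(
    {"title": "Refund policy", "body": "..."}
)
```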


Question # 4:

Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular. You need to design a storage system that is simple and cost-effective. What should you do?

Options:

A.

Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.


B.

Create a single-region bucket with Autoclass enabled.


C.

Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.


D.

Create a single-region bucket with Archive as the default storage class.
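
For reference, a minimal sketch of option B using the Cloud Storage Python client, assuming a recent google-cloud-storage release that exposes the Autoclass property; the project, bucket name, and location are placeholders.

```python
# Sketch: single-region bucket with Autoclass, so Cloud Storage moves each
# object between storage classes based on its own access pattern.
from google.cloud import storage

client = storage.Client(project="my-project")    # placeholder project
bucket = client.bucket("video-sharing-uploads")  # placeholder bucket name
bucket.autoclass_enabled = True

bucket = client.create_bucket(bucket, location="us-central1")  # single region
print(bucket.autoclass_enabled)
```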


Question # 5:

You manage data at an ecommerce company. You have a Dataflow pipeline that processes order data from Pub/Sub, enriches the data with product information from Bigtable, and writes the processed data to BigQuery for analysis. The pipeline runs continuously and processes thousands of orders every minute. You need to monitor the pipeline's performance and be alerted if errors occur. What should you do?

Options:

A.

Use Cloud Monitoring to track key metrics. Create alerting policies in Cloud Monitoring to trigger notifications when metrics exceed thresholds or when errors occur.


B.

Use the Dataflow job monitoring interface to visually inspect the pipeline graph, check for errors, and configure notifications when critical errors occur.


C.

Use BigQuery to analyze the processed data in Cloud Storage and identify anomalies or inconsistencies. Set up scheduled alerts that trigger when anomalies or inconsistencies occur.


D.

Use Cloud Logging to view the pipeline logs and check for errors. Set up alerts based on specific keywords in the logs.
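
To illustrate option A, here is a sketch that creates a Cloud Monitoring alerting policy on a Dataflow job metric with the monitoring_v3 Python client. The chosen metric (system lag), threshold, and empty notification-channel list are assumptions to adapt to the pipeline.

```python
# Sketch: alert when the Dataflow job's system lag stays high for 5 minutes.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

project_id = "my-project"  # placeholder
client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Order pipeline - high system lag",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="System lag > 60s",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=(
                    'metric.type = "dataflow.googleapis.com/job/system_lag" '
                    'AND resource.type = "dataflow_job"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=60,
                duration=duration_pb2.Duration(seconds=300),
            ),
        )
    ],
    notification_channels=[],  # add notification channel resource names here
)

created = client.create_alert_policy(
    name=f"projects/{project_id}", alert_policy=policy
)
print(created.name)
```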


Question # 6:

You have an existing weekly Storage Transfer Service transfer job from Amazon S3 to a Nearline Cloud Storage bucket in Google Cloud. Each week, the job moves a large number of relatively small files. As the number of files to be transferred each week has grown over time, you are at risk of no longer completing the transfer in the allocated time frame. You need to decrease the total transfer time by replacing the process. Your solution should minimize costs where possible. What should you do?

Options:

A.

Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.


B.

Create parallel transfer jobs using include and exclude prefixes.


C.

Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.


D.

Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.
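
To make option B concrete, a sketch that creates several Storage Transfer Service jobs, each scoped to a different set of prefixes so they can run in parallel. Bucket names, credentials, and prefixes are placeholders, and the weekly Schedule used by the existing job is omitted for brevity.

```python
# Sketch: one transfer job per prefix group; running them concurrently
# parallelizes the weekly S3 -> Cloud Storage copy.
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

project_id = "my-project"           # placeholder
source_bucket = "my-s3-bucket"      # placeholder S3 bucket
sink_bucket = "my-nearline-bucket"  # placeholder Cloud Storage bucket

for prefixes in (["orders/a", "orders/b"], ["orders/c", "orders/d"]):
    job = storage_transfer.TransferJob(
        project_id=project_id,
        status=storage_transfer.TransferJob.Status.ENABLED,
        # schedule=...  attach the same weekly Schedule as the existing job
        transfer_spec=storage_transfer.TransferSpec(
            aws_s3_data_source=storage_transfer.AwsS3Data(
                bucket_name=source_bucket,
                aws_access_key=storage_transfer.AwsAccessKey(
                    access_key_id="AWS_KEY_ID",          # placeholder
                    secret_access_key="AWS_SECRET_KEY",  # placeholder
                ),
            ),
            gcs_data_sink=storage_transfer.GcsData(bucket_name=sink_bucket),
            object_conditions=storage_transfer.ObjectConditions(
                include_prefixes=prefixes,
            ),
        ),
    )
    created = client.create_transfer_job(
        storage_transfer.CreateTransferJobRequest(transfer_job=job)
    )
    print(created.name)
```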


Question # 7:

You need to create a weekly aggregated sales report based on a large volume of data. You want to use Python to design an efficient process for generating this report. What should you do?

Options:

A.

Create a Cloud Run function that uses NumPy. Use Cloud Scheduler to schedule the function to run once a week.


B.

Create a Colab Enterprise notebook and use the bigframes.pandas library. Schedule the notebook to execute once a week.


C.

Create a Cloud Data Fusion and Wrangler flow. Schedule the flow to run once a week.


D.

Create a Dataflow directed acyclic graph (DAG) coded in Python. Use Cloud Scheduler to schedule the code to run once a week.
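
For context, a sketch of the kind of cell option B describes: a Colab Enterprise notebook using bigframes.pandas, so the aggregation runs in BigQuery rather than in notebook memory. Table and column names are placeholders, and the weekly cadence would come from the notebook's schedule, not from the code.

```python
# Sketch: aggregate sales in BigQuery via BigQuery DataFrames (bigframes).
import bigframes.pandas as bpd

bpd.options.bigquery.project = "my-project"  # placeholder project

orders = bpd.read_gbq("my-project.sales.orders")  # placeholder table

# Total sales per product; pushed down to BigQuery as SQL, not computed locally.
weekly_report = orders[["product_id", "sale_amount"]].groupby("product_id").sum()

# Persist the aggregate for the report's consumers.
weekly_report.to_gbq("my-project.sales.weekly_sales_report", if_exists="replace")
```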


Question # 8:

Your organization has several datasets in their data warehouse in BigQuery. Several analyst teams in different departments use the datasets to run queries. Your organization is concerned about the variability of their monthly BigQuery costs. You need to identify a solution that creates a fixed budget for costs associated with the queries run by each department. What should you do?

Options:

A.

Create a custom quota for each analyst in BigQuery.


B.

Create a single reservation by using BigQuery editions. Assign all analysts to the reservation.


C.

Assign each analyst to a separate project associated with their department. Create a single reservation by using BigQuery editions. Assign all projects to the reservation.


D.

Assign each analyst to a separate project associated with their department. Create a single reservation for each department by using BigQuery editions. Create assignments for each project in the appropriate reservation.
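
To illustrate option D, a sketch using the BigQuery Reservation Python client: one reservation per department (fixing that department's slot budget) and an assignment tying each department project to its reservation. The admin project, location, slot count, and edition value are placeholders.

```python
# Sketch: per-department reservation plus a project assignment.
from google.cloud import bigquery_reservation_v1 as reservation

client = reservation.ReservationServiceClient()

admin_project = "bq-admin-project"  # placeholder admin project
location = "US"                     # placeholder location
parent = f"projects/{admin_project}/locations/{location}"

marketing_reservation = client.create_reservation(
    parent=parent,
    reservation_id="marketing",
    reservation=reservation.Reservation(
        slot_capacity=100,                       # the department's fixed budget in slots
        edition=reservation.Edition.ENTERPRISE,  # assumed edition
    ),
)

client.create_assignment(
    parent=marketing_reservation.name,
    assignment=reservation.Assignment(
        assignee="projects/marketing-analytics",  # placeholder department project
        job_type=reservation.Assignment.JobType.QUERY,
    ),
)
```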


Question # 9:

Your organization's website uses an on-premises MySQL database as its backend. You need to migrate the on-premises MySQL database to Google Cloud while maintaining MySQL features. You want to minimize administrative overhead and downtime. What should you do?

Options:

A.

Install MySQL on a Compute Engine virtual machine. Export the database files using the mysqldump command. Upload the files to Cloud Storage, and import them into the MySQL instance on Compute Engine.


B.

Use Database Migration Service to transfer the data to Cloud SQL for MySQL, and configure the on-premises MySQL database as the source.


C.

Use a Google-provided Dataflow template to replicate the MySQL database in BigQuery.


D.

Export the database tables to CSV files, and upload the files to Cloud Storage. Convert the MySQL schema to a Spanner schema, create a JSON manifest file, and run a Google-provided Dataflow template to load the data into Spanner.
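
As a rough sketch of option B, the Database Migration Service Python client can create a continuous migration job between two connection profiles (one for the on-premises MySQL source, one for the Cloud SQL for MySQL destination). The profiles, connectivity settings, and resource names here are placeholders and are assumed to exist already.

```python
# Sketch: continuous DMS migration job from on-prem MySQL to Cloud SQL for MySQL.
from google.cloud import clouddms_v1

client = clouddms_v1.DataMigrationServiceClient()

project = "my-project"  # placeholder
region = "us-central1"  # placeholder
parent = f"projects/{project}/locations/{region}"

migration_job = clouddms_v1.MigrationJob(
    display_name="onprem-mysql-to-cloudsql",
    type_=clouddms_v1.MigrationJob.Type.CONTINUOUS,  # keep replicating to minimize downtime
    source=f"{parent}/connectionProfiles/onprem-mysql",         # placeholder source profile
    destination=f"{parent}/connectionProfiles/cloudsql-mysql",  # placeholder destination profile
)

operation = client.create_migration_job(
    parent=parent,
    migration_job_id="onprem-mysql-to-cloudsql",
    migration_job=migration_job,
)
print(operation.result().name)  # waits for the long-running create to finish
```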


Question # 10:

Your company uses Looker as its primary business intelligence platform. You want to use LookML to visualize the profit margin for each of your company's products in your Looker Explores and dashboards. You need to implement a solution quickly and efficiently. What should you do?

Options:

A.

Apply a filter to only show products with a positive profit margin.


B.

Define a new measure that calculates the profit margin by using the existing revenue and cost fields.


C.

Create a new dimension that categorizes products based on their profit margin ranges (e.g., high, medium, low).


D.

Create a derived table that pre-calculates the profit margin for each product, and include it in the Looker model.
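
To show what option B might look like in practice, here is a minimal LookML sketch that defines the margin as a measure over existing revenue and cost fields; the field names are placeholders for whatever the model already exposes.

```lookml
# Sketch: profit margin as a measure built from existing fields.
measure: profit_margin {
  type: number
  value_format_name: percent_2
  sql: (${total_revenue} - ${total_cost}) / NULLIF(${total_revenue}, 0) ;;
}
```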

