Pass the Google Cloud Certified Professional-Cloud-Architect Questions and Answers with CertsForce

Viewing page 5 out of 7 pages
Viewing questions 41-50
Questions # 41:

For this question, refer to the Helicopter Racing League (HRL) case study. Recently, HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the content delivery network (CDN) provider Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud (VPC) network. You are a member of the HRL security team, and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. What should you do?

Options:

A.

Apply a Cloud Armor security policy to external load balancers using a named IP list for Fastly.


B.

Apply a Cloud Armor security policy to external load balancers using the IP addresses that Fastly has published.


C.

Apply a VPC firewall rule on port 443 for Fastly IP address ranges.


D.

Apply a VPC firewall rule on port 443 for network resources tagged with sourceiplist-fastly.
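For context on options A and D, Cloud Armor can reference the Google-maintained Fastly named IP list directly in a rule expression, so the allowlist stays current without manual updates. A minimal sketch, assuming a hypothetical security policy named fastly-allowlist already attached to the load balancer's backend service:

```
# Allow traffic matching the preconfigured Fastly named IP list.
gcloud compute security-policies rules create 1000 \
    --security-policy=fastly-allowlist \
    --expression="evaluatePreconfiguredExpr('sourceiplist-fastly')" \
    --action=allow

# Tighten the default rule (priority 2147483647) to reject everything else.
gcloud compute security-policies rules update 2147483647 \
    --security-policy=fastly-allowlist \
    --action=deny-404
```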


Questions # 42:

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?

Options:

A.

Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.


B.

Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.


C.

Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.


D.

Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.
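To illustrate the mechanics behind option C: BigQuery supports integer-range partitioning, so a season column can drive partition pruning, and new telemetry fields can be added as columns later without downtime. A minimal sketch, with a hypothetical dataset and schema:

```
# Create a telemetry table partitioned on an integer "season" column.
bq query --use_legacy_sql=false '
CREATE TABLE IF NOT EXISTS hrl_data.race_telemetry (
  season INT64,
  event STRING,
  recorded_at TIMESTAMP,
  telemetry JSON
)
PARTITION BY RANGE_BUCKET(season, GENERATE_ARRAY(2000, 2100, 1))'
```

A query that filters on a single season then scans only that partition, which keeps model-training reads over the previous season's data cheap.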


Questions # 43:

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

Options:

A.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.


B.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.


C.

They should add additional unit tests and production-scale load tests on their cloud staging environment.


D.

They should add canary tests so developers can measure how much impact the new release has on latency.


Questions # 44:

For this question, refer to the Dress4Win case study.

Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the services as healthy. What should they do?

Options:

A.

Install the Stackdriver agent on all of the legacy web servers.


B.

In the Cloud Platform Console, download the list of the uptime servers' IP addresses and create an inbound firewall rule.


C.

Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).


D.

Configure their legacy web servers to allow requests that contain the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).
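As background for option B: uptime-check probes originate from a published set of Google IP ranges, which can be allowlisted with a VPC firewall rule. A minimal sketch, assuming a hypothetical network name; the actual source ranges must be pulled from the Monitoring console or API:

```
# Allow uptime-check probes from Google's published ranges to reach the web servers.
# Replace UPTIME_CHECK_IP_RANGES with the downloaded list of ranges.
gcloud compute firewall-rules create allow-uptime-checks \
    --network=dress4win-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges="UPTIME_CHECK_IP_RANGES"
```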


Questions # 45:

For this question, refer to the Dress4Win case study.

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

Options:

A.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.


B.

They should add additional unit tests and production-scale load tests on their cloud staging environment.


C.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.


D.

They should add canary tests so developers can measure how much impact the new release has on latency.


Questions # 46:

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.

Which two actions should you take?

Options:

A.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second Cloud Storage lifecycle rule with Age: “365”, Storage Class: “Coldline”, and Action: “Delete”.


B.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to Nearline”, and create a second Cloud Storage lifecycle rule with Age: “91”, Storage Class: “Coldline”, and Action: “Set to Nearline”.


C.

Create a Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to Nearline”, and create a second Cloud Storage lifecycle rule with Age: “91”, Storage Class: “Nearline”, and Action: “Set to Coldline”.


D.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second Cloud Storage lifecycle rule with Age: “365”, Storage Class: “Nearline”, and Action: “Delete”.
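For reference, lifecycle rules like those in the options are written as a JSON policy and applied with gsutil. A minimal sketch using the rules from option A purely as an illustration (the bucket name is hypothetical):

```
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF

# Apply the policy to the bucket.
gsutil lifecycle set lifecycle.json gs://terramearth-telemetry
```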


Questions # 47:

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.

What should you do?

Options:

A.

Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.


B.

Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.


C.

Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.


D.

Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.
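To show the mechanics behind option C: the cleaning logic can be saved as a view and materialized into a table each day. A minimal sketch, with hypothetical dataset, table, and filter names (the daily run could be wrapped in a BigQuery scheduled query):

```
# Save the cleaning logic as a view.
bq mk --use_legacy_sql=false \
    --view='SELECT * FROM telemetry.raw_events WHERE vehicle_id IS NOT NULL' \
    telemetry.clean_events

# Materialize the view into a result table, replacing the previous day's output.
bq query --use_legacy_sql=false \
    --destination_table=telemetry.clean_events_daily \
    --replace \
    'SELECT * FROM telemetry.clean_events'
```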


Questions # 48:

TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost.

What should you do?

Options:

A.

Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.


B.

Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.


C.

Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.


D.

Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
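To sketch the plumbing behind option C: an alerting policy on the uptime check can publish to a Pub/Sub notification channel, which in turn triggers a Cloud Function. A minimal sketch of that event path, with hypothetical topic, function, and entry-point names (the uptime check and alerting policy themselves can be created in the Monitoring console):

```
# Topic that receives alert notifications from Cloud Monitoring.
gcloud pubsub topics create uptime-alerts

# Function that swaps the URL to the "Site is unavailable" page and pages Ops.
gcloud functions deploy failover-handler \
    --runtime=python311 \
    --trigger-topic=uptime-alerts \
    --entry-point=handle_uptime_alert
```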


Questions # 49:

For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

Options:

A.

Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.


B.

Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.


C.

Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.


D.

Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.
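As background for option A, Google provides a streaming Dataflow template that reads from Pub/Sub and writes rows into BigQuery. A minimal sketch, with hypothetical project, topic, and table names:

```
# Topic the vehicle fleet streams telemetry into.
gcloud pubsub topics create vehicle-telemetry

# Launch the Google-provided Pub/Sub-to-BigQuery streaming template.
gcloud dataflow jobs run telemetry-ingest \
    --region=us-central1 \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --parameters=inputTopic=projects/my-project/topics/vehicle-telemetry,outputTableSpec=my-project:telemetry.events
```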


Questions # 50:

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.

Considering the technical requirements, which components should you use for the ingestion of the data?

Options:

A.

Google Kubernetes Engine with an SSL Ingress


B.

Cloud IoT Core with public/private key pairs


C.

Compute Engine with project-wide SSH keys


D.

Compute Engine with specific SSH keys
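For context on option B: Cloud IoT Core (since retired by Google, but current when this case study was written) authenticated each device with its own public/private key pair and routed telemetry to Pub/Sub. A minimal sketch, with hypothetical registry, topic, device, and key-file names:

```
# Registry that routes device telemetry to a Pub/Sub topic.
gcloud iot registries create vehicle-registry \
    --region=us-central1 \
    --event-notification-config=topic=vehicle-telemetry

# Register one vehicle with its RSA public key; the private key never leaves the device.
gcloud iot devices create vehicle-0001 \
    --region=us-central1 \
    --registry=vehicle-registry \
    --public-key=path=rsa_cert.pem,type=rsa-x509-pem
```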

