Pass the Google Cloud Certified Professional-Cloud-Architect questions and answers with CertsForce

Viewing page 6 of 7
Viewing questions 51-60
Question #51:

TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?

Options:

A.

Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.


B.

Configure the Storage Transfer service from Google Cloud to send the data from your data center to Cloud Storage


C.

Make sure there are no other users consuming the 1 Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.


D.

Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage

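Back-of-the-envelope arithmetic shows why the link alone cannot meet the deadline, assuming the 1-Gbps interconnect is fully dedicated and perfectly utilized:

# 1 PB = 8,000,000,000,000,000 bits; a 1-Gbps link moves 1,000,000,000 bits/s
# divide by 86,400 seconds per day to get the transfer time in days
echo $(( 8000000000000000 / 1000000000 / 86400 ))   # prints 92, i.e. roughly 3 months

At about 93 days under ideal conditions (and longer in practice), an online transfer cannot finish within the month, which is what makes shipping Transfer Appliances the viable path here.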

Question #52:

You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

Options:

A.

Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.


B.

Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups (NEGs) pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.


C.

Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.


D.

Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.

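A minimal gcloud sketch of the multi-region pattern in option B, assuming hypothetical names (my-service, my-neg, my-backend) and one example region; the NEG creation and add-backend step would be repeated per region:

# Serverless NEG that points at the Cloud Run service in one region
gcloud compute network-endpoint-groups create my-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=my-service

# Global backend service used by the external HTTP(S) load balancer
gcloud compute backend-services create my-backend --global
gcloud compute backend-services add-backend my-backend --global \
    --network-endpoint-group=my-neg \
    --network-endpoint-group-region=us-central1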

Question #53:

For this question, refer to the TerramEarth case study. To be compliant with the European GDPR regulation, TerramEarth is required to delete data generated by its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?

Options:

A.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.


B.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.


C.

Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.


D.

Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

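A rough sketch of the combined retention setup (dataset, table, and bucket names are hypothetical; 36 months is approximated as 1,095 days):

# BigQuery: day-partitioned table whose partitions expire after ~36 months
# (the flag takes seconds: 1,095 days x 86,400 s = 94,608,000)
bq mk --table \
    --time_partitioning_type=DAY \
    --time_partitioning_expiration=94608000 \
    mydataset.eu_vehicle_data

# Cloud Storage: lifecycle rule that deletes objects older than ~36 months
echo '{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 1095}}]}' > lifecycle.json
gsutil lifecycle set lifecycle.json gs://eu-vehicle-data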

Question #54:

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

Options:

A.

Replace the existing data warehouse with BigQuery. Use table partitioning.


B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.


C.

Replace the existing data warehouse with BigQuery. Use federated data sources.


D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.

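To see why partitioning matters at this scale, a query that filters on the partition pseudo-column scans and bills only the matching partitions rather than the whole table (dataset, table, and column names are hypothetical):

# Only the partition for the requested day is scanned and billed
bq query --use_legacy_sql=false '
SELECT vehicle_id, AVG(fuel_rate) AS avg_fuel_rate
FROM mydataset.telemetry
WHERE _PARTITIONDATE = DATE "2024-06-01"
GROUP BY vehicle_id'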

Question #55:

You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20 Gbps. You want to follow Google-recommended practices.

How should you set up the connection?

Options:

A.

Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.


B.

Create a VPC and connect it to your on-premises data center using a single Cloud VPN.


C.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.


D.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using a single Cloud VPN.

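A single Dedicated Interconnect circuit comes in 10-Gbps or 100-Gbps increments, so a 20-Gbps requirement is typically met with two 10-Gbps circuits. A sketch of wiring one circuit into a VPC via a VLAN attachment, with all resource names hypothetical:

# VLAN attachment connecting a Cloud Router in the VPC to a provisioned
# Dedicated Interconnect circuit (repeat for the second 10-Gbps circuit)
gcloud compute interconnects attachments dedicated create my-attachment \
    --router=my-router \
    --interconnect=my-interconnect \
    --region=us-central1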

Question #56:

Your company places a high value on being responsive and meeting customer needs quickly. Its primary business objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced. Which two actions can you take? (Choose two.)

Options:

A.

Ensure every code check-in is peer reviewed by a security SME.


B.

Use source code security analyzers as part of the CI/CD pipeline.


C.

Ensure you have stubs to unit test all interfaces between components.


D.

Enable code signing and a trusted binary repository integrated with your CI/CD pipeline.


E.

Run a vulnerability security scanner as part of your continuous-integration/continuous-delivery (CI/CD) pipeline.

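As a concrete example of option E, a CI step can scan each freshly built container image and fail the pipeline on findings; the image path is hypothetical, and this assumes the On-Demand Scanning API is enabled in the project:

# Scan the image in Artifact Registry before allowing a deploy
gcloud artifacts docker images scan \
    us-central1-docker.pkg.dev/my-project/my-repo/app:latest \
    --remote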

Question #57:

You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.

What should you do?

Options:

A.

Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10


B.

Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling,max-nodes-10


C.

Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10


D.

Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10, and redeploy your application.

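For reference, on current gcloud releases the update in option C no longer needs the alpha track; a hedged equivalent (cluster and node-pool names are hypothetical):

# Enable the cluster autoscaler on an existing node pool
gcloud container clusters update mycluster \
    --enable-autoscaling \
    --min-nodes=1 --max-nodes=10 \
    --node-pool=default-pool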

Question #58:

Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling and relaunches.

Which feature of Kubernetes should you use to accomplish this?

Options:

A.

StatefulSets


B.

Role-based access control


C.

Container environment variables


D.

Persistent Volumes

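A minimal sketch of the StatefulSet behavior in question: each replica receives a stable ordinal hostname (web-0, web-1, ...) that survives rescheduling, resolvable through a headless Service. All names and the image are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None        # headless Service: per-pod DNS records
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web       # pods resolve as web-0.web, web-1.web, ...
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF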

Question #59:

Your company has an enterprise application running on Compute Engine that requires high availability and high performance. The application has been deployed on two instances in two zones in the same region in active-passive mode. The application writes data to a persistent disk; in the case of a single-zone outage, that data should be immediately made available to the other instance in the other zone. You want to maximize performance while minimizing downtime and data loss. What should you do?

Options:

A.

1. Attach a persistent SSD disk to the first instance.

2. Create a snapshot every hour.

3. In case of a zone outage, recreate the persistent SSD disk for the second instance from the latest snapshot.


B.

1. Create a Cloud Storage bucket.

2. Mount the bucket into the first instance with gcsfuse.

3. In case of a zone outage, mount the Cloud Storage bucket to the second instance with gcsfuse.


C.

1. Attach a local SSD to the first instance.

2. Execute an rsync command every hour with a persistent SSD disk attached to the second instance as the target.

3. In case of a zone outage, use the second instance.


D.

1. Attach a regional SSD persistent disk to the first instance.

2. In case of a zone outage, force-attach the disk to the other instance

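A sketch of the failover step described in option D, assuming a regional persistent disk named data-disk replicated between the two zones and a standby instance in the surviving zone:

# Force-attach the regional disk to the standby instance even though it is
# still attached to the unreachable instance in the outaged zone
gcloud compute instances attach-disk standby-instance \
    --zone=us-central1-b \
    --disk=data-disk \
    --disk-scope=regional \
    --force-attach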

Question #60:

Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as possible with the latest application data. You need to design the solution to meet this requirement. What should you do?

Options:

A.

Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone.


B.

Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.


C.

Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region.


D.

Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional persistent disk for the application data.

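A rough sketch of the resources behind option B, with all names, sizes, and zones hypothetical:

# Regional persistent disk, synchronously replicated across two zones
gcloud compute disks create app-data \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b \
    --size=200GB --type=pd-ssd

# Instance template used to recreate the application VM in either zone
gcloud compute instance-templates create app-template \
    --machine-type=e2-standard-4 \
    --image-family=debian-12 --image-project=debian-cloud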
