Pass the Google Cloud DevOps Engineer (Professional-Cloud-DevOps-Engineer) questions and answers with CertsForce

Question # 11:

You are developing a strategy for monitoring your Google Cloud Platform (GCP) projects in production using Stackdriver Workspaces. One of the requirements is to be able to quickly identify and react to production environment issues without false alerts from development and staging projects. You want to ensure that you adhere to the principle of least privilege when providing relevant team members with access to Stackdriver Workspaces. What should you do?

Options:

A.

Grant relevant team members read access to all GCP production projects. Create Stackdriver workspaces inside each project.


B.

Grant relevant team members the Project Viewer IAM role on all GCP production projects. Create Stackdriver workspaces inside each project.


C.

Choose an existing GCP production project to host the monitoring workspace. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.


D.

Create a new GCP monitoring project, and create a Stackdriver Workspace inside it. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.


Question # 12:

You are managing the production deployment to a set of Google Kubernetes Engine (GKE) clusters. You want to make sure only images which are successfully built by your trusted CI/CD pipeline are deployed to production. What should you do?

Options:

A.

Enable Cloud Security Scanner on the clusters.


B.

Enable Vulnerability Analysis on the Container Registry.


C.

Set up the Kubernetes Engine clusters as private clusters.


D.

Set up the Kubernetes Engine clusters with Binary Authorization.
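For illustration only, a minimal sketch of the Binary Authorization policy referenced in option D, requiring an attestation from the trusted CI/CD pipeline before an image can be deployed; PROJECT_ID and the attestor name are placeholders, not values from the question.

```yaml
# Hypothetical policy.yaml, importable with: gcloud container binauthz policy import policy.yaml
# Assumes an attestor named "built-by-ci" already exists and is signed by the CI/CD pipeline.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION          # only attested images may be deployed
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/PROJECT_ID/attestors/built-by-ci
```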


Question # 13:

You are working with a government agency that requires you to archive application logs for seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of storage. What should you do?

Options:

A.

Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.


B.

Develop an App Engine application that pulls the logs from Stackdriver and saves them in BigQuery.


C.

Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent storage for seven years.


D.

Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing archived logs, and then select the bucket as the log export destination.


Question # 14:

Your company is developing applications that are deployed on Google Kubernetes Engine (GKE). Each team manages a different application. You need to create the development and production environments for each team, while minimizing costs. Different teams should not be able to access other teams’ environments. What should you do?

Options:

A.

Create one GCP Project per team. In each project, create a cluster for Development and one for Production. Grant the teams IAM access to their respective clusters.


B.

Create one GCP Project per team. In each project, create a cluster with a Kubernetes namespace for Development and one for Production. Grant the teams IAM access to their respective clusters.


C.

Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Identity Aware Proxy so that each team can only access its own namespace.


D.

Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Kubernetes Role-based access control (RBAC) so that each team can only access its own namespace.
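As a sketch of the RBAC approach described in option D, a RoleBinding like the one below can restrict a team to its own namespace; the namespace name and the Google group are hypothetical placeholders.

```yaml
# Hypothetical RoleBinding: grants the built-in "edit" ClusterRole to team A,
# but only inside the team-a namespace of this cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                      # read/write on most namespaced objects
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: team-a@example.com      # assumes Google Groups for RBAC is configured on the cluster
```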


Question # 15:

Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI). What should you do?

Options:

A.

Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.


B.

Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.


C.

Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.


D.

Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.
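To illustrate the pattern in option C, once the custom metrics adapter is installed, a HorizontalPodAutoscaler can target the load balancer's request count as an external metric; the Deployment name and target value below are assumptions.

```yaml
# Hypothetical HPA scaling the frontend Deployment on GCLB request volume.
# Requires the Custom Metrics Stackdriver Adapter to be running in the cluster.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: loadbalancing.googleapis.com|https|request_count
          # In practice a selector is usually added here to match the ingress forwarding rule.
        target:
          type: AverageValue
          averageValue: "100"     # assumed requests per pod; tune to the workload
```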


Question # 16:

Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?

Options:

A.

Use Google Cloud Deploy to deploy the network policies and the DaemonSet. Use Cloud Monitoring to trigger an alert if the network policies and DaemonSet drift from your source in the repository.


B.

Use Google Cloud Deploy to deploy the DaemonSet, and use Policy Controller to configure the network policies. Use Cloud Monitoring to detect drift from the source in the repository and Cloud Functions to correct the drift.


C.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Config Sync to sync the configurations for the three environments.


D.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Policy Controller to enforce the configurations for the three environments.
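As a sketch of the Config Sync approach named in option C, a RootSync object can point each cluster at the same GitHub repository so that the network policies and the logging DaemonSet stay consistent across environments; the repository URL and directory are placeholders.

```yaml
# Hypothetical RootSync applied to each of the three clusters (dev, staging, prod),
# each pointing at its environment directory in the shared GitHub repository.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/gke-config   # placeholder repository
    branch: main
    dir: environments/production                      # e.g. environments/development, /staging, /production
    auth: token
    secretRef:
      name: git-creds
```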


Question # 17:

You are writing a postmortem for an incident that severely affected users. You want to prevent similar incidents in the future. Which two of the following sections should you include in the postmortem? (Choose two.)

Options:

A.

An explanation of the root cause of the incident


B.

A list of employees responsible for causing the incident


C.

A list of action items to prevent a recurrence of the incident


D.

Your opinion of the incident’s severity compared to past incidents


E.

Copies of the design documents for all the services impacted by the incident


Question # 18:

You are using Jenkins to orchestrate your CI/CD pipelines deploying your applications to GKE and on-premises VMs. After expanding your on-premises infrastructure, all resource utilization is normal, but you notice that initiating on-premises deployments is taking longer than expected. You need to accelerate the initiation of on-premises deployments. What should you do?

Options:

A.

Implement a blue-green deployment strategy for your on-premises VMs.


B.

Upgrade your on-premises VMs to have faster CPUs and more memory.


C.

Increase the number of Jenkins executor threads.


D.

Configure Jenkins to use an artifact repository in your on-premises environment.


Question # 19:

Your organization has a containerized web application that runs on-premises. As part of the migration plan to Google Cloud, you need to select a deployment strategy and platform that meets the following acceptance criteria:

1. The platform must be able to direct traffic from Android devices to an Android-specific microservice.

2. The platform must allow for arbitrary percentage-based traffic splitting.

3. The deployment strategy must allow for continuous testing of multiple versions of any microservice.

What should you do?

Options:

A.

Deploy the canary release of the application to Cloud Run. Use traffic splitting to direct 10% of user traffic to the canary release based on the revision tag.


B.

Deploy the canary release of the application to App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address.


C.

Deploy the canary release of the application to Compute Engine. Use Anthos Service Mesh with Compute Engine to direct 10% of user traffic to the canary release by configuring the virtual service.


D.

Deploy the canary release to Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct 10% of user traffic to the new version based on the User-Agent header configured in the virtual service.
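For illustration, the routing described in option D could be expressed with an Anthos Service Mesh (Istio) VirtualService similar to the sketch below; the host name, subset names, and regex are assumptions, and matching DestinationRule subsets would need to exist.

```yaml
# Hypothetical VirtualService: Android traffic is split 90/10 between the stable
# and canary subsets, while all other clients stay on the stable subset.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
    - frontend
  http:
    - match:
        - headers:
            user-agent:
              regex: ".*Android.*"
      route:
        - destination:
            host: frontend
            subset: stable
          weight: 90
        - destination:
            host: frontend
            subset: canary
          weight: 10
    - route:
        - destination:
            host: frontend
            subset: stable
```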


Question # 20:

You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs:

Initializing the backend...

Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403

You need to resolve the issue by following Google-recommended practices. What should you do?

Options:

A.

Change the Terraform code to use local state.


B.

Create a storage bucket with the name specified in the Terraform configuration.


C.

Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project.


D.

Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.
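For context, a Cloud Build pipeline that runs Terraform typically looks like the sketch below; the 403 in the question surfaces during terraform init, when the build's service account cannot access the Cloud Storage backend, which is what the IAM grant in option D addresses. The image tag and step layout are assumptions.

```yaml
# Hypothetical cloudbuild.yaml. The service account running this build needs
# read/write access (e.g. roles/storage.objectAdmin) on the Terraform state bucket,
# otherwise "terraform init" fails with the googleapi 403 shown above.
steps:
  - id: terraform-init
    name: hashicorp/terraform:1.5
    entrypoint: terraform
    args: ["init"]
  - id: terraform-apply
    name: hashicorp/terraform:1.5
    entrypoint: terraform
    args: ["apply", "-auto-approve"]
```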

