
Pass the Google Cloud Certified Professional-Cloud-Architect questions and answers with CertsForce

Viewing page 5 out of 7 pages
Viewing questions 41-50
Question # 41:

Refer to the Altostrat Media case study for the following solution regarding the performance analysis of its media processing pipeline.

Altostrat needs to analyze the performance of its media processing pipeline, which runs on a Java-based Cloud Run function. You need to select the most effective tool for the task. What should you do?

Options:

A.

Query logs in Cloud Logging.


B.

Analyze the data via Cloud Profiler.


C.

Instrument the code to use Cloud Trace.


D.

Inspect data from Snapshot Debugger.
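For context on option B: Cloud Profiler is the tool aimed at continuous CPU and heap performance analysis of running services. A minimal sketch of attaching its Java agent follows the pattern in Google's documentation; the service name "media-pipeline" and the jar name are placeholders, not details from the case study.

```shell
# Hedged sketch: attach the Cloud Profiler Java agent at JVM startup.
# Agent path follows the documented install location; service name,
# version, and jar are hypothetical placeholders.
java \
  -agentpath:/opt/cprof/profiler_java_agent.so=-cprof_service=media-pipeline,-cprof_service_version=1.0.0 \
  -jar pipeline.jar
```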


Question # 42:

Refer to the Altostrat Media case study for the following solutions regarding cost optimization for batch processing and microservices testing strategies.

Altostrat is experiencing fluctuating computational demands for its batch processing jobs. These jobs are not time-critical and can tolerate occasional interruptions. You want to optimize cloud costs and address batch processing needs. What should you do?

Options:

A.

Configure reserved VM instances.


B.

Deploy spot VM instances.


C.

Set up standard VM instances.


D.

Use Cloud Run functions.
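For context on option B: Spot VMs trade preemptibility for a steep discount, which suits interruption-tolerant, non-time-critical batch jobs. A minimal sketch of creating one (instance name, zone, and machine type are placeholders) might look like:

```shell
# Hedged sketch: create a Spot VM for interruption-tolerant batch work.
# Name, zone, and machine type are hypothetical placeholders.
gcloud compute instances create batch-worker-1 \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --provisioning-model=SPOT \
  --instance-termination-action=DELETE
```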


Question # 43:

Altostrat's development team is using a microservices architecture for their application. You need to select the most suitable testing approach to ensure that individual microservices function correctly in isolation. What should you do?

Options:

A.

Run unit testing.


B.

Use load testing.


C.

Perform end-to-end testing.


D.

Execute integration testing.
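For context on option A: unit tests exercise a single service's code in isolation, with its dependencies stubbed or mocked, so they can run without deploying the rest of the system. A typical invocation (repository layout and paths are hypothetical) might be:

```shell
# Hedged sketch: run only the unit-test suite of one microservice.
# The directory layout is a hypothetical placeholder.
pytest services/recommendations/tests/unit -q
```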


Question # 44:

Your team plans to use Vertex AI to develop and deploy machine learning models for various use cases for fraud detection, product recommendations, and customer churn prediction. You want to enhance the security posture of the Vertex AI and Workbench environment by restricting data exfiltration. What should you do?

Options:

A.

Create a service perimeter and include ml.googleapis.com and document.googleapis.com as protected services.


B.

Enable VPC Flow Logs to monitor network traffic to and from Vertex AI services and to identify suspicious activity.


C.

Create a service perimeter and include aiplatform.googleapis.com and notebooks.googleapis.com as protected services.


D.

Enable Private Google Access for the VPC network to allow Vertex AI services to access public Google services without traversing the public internet.
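For context on the VPC Service Controls options: a service perimeter restricts which Google APIs can move data in and out of a set of projects, which is the mechanism for limiting data exfiltration. A minimal sketch of a perimeter protecting the Vertex AI and Workbench APIs (the policy ID, perimeter name, and project number are placeholders) might look like:

```shell
# Hedged sketch: create a VPC Service Controls perimeter covering the
# Vertex AI (aiplatform.googleapis.com) and Workbench
# (notebooks.googleapis.com) APIs. POLICY_ID, the perimeter name, and
# the project number are hypothetical placeholders.
gcloud access-context-manager perimeters create vertex_perimeter \
  --policy=POLICY_ID \
  --title="Vertex AI perimeter" \
  --resources=projects/123456789012 \
  --restricted-services=aiplatform.googleapis.com,notebooks.googleapis.com
```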


Question # 45:

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.

What should you do?

Options:

A.

Use Stackdriver Trace to create a trace list analysis.


B.

Use Stackdriver Monitoring to create a dashboard on the project’s activity.


C.

Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.


D.

Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.


Question # 46:

You have deployed an application on Anthos clusters (formerly Anthos GKE). According to the SRE practices at your company, you need to be alerted if the request latency is above a certain threshold for a specified amount of time. What should you do?

Options:

A.

Enable the Cloud Trace API on your project and use Cloud Monitoring Alerts to send an alert based on the Cloud Trace metrics


B.

Configure Anthos Config Management on your cluster and create a YAML file that defines the SLO and alerting policy you want to deploy in your cluster


C.

Use Cloud Profiler to follow up the request latency. Create a custom metric in Cloud Monitoring based on the results of Cloud Profiler, and create an Alerting Policy in case this metric exceeds the threshold


D.

Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO)


Question # 47:

For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

Options:

A.

Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.


B.

Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.


C.

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.


D.

Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.


Question # 48:

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.

What change in the on-premises architecture should you make?

Options:

A.

Replace RabbitMQ with Google Pub/Sub.


B.

Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.


C.

Resize compute resources to match predefined Compute Engine machine types.


D.

Containerize the microservices and host them in Google Kubernetes Engine.


Question # 49:

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

Options:

A.

Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.


B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.


C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.


D.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.


Question # 50:

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

Options:

A.

Web applications deployed using App Engine standard environment


B.

RabbitMQ deployed using an unmanaged instance group


C.

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode


D.

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

