
Pass the Amazon Web Services AWS Certified Solutions Architect - Professional (SAP-C02) Questions and answers with CertsForce

Viewing page 2 out of 13 pages
Viewing questions 16-30
Questions # 16:

A company uses multiple software as a service (SaaS) applications for messaging, email, and file sharing. The SaaS applications are compatible with AWS AppFabric. The company's web application runs in a VPC on an Amazon EKS cluster and uses Amazon S3 to store data.

The company wants to detect security incidents across the SaaS applications and the web application that could compromise company data. The company needs a centralized solution that provides a dashboard. The dashboard must show the IP addresses, email addresses, and access frequencies of unique users across its SaaS applications and the web application.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

Options:

A.

Ingest audit log data from each SaaS application into AWS AppFabric. Convert the audit log data into Open Cybersecurity Schema Framework (OCSF) normalized Apache Parquet format. Send the logs to Amazon Data Firehose to be delivered to an Amazon Security Lake S3 bucket.


B.

Ingest networking and usage log data from each SaaS application into AWS AppFabric. Convert the networking and usage log data into JSON format. Send the logs to Amazon Data Firehose to be delivered to Amazon OpenSearch Service.


C.

Create an Amazon S3 bucket to receive logs in JSON format through Amazon Data Firehose. Create a dashboard in Amazon CloudWatch. Configure the dashboard to visualize the location of the IP addresses, email addresses, and access frequencies of unique users by using data from the S3 bucket.


D.

Configure the logs associated with AWS CloudTrail management events, AWS CloudTrail data events for Amazon S3, Amazon EKS audit logs, and VPC Flow Logs as sources in Amazon Security Lake. Add AWS AppFabric as a custom source in Security Lake.


E.

Configure Amazon Security Lake to send security data from different sources to Amazon Redshift. Use Amazon QuickSight to create a visualization of the security data.


F.

Configure Amazon Security Lake to send security data from different sources to Amazon OpenSearch Service by using OpenSearch Ingestion. Use the OpenSearch Service dashboard to create a visualization of the security data.


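Several options above hinge on OCSF normalization, which is what makes user attributes queryable across SaaS sources. As context, the sketch below shows an illustrative OCSF-style audit event; every field value is made up, and the class ID is an assumption based on OCSF's authentication event class.

```python
import json

# Illustrative OCSF-style audit event (all values hypothetical).
# AWS AppFabric normalizes SaaS audit logs into this kind of schema,
# which is how IP addresses, email addresses, and access counts of
# unique users become queryable in one place.
event = {
    "class_uid": 3002,  # assumed: OCSF Authentication event class
    "time": 1700000000000,
    "activity_name": "Logon",
    "actor": {"user": {"email_addr": "user@example.com"}},
    "src_endpoint": {"ip": "203.0.113.10"},
}

record = json.dumps(event)
```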
Questions # 17:

A digital marketing company has multiple AWS accounts that belong to various teams. The creative team uses an Amazon S3 bucket in its AWS account to securely store images and media files that are used as content for the company's marketing campaigns. The creative team wants to share the S3 bucket with the strategy team so that the strategy team can view the objects.

A solutions architect has created an IAM role that is named strategy_reviewer in the Strategy account. The solutions architect also has set up a custom AWS Key Management Service (AWS KMS) key in the Creative account and has associated the key with the S3 bucket. However, when users from the Strategy account assume the IAM role and try to access objects in the S3 bucket, they receive an Access Denied error.

The solutions architect must ensure that users in the Strategy account can access the S3 bucket. The solution must provide these users with only the minimum permissions that they need.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

Options:

A.

Create a bucket policy that includes read permissions for the S3 bucket. Set the principal of the bucket policy to the account ID of the Strategy account.


B.

Update the strategy_reviewer IAM role to grant full permissions for the S3 bucket and to grant decrypt permissions for the custom KMS key.


C.

Update the custom KMS key policy in the Creative account to grant decrypt permissions to the strategy_reviewer IAM role.


D.

Create a bucket policy that includes read permissions for the S3 bucket. Set the principal of the bucket policy to an anonymous user.


E.

Update the custom KMS key policy in the Creative account to grant encrypt permissions to the strategy_reviewer IAM role.


F.

Update the strategy_reviewer IAM role to grant read permissions for the S3 bucket and to grant decrypt permissions for the custom KMS key.


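The cross-account mechanics that options A, C, and F describe can be sketched as policy documents. This is a minimal sketch: the account ID, role name, and bucket name are hypothetical.

```python
STRATEGY_ACCOUNT = "222222222222"  # hypothetical account ID
ROLE_ARN = f"arn:aws:iam::{STRATEGY_ACCOUNT}:role/strategy_reviewer"

# Bucket policy in the Creative account (option A): read-only access,
# scoped to the Strategy account as principal. The role's own IAM
# policy (option F) then narrows which identities can actually read.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{STRATEGY_ACCOUNT}:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::creative-assets",      # hypothetical bucket
            "arn:aws:s3:::creative-assets/*",
        ],
    }],
}

# KMS key policy statement in the Creative account (option C):
# decrypt only, granted to the specific role, not the whole account.
kms_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": ROLE_ARN},
    "Action": "kms:Decrypt",
    "Resource": "*",
}
```

Because the objects are KMS-encrypted, read access alone is not enough: without the kms:Decrypt grant, GetObject fails even when the bucket policy allows it.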
Questions # 18:

A company is deploying a third-party firewall appliance solution from AWS Marketplace to monitor and protect traffic that leaves the company's AWS environments. The company wants to deploy this appliance into a shared services VPC and route all outbound internet-bound traffic through the appliances.

A solutions architect needs to recommend a deployment method that prioritizes reliability and minimizes failover time between firewall appliances within a single AWS Region. The company has set up routing from the shared services VPC to other VPCs.

Which steps should the solutions architect recommend to meet these requirements? (Select THREE.)

Options:

A.

Deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone.


B.

Create a new Network Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Network Load Balancer. Add each of the firewall appliance instances to the target group.


C.

Create a new Gateway Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Gateway Load Balancer. Add each of the firewall appliance instances to the target group.


D.

Create a VPC interface endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.


E.

Deploy two firewall appliances into the shared services VPC, each in the same Availability Zone.


F.

Create a VPC Gateway Load Balancer endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.


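The routing step that options C and F turn on can be sketched as a single route entry: traffic entering the shared services VPC gets a next hop of the Gateway Load Balancer endpoint. The IDs below are hypothetical; with boto3 this would map to `ec2.create_route(**route)`.

```python
# Route that steers traffic through the Gateway Load Balancer endpoint.
# Both IDs are hypothetical placeholders.
route = {
    "RouteTableId": "rtb-0123456789abcdef0",    # shared services VPC route table
    "DestinationCidrBlock": "0.0.0.0/0",        # internet-bound traffic
    "VpcEndpointId": "vpce-0123456789abcdef0",  # GWLB endpoint as next hop
}
```

A Gateway Load Balancer (rather than a Network Load Balancer) is what supports this endpoint type and transparently passes traffic to the appliance fleet, with health checks handling failover between Availability Zones.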
Questions # 19:

A company's AWS environment includes an Amazon RDS for MySQL database in a Multi-AZ deployment and an Amazon EC2 Auto Scaling group behind an Application Load Balancer (ALB). The Auto Scaling group spans two Availability Zones. The company also uses Amazon Route 53 for DNS hosting.

The company runs an application in its AWS environment. More than 95% of the application's operations are read operations. A solutions architect needs to deploy the workload to a second AWS Region. The solution must reduce application latency while maintaining business continuity.

Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Migrate the RDS for MySQL database to an Amazon Aurora MySQL global database. Create an ALB in the new Region. Deploy a new EC2 Auto Scaling group behind the new ALB.


B.

Migrate the RDS for MySQL database to a Multi-AZ deployment in a new Region. Create an ALB in the new Region. Deploy an Amazon CloudFront distribution in front of the new ALB.


C.

Configure latency-based routing in Route 53. Add a new record that points to both ALBs.


D.

Configure geolocation routing in Route 53. Add a new alias record that points to both ALBs.


E.

Migrate the RDS for MySQL database to Amazon Aurora Serverless v2. Create a new ALB. Deploy an EC2 Auto Scaling group behind the new ALB.


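The Route 53 setup in option C can be sketched as a change batch: two alias records with the same name, one per Region, distinguished by SetIdentifier so Route 53 answers with the lowest-latency target. Zone IDs and DNS names below are hypothetical placeholders.

```python
# Latency-based routing sketch: two records, same name, one per Region.
# All IDs/hostnames are hypothetical.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary-us-east-1",
                "Region": "us-east-1",
                "AliasTarget": {
                    "HostedZoneId": "Z0000000000000000000",  # ALB's zone ID
                    "DNSName": "alb-primary.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "secondary-eu-west-1",
                "Region": "eu-west-1",
                "AliasTarget": {
                    "HostedZoneId": "Z1111111111111111111",  # hypothetical
                    "DNSName": "alb-secondary.eu-west-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]
}
```

With EvaluateTargetHealth enabled, an unhealthy Region's record is withdrawn from answers, which is what preserves business continuity alongside the read replicas of an Aurora global database (option A).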
Questions # 20:

A company needs to optimize the infrastructure for an application that uploads data to Amazon S3. The uploads average 64 KB in size. When the data is uploaded, Amazon S3 sends an event to Amazon EventBridge. EventBridge then invokes an Amazon ECS application task.

The ECS task processes the data and stores the results in an Amazon DynamoDB table. Processing takes an average of 15 minutes. The company must keep the S3 data for 5 years and must keep the DynamoDB data for 15 days.

The application is gaining more users and is handling millions of S3 uploads every hour.

Which set of changes will provide the MOST cost-effective solution for the application?

Options:

A.

Replace the ECS task with an AWS Lambda function for processing. Create S3 Lifecycle rules to move the S3 objects to S3 Intelligent-Tiering after 1 day and to expire the objects after 5 years. Configure DynamoDB Standard-Infrequent Access for the DynamoDB table.


B.

Replace the S3 bucket with Amazon Managed Streaming for Apache Kafka (Amazon MSK) to receive the data. Configure tiered storage for data that is older than 1 day. Configure EventBridge to read messages from Amazon MSK in batches of 1,000 messages. Replace the ECS task with an AWS Lambda function for processing. Configure a TTL of 15 days on the DynamoDB table.


C.

Create an Amazon Data Firehose stream to receive the data. Configure buffering to deliver messages every minute to Amazon S3 in gzip format. Purchase a Compute Savings Plan based on usage recommendations. Create S3 Lifecycle rules to move the S3 objects to S3 Glacier Deep Archive after 1 day and to expire the objects after 5 years. Configure a TTL of 15 days on the DynamoDB table.


D.

Purchase a Compute Savings Plan based on usage recommendations. Create S3 Lifecycle rules to move the S3 objects to S3 Glacier Deep Archive after 1 day and to expire the objects after 5 years. Configure DynamoDB Standard-Infrequent Access for the DynamoDB table.


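The retention mechanics mentioned in options C and D can be sketched directly: an S3 Lifecycle rule covering the 1-day transition and 5-year expiration, and the epoch-seconds attribute a DynamoDB TTL of 15 days relies on. Rule ID is hypothetical.

```python
import time

# S3 Lifecycle rule: Deep Archive after 1 day, expire after 5 years.
lifecycle_rule = {
    "ID": "archive-then-expire",  # hypothetical rule name
    "Status": "Enabled",
    "Filter": {"Prefix": ""},     # applies to the whole bucket
    "Transitions": [{"Days": 1, "StorageClass": "DEEP_ARCHIVE"}],
    "Expiration": {"Days": 5 * 365},
}

def ttl_epoch(now=None, days=15):
    """Epoch-seconds value to store in the item's TTL attribute so
    DynamoDB deletes it ~15 days after creation."""
    now = time.time() if now is None else now
    return int(now) + days * 24 * 3600
```

DynamoDB TTL deletes expired items at no cost, which is why a TTL attribute is cheaper than scanning and deleting 15-day-old items yourself.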
Questions # 21:

A company has its cloud infrastructure on AWS. A solutions architect needs to define the infrastructure as code. The infrastructure is currently deployed in one AWS Region. The company's business expansion plan includes deployments in multiple Regions across multiple AWS accounts.

What should the solutions architect do to meet these requirements?

Options:

A.

Use AWS CloudFormation templates. Add IAM policies to control the various accounts. Deploy the templates across the multiple Regions.


B.

Use AWS Organizations. Deploy AWS CloudFormation templates from the management account. Use AWS Control Tower to manage deployments across accounts.


C.

Use AWS Organizations and AWS CloudFormation StackSets. Deploy a CloudFormation template from an account that has the necessary IAM permissions.


D.

Use nested stacks with AWS CloudFormation templates. Change the Region by using nested stacks.


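The StackSets approach in option C boils down to two API requests. The sketch below shows the request shapes as they would be passed to boto3's `cloudformation.create_stack_set(**stack_set)` and `create_stack_instances(**stack_instances)`; the stack set name, template URL, and OU ID are hypothetical.

```python
# Service-managed StackSet: Organizations handles the cross-account
# IAM roles, and new accounts in the OU get the stack automatically.
stack_set = {
    "StackSetName": "core-infra",  # hypothetical
    "TemplateURL": "https://example-bucket.s3.amazonaws.com/core.yaml",
    "PermissionModel": "SERVICE_MANAGED",
    "AutoDeployment": {"Enabled": True, "RetainStacksOnAccountRemoval": False},
}

# One request fans the stack out to every account in the OU,
# across every listed Region.
stack_instances = {
    "StackSetName": "core-infra",
    "DeploymentTargets": {"OrganizationalUnitIds": ["ou-abcd-11111111"]},  # hypothetical OU
    "Regions": ["us-east-1", "eu-west-1"],
}
```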
Questions # 22:

A financial services company runs a complex, multi-tier application on Amazon EC2 instances and AWS Lambda functions. The application stores temporary data in Amazon S3. The S3 objects are valid for only 45 minutes and are deleted after 24 hours.

The company deploys each version of the application by launching an AWS CloudFormation stack. The stack creates all resources that are required to run the application. When the company deploys and validates a new application version, the company deletes the CloudFormation stack of the old version.

The company recently tried to delete the CloudFormation stack of an old application version, but the operation failed. An analysis shows that CloudFormation failed to delete an existing S3 bucket. A solutions architect needs to resolve this issue without making major changes to the application's architecture.

Which solution meets these requirements?

Options:

A.

Implement a Lambda function that deletes all files from a given S3 bucket. Integrate this Lambda function as a custom resource into the CloudFormation stack. Ensure that the custom resource has a DependsOn attribute that points to the S3 bucket's resource.


B.

Modify the CloudFormation template to provision an Amazon Elastic File System (Amazon EFS) file system to store the temporary files there instead of in Amazon S3. Configure the Lambda functions to run in the same VPC as the file system. Mount the file system to the EC2 instances and Lambda functions.


C.

Modify the CloudFormation stack to create an S3 Lifecycle rule that expires all objects 45 minutes after creation. Add a DependsOn attribute that points to the S3 bucket's resource.


D.

Modify the CloudFormation stack to attach a DeletionPolicy attribute with a value of Delete to the S3 bucket.


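The root cause here is that CloudFormation cannot delete a non-empty bucket. The cleanup logic a Lambda-backed custom resource (option A) would run on the stack's Delete event can be sketched as below; `s3` is any object with boto3-compatible `list_objects_v2`/`delete_objects` methods, so the sketch is testable without AWS.

```python
def empty_bucket(s3, bucket):
    """Delete every object so CloudFormation can then delete the bucket.

    Pagination via ContinuationToken handles buckets with more than
    1,000 keys (the list_objects_v2 page limit).
    """
    token = None
    while True:
        kwargs = {"Bucket": bucket}
        if token:
            kwargs["ContinuationToken"] = token
        page = s3.list_objects_v2(**kwargs)
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        if keys:
            s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})
        if not page.get("IsTruncated"):
            return
        token = page["NextContinuationToken"]
```

In the real custom resource, the handler would call this only when the CloudFormation request type is Delete, then signal success back to CloudFormation. Note this sketch does not cover versioned buckets, which also need their object versions and delete markers removed.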
Questions # 23:

A large company is running a popular web application. The application runs on several Amazon EC2 Linux instances in an Auto Scaling group in a private subnet. An Application Load Balancer is targeting the instances in the Auto Scaling group in the private subnet. AWS Systems Manager Session Manager is configured, and AWS Systems Manager Agent is running on all the EC2 instances.

The company recently released a new version of the application. Some EC2 instances are now being marked as unhealthy and are being terminated. As a result, the application is running at reduced capacity. A solutions architect tries to determine the root cause by analyzing Amazon CloudWatch logs that are collected from the application, but the logs are inconclusive.

How should the solutions architect gain access to an EC2 instance to troubleshoot the issue?

Options:

A.

Suspend the Auto Scaling group's HealthCheck scaling process. Use Session Manager to log in to an instance that is marked as unhealthy.


B.

Enable EC2 instance termination protection. Use Session Manager to log in to an instance that is marked as unhealthy.


C.

Set the termination policy to OldestInstance on the Auto Scaling group. Use Session Manager to log in to an instance that is marked as unhealthy.


D.

Suspend the Auto Scaling group's Terminate process. Use Session Manager to log in to an instance that is marked as unhealthy.


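Suspending a scaling process, as options A and D describe, is a single Auto Scaling API call. The request shape below is what boto3's `autoscaling.suspend_processes(**request)` takes; the group name is hypothetical. Suspending Terminate keeps an already-unhealthy instance alive long enough to inspect it with Session Manager.

```python
# Suspend only the Terminate process so the group stops replacing
# the unhealthy instance; other processes keep running.
request = {
    "AutoScalingGroupName": "web-asg",  # hypothetical group name
    "ScalingProcesses": ["Terminate"],
}
# After troubleshooting: autoscaling.resume_processes(**request)
```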
Questions # 24:

A company is running an application in the AWS Cloud. Recent application metrics show inconsistent response times and a significant increase in error rates. Calls to third-party services are causing the delays. Currently, the application calls third-party services synchronously by directly invoking an AWS Lambda function.

A solutions architect needs to decouple the third-party service calls and ensure that all the calls are eventually completed.

Which solution will meet these requirements?

Options:

A.

Use an Amazon Simple Queue Service (Amazon SQS) queue to store events and invoke the Lambda function.


B.

Use an AWS Step Functions state machine to pass events to the Lambda function.


C.

Use an Amazon EventBridge rule to pass events to the Lambda function.


D.

Use an Amazon Simple Notification Service (Amazon SNS) topic to store events and invoke the Lambda function.


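The decoupling in option A replaces the direct Lambda invocation with an enqueue: the application writes the third-party call as a message, and Lambda consumes the queue, so a failed call returns to the queue and is retried until it eventually completes. A minimal sketch of the message the application would send via boto3's `sqs.send_message(**message)`; the queue URL and payload fields are hypothetical.

```python
import json

# Enqueue the third-party call instead of invoking Lambda directly.
message = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111111111111/third-party-calls",  # hypothetical
    "MessageBody": json.dumps({"service": "payments", "order_id": "o-123"}),  # hypothetical payload
}
```

Durability is the point: SQS persists the event until the Lambda consumer deletes it after a successful call, which SNS (fire-and-forget delivery) does not guarantee.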
Questions # 25:

A company wants to run a custom network analysis software package to inspect traffic as traffic leaves and enters a VPC. The company has deployed the solution by using AWS CloudFormation on three Amazon EC2 instances in an Auto Scaling group. All network routing has been established to direct traffic to the EC2 instances.

Whenever the analysis software stops working, the Auto Scaling group replaces an instance. The network routes are not updated when the instance replacement occurs.

Which combination of steps will resolve this issue? (Select THREE.)

Options:

A.

Create alarms based on EC2 status check metrics that will cause the Auto Scaling group to replace the failed instance.


B.

Update the CloudFormation template to install the Amazon CloudWatch agent on the EC2 instances. Configure the CloudWatch agent to send process metrics for the application.


C.

Update the CloudFormation template to install AWS Systems Manager Agent on the EC2 instances. Configure Systems Manager Agent to send process metrics for the application.


D.

Create an alarm for the custom metric in Amazon CloudWatch for the failure scenarios. Configure the alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.


E.

Create an AWS Lambda function that responds to the Amazon Simple Notification Service (Amazon SNS) message to take the instance out of service. Update the network routes to point to the replacement instance.


F.

In the CloudFormation template, write a condition that updates the network routes when a replacement instance is launched.


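The process metric in option B comes from the CloudWatch agent's procstat plugin, which can report whether the analysis process is still running. A minimal sketch of the relevant agent configuration section; the process name pattern is hypothetical.

```python
import json

# CloudWatch agent config fragment: procstat watches for the analysis
# process and emits pid_count, which drops to 0 when the software dies.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "procstat": [
                {
                    "pattern": "network-analyzer",  # hypothetical process name
                    "measurement": ["pid_count"],
                }
            ]
        }
    }
}

config_json = json.dumps(agent_config)
```

An alarm on pid_count falling to 0 (option D) can then publish to SNS, and a subscribed Lambda function (option E) can swap the network route to a healthy instance.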
Questions # 26:

A company has hundreds of AWS accounts. The company uses an organization in AWS Organizations to manage all the accounts. The company has turned on all features.

A finance team has allocated a daily budget for AWS costs. The finance team must receive an email notification if the organization's AWS costs exceed 80% of the allocated budget. A solutions architect needs to implement a solution to track the costs and deliver the notifications.

Which solution will meet these requirements?

Options:

A.

In the organization's management account, use AWS Budgets to create a budget that has a daily period. Add an alert threshold and set the value to 80%. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.


B.

In the organization's management account, set up the organizational view feature for AWS Trusted Advisor. Create an organizational view report for cost optimization. Set an alert threshold of 80%. Configure notification preferences. Add the email addresses of the finance team.


C.

Register the organization with AWS Control Tower. Activate the optional cost control (guardrail). Set a control (guardrail) parameter of 80%. Configure control (guardrail) notification preferences. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.


D.

Configure the member accounts to save a daily AWS Cost and Usage Report to an Amazon S3 bucket in the organization's management account. Use Amazon EventBridge to schedule a daily Amazon Athena query to calculate the organization's costs. Configure Athena to send an Amazon CloudWatch alert if the total costs are more than 80% of the allocated budget. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.


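The budget-plus-alert setup in option A is one AWS Budgets request. The sketch below shows the shape boto3's `budgets.create_budget(**request)` expects; the account ID, budget amount, and SNS topic ARN are hypothetical.

```python
# Daily cost budget with an 80% actual-spend alert delivered via SNS.
request = {
    "AccountId": "111111111111",  # hypothetical management account
    "Budget": {
        "BudgetName": "daily-cost-budget",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},  # hypothetical
        "TimeUnit": "DAILY",
        "BudgetType": "COST",
    },
    "NotificationsWithSubscribers": [{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111111111111:finance-alerts",  # hypothetical
        }],
    }],
}
```

Creating the budget in the management account means it tracks consolidated spend across all member accounts with no pipelines to operate, which is where the low operational overhead comes from.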
Questions # 27:

A company recently completed the migration from an on-premises data center to the AWS Cloud by using a replatforming strategy. One of the migrated servers is running a legacy Simple Mail Transfer Protocol (SMTP) service that a critical application relies upon. The application sends outbound email messages to the company’s customers. The legacy SMTP server does not support TLS encryption and uses TCP port 25. The application can use SMTP only.

The company decides to use Amazon Simple Email Service (Amazon SES) and to decommission the legacy SMTP server. The company has created and validated the SES domain. The company has lifted the SES limits.

What should the company do to modify the application to send email messages from Amazon SES?

Options:

A.

Configure the application to connect to Amazon SES by using TLS Wrapper. Create an IAM role that has ses:SendEmail and ses:SendRawEmail permissions. Attach the IAM role to an Amazon EC2 instance.


B.

Configure the application to connect to Amazon SES by using STARTTLS. Obtain Amazon SES SMTP credentials. Use the credentials to authenticate with Amazon SES.


C.

Configure the application to use the SES API to send email messages. Create an IAM role that has ses:SendEmail and ses:SendRawEmail permissions. Use the IAM role as a service role for Amazon SES.


D.

Configure the application to use AWS SDKs to send email messages. Create an IAM user for Amazon SES. Generate API access keys. Use the access keys to authenticate with Amazon SES.


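Option B's mechanics have two parts: SES SMTP credentials, which are derived from an IAM secret access key rather than used directly, and a STARTTLS session on the SES SMTP endpoint. The derivation below follows AWS's documented HMAC chain for version-4 SMTP passwords, reproduced from memory as a sketch; verify against the SES documentation before relying on it. The key, username, and addresses are placeholders.

```python
import base64
import hashlib
import hmac

def ses_smtp_password(secret_access_key: str, region: str) -> str:
    """Derive the SES SMTP password from an IAM secret access key
    (AWS's documented version-4 chain; sketch, verify before use)."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    signature = sign(("AWS4" + secret_access_key).encode("utf-8"), "11111111")
    for part in (region, "ses", "aws4_request", "SendRawEmail"):
        signature = sign(signature, part)
    # Prepend the version byte (0x04), then base64-encode.
    return base64.b64encode(bytes([0x04]) + signature).decode("ascii")

# STARTTLS connection sketch (placeholder credentials):
# import smtplib
# with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as smtp:
#     smtp.starttls()
#     smtp.login("SMTP_USERNAME", ses_smtp_password("SECRET_KEY", "us-east-1"))
#     smtp.sendmail("from@example.com", ["to@example.com"],
#                   "Subject: hello\r\n\r\nbody")
```

STARTTLS on port 587 matters here because many networks (and AWS itself, by default on EC2) throttle or block unencrypted outbound port 25.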
Questions # 28:

A company has a project that is launching Amazon EC2 instances that are larger than required. The project's account cannot be part of the company's organization in AWS Organizations due to policy restrictions to keep this activity outside of corporate IT. The company wants to allow only the launch of t3.small EC2 instances by developers in the project's account. These EC2 instances must be restricted to the us-east-2 Region.

What should a solutions architect do to meet these requirements?

Options:

A.

Create a new developer account. Move all EC2 instances, users, and assets into us-east-2. Add the account to the company's organization in AWS Organizations. Enforce a tagging policy that denotes Region affinity.


B.

Create an SCP that denies the launch of all EC2 instances except t3.small EC2 instances in us-east-2. Attach the SCP to the project's account.


C.

Create and purchase a t3.small EC2 Reserved Instance for each developer in us-east-2. Assign each developer a specific EC2 instance with their name as the tag.


D.

Create an IAM policy that allows the launch of only t3.small EC2 instances in us-east-2. Attach the policy to the roles and groups that the developers use in the project's account.


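The IAM policy option D describes can be sketched with two standard condition keys: ec2:InstanceType to pin the instance size and aws:RequestedRegion to pin the Region. A minimal sketch:

```python
# IAM policy: RunInstances is allowed only for t3.small in us-east-2.
# Everything not explicitly allowed is implicitly denied for these
# principals, so no other type or Region can be launched.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "ec2:InstanceType": "t3.small",
                "aws:RequestedRegion": "us-east-2",
            }
        },
    }],
}
```

An SCP (option B) would express the same restriction as an explicit Deny with StringNotEquals on the same keys, but SCPs only apply to accounts inside an organization, which this account cannot join.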
Questions # 29:

A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization.

The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead by using AWS managed services where possible and remove dependencies on third-party software.

Which solution meets these requirements?

Options:

A.

Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.


B.

Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.


C.

Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.


D.

Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.


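The event chain option C describes (S3 upload → S3 event notification → SQS → Lambda) can be sketched as a Lambda handler that unwraps the S3 event from each SQS record. The `start_categorization` callback is a hypothetical stand-in for the Amazon Rekognition video-analysis request the real function would make, which keeps the sketch testable.

```python
import json

def handler(event, context=None, start_categorization=lambda bucket, key: None):
    """Unwrap S3 event notifications delivered through SQS and kick off
    categorization for each uploaded video."""
    located = []
    for sqs_record in event["Records"]:
        # The SQS message body is the S3 event notification JSON.
        s3_event = json.loads(sqs_record["body"])
        for rec in s3_event["Records"]:
            bucket = rec["s3"]["bucket"]["name"]
            key = rec["s3"]["object"]["key"]
            start_categorization(bucket, key)  # hypothetical Rekognition call
            located.append((bucket, key))
    return located
```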
Questions # 30:

A publishing company's design team updates the icons and other static assets that an ecommerce web application uses. The company serves the icons and assets from an Amazon S3 bucket that is hosted in the company's production account. The company also uses a development account that members of the design team can access.

After the design team tests the static assets in the development account, the design team needs to load the assets into the S3 bucket in the production account. A solutions architect must provide the design team with access to the production account without exposing other parts of the web application to the risk of unwanted changes.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.

In the production account, create a new IAM policy that allows read and write access to the S3 bucket.


B.

In the development account, create a new IAM policy that allows read and write access to the S3 bucket.


C.

In the production account, create a role. Attach the new policy to the role. Define the development account as a trusted entity.


D.

In the development account, create a role. Attach the new policy to the role. Define the production account as a trusted entity.


E.

In the development account, create a group that contains all the IAM users of the design team. Attach a different IAM policy to the group to allow the sts:AssumeRole action on the role in the production account.


F.

In the development account, create a group that contains all the IAM users of the design team. Attach a different IAM policy to the group to allow the sts:AssumeRole action on the role in the development account.


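The cross-account pattern in options C and E pairs two policies: a trust policy on the production-account role naming the development account as trusted entity, and a group policy in the development account permitting sts:AssumeRole on that role. A minimal sketch; the account IDs and role name are hypothetical.

```python
DEV_ACCOUNT = "111111111111"   # hypothetical development account
PROD_ACCOUNT = "222222222222"  # hypothetical production account

# Trust policy attached to the role in the production account:
# only principals from the development account may assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{DEV_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Policy attached to the design-team group in the development account:
# members may assume only this one role in production.
group_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": f"arn:aws:iam::{PROD_ACCOUNT}:role/asset-uploader",  # hypothetical role
    }],
}
```

Because the assumed role carries only the S3 read/write policy (option A), the design team touches the asset bucket and nothing else in the production account.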