
Pass the Amazon Web Services AWS Certified Solutions Architect - Professional (SAP-C02) questions and answers with CertsForce

Viewing page 3 of 12 (questions 31-45)
Question # 31:

How should EC2 instances in AWS synchronize their clocks with an on-premises atomic clock NTP server, with the least administrative overhead?

Options:

A.

Configure a DHCP options set that specifies the on-premises NTP server.


B.

Use a custom AMI with Amazon Time Sync.


C.

Deploy a third-party NTP server from AWS Marketplace.


D.

Create an IPsec VPN tunnel to sync over Direct Connect.


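As context for option A, here is a minimal boto3 sketch of pointing a VPC at a custom NTP server through a DHCP options set, assuming the on-premises server is already reachable over an existing VPN or Direct Connect link; the server address and VPC ID are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# DHCP options set naming the on-premises atomic clock NTP server
# (10.10.0.5 is a hypothetical address)
opts = ec2.create_dhcp_options(
    DhcpConfigurations=[{"Key": "ntp-servers", "Values": ["10.10.0.5"]}]
)

# Instances pick up the new NTP server when their DHCP leases renew
ec2.associate_dhcp_options(
    DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
)
```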
Question # 32:

A company runs an intranet application on premises. The company wants to configure a cloud backup of the application. The company has selected AWS Elastic Disaster Recovery for this solution.

The company requires that replication traffic does not travel through the public internet. The application also must not be accessible from the internet. The company does not want this solution to consume all available network bandwidth because other applications require bandwidth.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.

Create a VPC that has at least two private subnets, two NAT gateways, and a virtual private gateway.


B.

Create a VPC that has at least two public subnets, a virtual private gateway, and an internet gateway.


C.

Create an AWS Site-to-Site VPN connection between the on-premises network and the target AWS network.


D.

Create an AWS Direct Connect connection and a Direct Connect gateway between the on-premises network and the target AWS network.


E.

During configuration of the replication servers, select the option to use private IP addresses for data replication.


F.

During configuration of the launch settings for the target servers, select the option to ensure that the Recovery instance's private IP address matches the source server's private IP address.


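To make the connectivity portion of this scenario concrete, the sketch below shows how the Site-to-Site VPN in option C could be created with boto3; all IDs, the ASN, and the IP address are placeholders. The Elastic Disaster Recovery behaviors referenced in options E and F, such as private-IP replication and bandwidth throttling, are configured separately in the service's replication and launch settings.

```python
import boto3

ec2 = boto3.client("ec2")

# Virtual private gateway attached to the target VPC (hypothetical IDs)
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"],
                       VpcId="vpc-0123456789abcdef0")

# Customer gateway representing the on-premises VPN device
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",  # placeholder on-premises endpoint
    BgpAsn=65000,
)["CustomerGateway"]

# Site-to-Site VPN connection between the on-premises network and the VPC
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
)
```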
Question # 33:

A company needs to copy backups of 40 RDS for MySQL databases from a production account to a central backup account within AWS Organizations. The databases use default AWS-managed KMS encryption keys. The backups must be stored in a WORM (Write Once Read Many) backup account.

What is the correct approach to enable cross-account backup?

Options:

A.

Restore the databases with customer-managed KMS keys and use AWS Backup with cross-account vault sharing.


B.

Share the default KMS keys with the central account and create backup vaults in the central account.


C.

Use a Lambda function to decrypt and copy the snapshots to the central account.


D.

Use a Lambda function to share and re-encrypt snapshots across accounts using the default KMS key.


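For reference on the AWS Backup piece of this scenario, below is a hedged boto3 sketch of copying a recovery point into a vault in a central backup account; all ARNs and names are placeholders. Cross-account copy additionally requires vault access policies, and because default AWS-managed KMS keys cannot be shared across accounts, the snapshots must be encrypted with a customer-managed key first.

```python
import boto3

backup = boto3.client("backup")

# Copy a recovery point from the production account's vault to a vault in
# the central backup account (all ARNs and names are placeholders)
backup.start_copy_job(
    RecoveryPointArn="arn:aws:rds:us-east-1:111111111111:snapshot:db-snap-1",
    SourceBackupVaultName="prod-vault",
    DestinationBackupVaultArn=(
        "arn:aws:backup:us-east-1:222222222222:backup-vault:central-vault"
    ),
    IamRoleArn="arn:aws:iam::111111111111:role/backup-copy-role",
)
```

A WORM requirement on the destination vault maps to AWS Backup Vault Lock.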
Question # 34:

A solutions architect wants to cost-optimize and appropriately size Amazon EC2 instances in a single AWS account. The solutions architect wants to ensure that the instances are optimized based on CPU, memory, and network metrics.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

Options:

A.

Purchase AWS Business Support or AWS Enterprise Support for the account.


B.

Turn on AWS Trusted Advisor and review any “Low Utilization Amazon EC2 Instances” recommendations.


C.

Install the Amazon CloudWatch agent and configure memory metric collection on the EC2 instances.


D.

Configure AWS Compute Optimizer in the AWS account to receive findings and optimization recommendations.


E.

Create an EC2 Instance Savings Plan for the AWS Regions, instance families, and operating systems of interest.


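For option D, a short boto3 sketch of opting in to AWS Compute Optimizer and reading its findings; memory metrics appear in the findings only after the CloudWatch agent (option C) publishes them.

```python
import boto3

co = boto3.client("compute-optimizer")

# Opt the account in to Compute Optimizer (idempotent if already active)
co.update_enrollment_status(status="Active")

# Fetch rightsizing findings once enough metric history has accumulated
resp = co.get_ec2_instance_recommendations()
for rec in resp.get("instanceRecommendations", []):
    print(rec["instanceArn"], rec["finding"])
```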
Question # 35:

A company wants to design a disaster recovery (DR) solution for an application that runs in the company's data center. The application writes to an SMB file share and creates a copy on a second file share. Both file shares are in the data center. The application uses two types of files: metadata files and image files.

The company wants to store the copy on AWS. The company needs the ability to use SMB to access the data from either the data center or AWS if a disaster occurs. The copy of the data is rarely accessed but must be available within 5 minutes.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Deploy AWS Outposts with Amazon S3 storage. Configure a Windows Amazon EC2 instance on Outposts as a file server.


B.

Deploy an Amazon FSx File Gateway. Configure an Amazon FSx for Windows File Server Multi-AZ file system that uses SSD storage.


C.

Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and to use S3 Glacier Deep Archive for the image files.


D.

Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and image files.


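For the S3 File Gateway variants (options C and D), a hedged sketch of creating an SMB file share that stores objects in S3 Standard-IA; every ARN is a placeholder, and the gateway itself must already be activated.

```python
import boto3
import uuid

sgw = boto3.client("storagegateway")

# SMB file share on an already-activated S3 File Gateway (placeholder ARNs)
sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111111111111:gateway/sgw-12345678",
    Role="arn:aws:iam::111111111111:role/file-gateway-s3-role",
    LocationARN="arn:aws:s3:::dr-file-share-copy",  # hypothetical bucket
    DefaultStorageClass="S3_STANDARD_IA",
)
```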
Question # 36:

A company has millions of objects in an Amazon S3 bucket. The objects are in the S3 Standard storage class. All the S3 objects are accessed frequently. The number of users and applications that access the objects is increasing rapidly. The objects are encrypted with server-side encryption with AWS KMS keys (SSE-KMS).

A solutions architect reviews the company's monthly AWS invoice and notices that AWS KMS costs are increasing because of the high number of requests from Amazon S3. The solutions architect needs to optimize costs with minimal changes to the application.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create a new S3 bucket that has server-side encryption with customer-provided keys (SSE-C) as the encryption type. Copy the existing objects to the new S3 bucket. Specify SSE-C.


B.

Create a new S3 bucket that has server-side encryption with Amazon S3 managed keys (SSE-S3) as the encryption type. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Specify SSE-S3.


C.

Use AWS CloudHSM to store the encryption keys. Create a new S3 bucket. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Encrypt the objects by using the keys from CloudHSM.


D.

Use the S3 Intelligent-Tiering storage class for the S3 bucket. Create an S3 Intelligent-Tiering archive configuration to transition objects that are not accessed for 90 days to S3 Glacier Deep Archive.


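To make option B concrete: an S3 Batch Operations copy job performs, per object, the equivalent of the boto3 call below; the bucket and key names are hypothetical. SSE-S3 avoids the per-request AWS KMS charges that SSE-KMS incurs.

```python
import boto3

s3 = boto3.client("s3")

# Re-encrypt one object under SSE-S3 while copying it to the new bucket;
# S3 Batch Operations runs this same copy at millions-of-objects scale
s3.copy_object(
    Bucket="new-bucket-sse-s3",  # hypothetical destination bucket
    Key="data/object-0001",
    CopySource={"Bucket": "old-bucket-sse-kms", "Key": "data/object-0001"},
    ServerSideEncryption="AES256",  # SSE-S3
)
```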
Question # 37:

A company wants to send data from its on-premises systems to Amazon S3 buckets. The company created the S3 buckets in three different accounts. The company must send the data privately, without the data traveling across the internet. The company has no existing dedicated connectivity to AWS.

Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

Options:

A.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC.


B.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC.


C.

Create an Amazon S3 interface endpoint in the networking account.


D.

Create an Amazon S3 gateway endpoint in the networking account.


E.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the networking account.


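A hedged sketch of the interface-endpoint step (option C); the Region, VPC, and subnet IDs are placeholders. Unlike a gateway endpoint, an interface endpoint can be reached from on premises over Direct Connect.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for S3 in the networking account's private VPC
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",  # hypothetical networking VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],
    PrivateDnsEnabled=False,  # S3 interface endpoints use endpoint-specific DNS names
)
```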
Question # 38:

A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only authenticated users are allowed to post content. The application generates a presigned URL that is used to upload objects through a browser interface. Most users are reporting slow upload times for objects larger than 100 MB.

What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post content?

Options:

A.

Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.


B.

Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.


C.

Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.


D.

Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.


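For option C, a sketch of enabling S3 Transfer Acceleration and generating a presigned PUT URL against the accelerate endpoint; the bucket and key are hypothetical. For objects larger than 100 MB, the browser would combine this with multipart uploads, which need one presigned URL per part.

```python
import boto3
from botocore.config import Config

bucket = "media-uploads-example"  # hypothetical bucket

# One-time: enable Transfer Acceleration on the bucket
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Generate the presigned upload URL against the accelerate endpoint
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
url = s3_accel.generate_presigned_url(
    "put_object",
    Params={"Bucket": bucket, "Key": "uploads/video.mp4"},
    ExpiresIn=3600,
)
print(url)  # the browser PUTs the object to this URL
```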
Question # 39:

A company is building an application on Amazon EMR to analyze data. The following user groups need to perform different actions:

• Administrator: Provision EMR clusters from different configurations.

• Data engineer: Create an EMR cluster from a small set of available configurations. Run ETL scripts to process data.

• Data analyst: Create an EMR cluster with a specific configuration. Run SQL queries and Apache Hive queries on the data.

A solutions architect must design a solution that gives each group the ability to launch only its authorized EMR configurations. The solution must provide the groups with least privilege access to only the resources that they need. The solution also must provide tagging for all resources that the groups create.

Which solution will meet these requirements?

Options:

A.

Configure AWS Service Catalog to control the Amazon EMR versions available for deployment, the cluster configurations, and the permissions for each user group.


B.

Configure Kerberos-based authentication for EMR clusters when the EMR clusters launch. Specify a Kerberos security configuration and cluster-specific Kerberos options.


C.

Create IAM roles for each user group. Attach policies to the roles to define allowed actions for users. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the company to address noncompliant resources.


D.

Use AWS CloudFormation to launch EMR clusters with attached resource policies. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the company to address noncompliant resources.


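As a hedged sketch of the AWS Service Catalog approach in option A: one portfolio per user group, with that group's IAM role granted access; the names and ARNs are placeholders, and the authorized EMR cluster configurations would be added to each portfolio as CloudFormation-based products.

```python
import boto3
import uuid

sc = boto3.client("servicecatalog")

# Portfolio holding the EMR configurations a data analyst may launch
portfolio = sc.create_portfolio(
    DisplayName="emr-data-analyst",
    ProviderName="platform-team",  # hypothetical provider name
    IdempotencyToken=str(uuid.uuid4()),
)["PortfolioDetail"]

# Grant the group's IAM role access to launch products in this portfolio
sc.associate_principal_with_portfolio(
    PortfolioId=portfolio["Id"],
    PrincipalARN="arn:aws:iam::111111111111:role/DataAnalyst",  # placeholder
    PrincipalType="IAM",
)
```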
Question # 40:

How can a company patch EC2 instances without internet access, using a patch source in another account, while accessing Systems Manager and S3?

Options:

A.

Custom VPN servers


B.

Transit Gateway + private VIFs


C.

VPC endpoints + VPC peering with the patch source


D.

Network ACLs + Transit Gateway


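To flesh out option C, a sketch of the endpoints an isolated subnet needs for Systems Manager, plus a gateway endpoint for S3; the Region and all IDs are placeholders, and the VPC peering to the patch-source account is configured separately.

```python
import boto3

region = "us-east-1"
ec2 = boto3.client("ec2", region_name=region)
vpc_id = "vpc-0123456789abcdef0"  # hypothetical isolated VPC
subnets = ["subnet-0123456789abcdef0"]

# Interface endpoints required for Systems Manager without internet access
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=vpc_id,
        ServiceName=f"com.amazonaws.{region}.{service}",
        SubnetIds=subnets,
        PrivateDnsEnabled=True,
    )

# Gateway endpoint so instances can reach the S3 buckets that SSM uses
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=vpc_id,
    ServiceName=f"com.amazonaws.{region}.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
)
```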
Question # 41:

A company hosts a data-processing application on Amazon EC2 instances. The application polls an Amazon Elastic File System (Amazon EFS) file system for newly uploaded files. When a new file is detected, the application extracts data from the file and runs logic to select a Docker container image to process the file. The application starts the appropriate container image and passes the file location as a parameter.

The data processing that the container performs can take up to 2 hours. When the processing is complete, the code that runs inside the container writes the file back to Amazon EFS and exits.

The company needs to refactor the application to eliminate the EC2 instances that are running the containers.

Which solution will meet these requirements?

Options:

A.

Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Extract the container selection logic to run as an Amazon EventBridge rule that starts the appropriate Fargate task. Configure the EventBridge rule to run when files are added to the EFS file system.


B.

Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Update and containerize the container selection logic to run as a Fargate service that starts the appropriate Fargate task. Configure an EFS event notification to invoke the Fargate service when files are added to the EFS file system.


C.

Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Extract the container selection logic to run as an AWS Lambda function that starts the appropriate Fargate task. Migrate the storage of file uploads to an Amazon S3 bucket. Update the processing code to use Amazon S3. Configure an S3 event notification to invoke the Lambda function when objects are created.


D.

Create AWS Lambda container images for the processing. Configure Lambda functions to use the container images. Extract the container selection logic to run as a decision Lambda function that invokes the appropriate Lambda processing function. Migrate the storage of file uploads to an Amazon S3 bucket. Update the processing code to use Amazon S3. Configure an S3 event notification to invoke the decision Lambda function when objects are created.


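For option C, a hedged sketch of the decision Lambda function: triggered by an S3 event notification, it selects a task definition and starts a Fargate task with the object location passed in. The cluster, task definitions, container name, and subnet are all hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    record = event["Records"][0]["s3"]
    location = f's3://{record["bucket"]["name"]}/{record["object"]["key"]}'

    # Hypothetical selection logic: choose the task definition by key prefix
    task_def = ("image-processor"
                if record["object"]["key"].startswith("images/")
                else "default-processor")

    # Start the long-running (up to 2 hours) processing as a Fargate task
    ecs.run_task(
        cluster="processing-cluster",  # hypothetical ECS cluster
        launchType="FARGATE",
        taskDefinition=task_def,
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }},
        overrides={"containerOverrides": [{
            "name": "app",  # container name in the task definition
            "environment": [{"name": "FILE_LOCATION", "value": location}],
        }]},
    )
```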
Question # 42:

A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers.

The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions.

Which solution will meet these requirements?

Options:

A.

Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.


B.

Create an Amazon API Gateway REST API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Modify the default gateway responses to remove the problematic headers based on the value of the User-Agent header.


C.

Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent. Associate the response data mapping with the HTTP API.


D.

Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.


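As an illustration of the header-stripping logic, closest to option D since CloudFront Functions in option A are written in JavaScript, here is a hedged Python sketch of a Lambda@Edge viewer-response handler; the header list and the User-Agent marker are hypothetical.

```python
# Lambda@Edge viewer-response handler (Python runtime)
PROBLEM_HEADERS = ["x-amz-version-id", "strict-transport-security"]  # hypothetical list

def handler(event, context):
    cf = event["Records"][0]["cf"]
    request, response = cf["request"], cf["response"]

    user_agent = request["headers"].get("user-agent", [{"value": ""}])[0]["value"]

    # Strip unsupported headers only for the identified older devices
    if "LegacyRadio" in user_agent:  # hypothetical User-Agent marker
        for name in PROBLEM_HEADERS:
            response["headers"].pop(name, None)

    return response
```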
Question # 43:

A company needs to use an AWS Transfer Family SFTP-enabled server with an Amazon S3 bucket to receive updates from a third-party data supplier. The data is encrypted with Pretty Good Privacy (PGP) encryption. The company needs a solution that will automatically decrypt the data after the company receives the data.

A solutions architect will use a Transfer Family managed workflow. The company has created an IAM service role by using an IAM policy that allows access to AWS Secrets Manager and the S3 bucket. The role's trust relationship allows the transfer.amazonaws.com service to assume the role.

What should the solutions architect do next to complete the solution for automatic decryption?

Options:

A.

Store the PGP public key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the nominal step. Associate the workflow with the Transfer Family server.


B.

Store the PGP private key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the exception handler. Associate the workflow with the SFTP user.


C.

Store the PGP private key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the nominal step. Associate the workflow with the Transfer Family server.


D.

Store the PGP public key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the exception handler. Associate the workflow with the SFTP user.


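A hedged sketch of the nominal-step wiring that option C describes; the server ID, role ARN, and bucket are placeholders. Transfer Family expects the PGP private key in a Secrets Manager secret named for the server (for example, aws/transfer/<server-id>/@pgp-default).

```python
import boto3

transfer = boto3.client("transfer")

# Managed workflow whose nominal step decrypts incoming PGP files
workflow = transfer.create_workflow(
    Description="Decrypt PGP uploads",
    Steps=[{
        "Type": "DECRYPT",
        "DecryptStepDetails": {
            "Name": "pgp-decrypt",
            "Type": "PGP",
            "DestinationFileLocation": {
                "S3FileLocation": {"Bucket": "supplier-drop", "Key": "decrypted/"}
            },
        },
    }],
)

# Attach the workflow to the SFTP server so it runs on every upload
transfer.update_server(
    ServerId="s-1234567890abcdef0",  # hypothetical Transfer Family server
    WorkflowDetails={"OnUpload": [{
        "WorkflowId": workflow["WorkflowId"],
        "ExecutionRole": "arn:aws:iam::111111111111:role/transfer-workflow-role",
    }]},
)
```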
Question # 44:

A solutions architect works for a government agency that has strict disaster recovery requirements. All Amazon Elastic Block Store (Amazon EBS) snapshots are required to be saved in at least two additional AWS Regions. The agency also is required to maintain the lowest possible operational overhead.

Which solution meets these requirements?

Options:

A.

Configure a policy in Amazon Data Lifecycle Manager (Amazon DLM) to run once daily to copy the EBS snapshots to the additional Regions.


B.

Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function to copy the EBS snapshots to the additional Regions.


C.

Set up AWS Backup to create the EBS snapshots. Configure Amazon S3 cross-Region replication to copy the EBS snapshots to the additional Regions.


D.

Schedule Amazon EC2 Image Builder to run once daily to create an AMI and copy the AMI to the additional Regions.


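A sketch of option A's shape: an Amazon Data Lifecycle Manager policy whose daily schedule includes cross-Region copy rules for two extra Regions. The role ARN, target tag, and Regions are placeholders.

```python
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111111111111:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots copied to two additional Regions",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],  # hypothetical tag
        "Schedules": [{
            "Name": "daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                           "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
            "CrossRegionCopyRules": [
                {"Target": "us-west-2", "Encrypted": False,
                 "RetainRule": {"Interval": 7, "IntervalUnit": "DAYS"}},
                {"Target": "eu-west-1", "Encrypted": False,
                 "RetainRule": {"Interval": 7, "IntervalUnit": "DAYS"}},
            ],
        }],
    },
)
```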
Question # 45:

A company has developed a mobile game. The backend for the game runs on several virtual machines located in an on-premises data center. The business logic is exposed using a REST API with multiple functions. Player session data is stored in central file storage. Backend services use different API keys for throttling and to distinguish between live and test traffic.

The load on the game backend varies throughout the day. During peak hours, the server capacity is not sufficient. There are also latency issues when fetching player session data. Management has asked a solutions architect to present a cloud architecture that can handle the game's varying load and provide low-latency data access. The API model should not be changed.

Which solution meets these requirements?

Options:

A.

Implement the REST API using a Network Load Balancer (NLB). Run the business logic on an Amazon EC2 instance behind the NLB. Store player session data in Amazon Aurora Serverless.


B.

Implement the REST API using an Application Load Balancer (ALB). Run the business logic in AWS Lambda. Store player session data in Amazon DynamoDB with on-demand capacity.


C.

Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store player session data in Amazon DynamoDB with on-demand capacity.


D.

Implement the REST API using AWS AppSync. Run the business logic in AWS Lambda. Store player session data in Amazon Aurora Serverless.


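To ground the session-store choice shared by options B and C, here is a sketch of creating a DynamoDB table in on-demand mode; the table name and key schema are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Session table billed per request, so capacity tracks the varying game load
dynamodb.create_table(
    TableName="PlayerSessions",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "SessionId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "SessionId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity
)
```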