
Pass the Amazon Web Services AWS Certified Professional SAP-C02 Questions and answers with CertsForce

Viewing page 1 out of 13 pages
Viewing questions 1-15
Questions # 1:

A company is running a compute workload by using Amazon EC2 Spot Instances in an Auto Scaling group. The launch template uses two placement groups and one instance type.

Recently, a monitoring system reported Auto Scaling instance launch failures that correlated with longer wait times for system users. The company needs to improve the overall reliability of the workload.

Which solution will meet these requirements?

Options:

A.

Create a launch configuration that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch configuration.


B.

Create a launch configuration that uses a larger instance type. Configure the Auto Scaling group to use the launch configuration and the launch template.


C.

Create a new launch template version that increases the number of placement groups to 3. Configure the Auto Scaling group to use the new launch template version.


D.

Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.


Expert Solution
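Option D hinges on attribute-based instance type selection: instead of pinning the Spot request to one instance type (and thus one capacity pool per AZ), the launch template override describes the attributes an acceptable type must have, so the Auto Scaling group can draw from many Spot pools and launch failures become far less likely. A minimal sketch of the parameter shape follows; the group name, vCPU and memory bounds are illustrative assumptions, not values from the question.

```python
# Sketch (assumed values) of a MixedInstancesPolicy using
# attribute-based instance type selection, as passed to
# autoscaling.update_auto_scaling_group(MixedInstancesPolicy=...).
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "spot-workload",  # hypothetical name
            "Version": "$Latest",
        },
        "Overrides": [
            {
                # Describe required attributes instead of naming a type.
                # Every instance type matching these bounds becomes a
                # candidate Spot capacity pool.
                "InstanceRequirements": {
                    "VCpuCount": {"Min": 4, "Max": 8},
                    "MemoryMiB": {"Min": 16384},
                }
            }
        ],
    },
    "InstancesDistribution": {
        # Strategy that balances price against pool depth, further
        # reducing interruption and launch-failure risk.
        "SpotAllocationStrategy": "price-capacity-optimized",
    },
}
```

Note that option A fails for a structural reason: launch configurations do not support attribute-based instance type selection at all; only launch templates do.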
Questions # 2:

A company is using AWS CodePipeline for the CI/CD of an application to an Amazon EC2 Auto Scaling group. All AWS resources are defined in AWS

CloudFormation templates. The application artifacts are stored in an Amazon S3 bucket and deployed to the Auto Scaling group using instance user data scripts.

As the application has become more complex, recent resource changes in the CloudFormation templates have caused unplanned downtime.

How should a solutions architect improve the CI/CD pipeline to reduce the likelihood that changes in the templates will cause downtime?

Options:

A.

Adapt the deployment scripts to detect and report CloudFormation error conditions when performing deployments. Write test plans for a testing team to execute in a non-production environment before approving the change for production.


B.

Implement automated testing using AWS CodeBuild in a test environment. Use CloudFormation change sets to evaluate changes before deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if needed.


C.

Use plugins for the integrated development environment (IDE) to check the templates for errors, and use the AWS CLI to validate that the templates are correct. Adapt the deployment code to check for error conditions and generate notifications on errors. Deploy to a test environment and execute a manual test plan before approving the change for production.


D.

Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the user data deployment scripts. Have the operators log in to running instances and go through a manual test plan to verify the application is running as expected.


Expert Solution
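The change-set step in option B is what catches downtime before it happens: a change set reports, per resource, whether CloudFormation will update it in place or replace it, and replacements are the changes most likely to cause an outage. The sketch below shows how a pipeline stage might scan `describe_change_set` output for replacements; the stack and change set names are assumptions, and the sample data imitates the documented response shape rather than a real API call.

```python
# Parameters a pipeline stage might pass to
# cloudformation.create_change_set() (names are hypothetical).
create_change_set_params = {
    "StackName": "app-stack",
    "ChangeSetName": "pipeline-review",
    "TemplateURL": "https://s3.amazonaws.com/bucket/template.yaml",
    "ChangeSetType": "UPDATE",
}

def is_disruptive(change):
    """Flag changes CloudFormation marks as a Replacement, the kind of
    template change most likely to cause unplanned downtime."""
    details = change.get("ResourceChange", {})
    # Replacement is reported as the string "True", "False",
    # or "Conditional" in describe_change_set() output.
    return details.get("Replacement") == "True"

# Imitation of describe_change_set()["Changes"] for illustration.
sample_changes = [
    {"ResourceChange": {"LogicalResourceId": "WebAsg", "Replacement": "True"}},
    {"ResourceChange": {"LogicalResourceId": "AppBucket", "Replacement": "False"}},
]
disruptive = [c["ResourceChange"]["LogicalResourceId"]
              for c in sample_changes if is_disruptive(c)]
```

A stage like this, combined with CodeBuild tests and a CodeDeploy blue/green rollout, gives both an early warning and a revert path, which is why B is stronger than the manual-review options.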
Questions # 3:

A company is using an on-premises Active Directory service for user authentication. The company wants to use the same authentication service to sign in to the company's AWS accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises environment and all the company's AWS accounts.

The company's security policy requires conditional access to the accounts based on user groups and roles. User identities must be managed in a single location.

Which solution will meet these requirements?

Options:

A.

Configure AWS Single Sign-On (AWS SSO) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross- domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs).


B.

Configure AWS Single Sign-On (AWS SSO) by using AWS SSO as an identity source. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using AWS SSO permission sets.


C.

In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use a SAML 2.0 identity provider. Provision IAM users that are mapped to the federated users. Grant access that corresponds to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM users.


D.

In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use an OpenID Connect (OIDC) identity provider. Provision IAM roles that grant access to the AWS account for the federated users that correspond to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM roles.


Expert Solution
Questions # 4:

A solutions architect is preparing to deploy a new security tool into several previously unused AWS Regions. The solutions architect will deploy the tool by using an AWS CloudFormation stack set. The stack set's template contains an IAM role that has a custom name. Upon creation of the stack set, no stack instances are created successfully.

What should the solutions architect do to deploy the stacks successfully?

Options:

A.

Enable the new Regions in all relevant accounts. Specify the CAPABILITY_NAMED_IAM capability during the creation of the stack set.


B.

Use the Service Quotas console to request a quota increase for the number of CloudFormation stacks in each new Region in all relevant accounts. Specify the CAPABILITY_IAM capability during the creation of the stack set.


C.

Specify the CAPABILITY_NAMED_IAM capability and the SELF_MANAGED permissions model during the creation of the stack set.


D.

Specify an administration role ARN and the CAPABILITY_IAM capability during the creation of the stack set.


Expert Solution
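Two things block the deployment in option A's scenario: the target Regions are opt-in Regions that have not been enabled, and a template that creates IAM resources with custom names must acknowledge `CAPABILITY_NAMED_IAM` (plain `CAPABILITY_IAM` is not enough when resources are custom-named). A sketch of the acknowledgement, with an assumed stack set name and template URL:

```python
# Sketch (assumed names) of the boto3 parameters for
# cloudformation.create_stack_set() when the template contains a
# custom-named IAM role.
create_stack_set_params = {
    "StackSetName": "security-tool",
    "TemplateURL": "https://s3.amazonaws.com/bucket/security-tool.yaml",
    # Without this acknowledgement, each stack instance fails with an
    # InsufficientCapabilities error because the role has a custom name.
    "Capabilities": ["CAPABILITY_NAMED_IAM"],
}
```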
Questions # 5:

A company wants to migrate its website from its on-premises data center to AWS. The website service APIs are hosted on Docker containers in a self-managed container platform. To meet compliance requirements, the company has installed proprietary security agent software on each container platform node. MySQL databases are installed on two separate VMs in a source-replica setup.

A solutions architect must design a solution to migrate the entire website environment to AWS.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Deploy website services as containers on Amazon EC2 instances. For the EC2 instances, use a custom AMI that has the proprietary security agent software and container management software pre-installed. Migrate the MySQL database data to an Amazon Aurora DB cluster by using AWS DMS. Add an additional read replica to the Aurora DB cluster.


B.

Create an Amazon EKS cluster that is configured with a managed node group. Use a custom AMI that has the proprietary security agent software pre-installed. Deploy website services as Kubernetes pods. Migrate the MySQL database data to an Amazon RDS for MySQL DB instance by using AWS DMS. Configure a Multi-AZ deployment for the RDS DB instance.


C.

Deploy website services as containers on AWS Lambda functions. Create an Amazon API Gateway API to serve incoming requests to Lambda functions on the backend. Save static content in an Amazon S3 bucket. Migrate the MySQL database data to an Amazon RDS for MySQL DB instance by using AWS DMS. Configure a Multi-AZ deployment for the RDS DB instance.


D.

Create an Amazon ECS cluster that uses the Fargate launch type. Configure Fargate with a custom AMI that has the proprietary security agent software pre-installed. Deploy website services as ECS tasks. Migrate the MySQL database data to an Amazon Aurora DB cluster by using AWS DMS. Add an additional read replica to the Aurora DB cluster.


Expert Solution
Questions # 6:

A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a solution with the following requirements:

• Produce a single AWS invoice for all of the AWS accounts used by its LOBs.

• The costs for each LOB account should be broken out on the invoice.

• Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.

• Each LOB account should be delegated full administrator permissions regardless of the governance policy.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

Options:

A.

Use AWS Organizations to create an organization in the parent account for each LOB. Then invite each LOB account to the appropriate organization.


B.

Use AWS Organizations to create a single organization in the parent account. Then invite each LOB's AWS account to join the organization.


C.

Implement service quotas to define the services and features that are permitted, and apply the quotas to each LOB, as appropriate.


D.

Create an SCP that allows only approved services and features. Then apply the policy to the LOB accounts.


E.

Enable consolidated billing in the parent account's billing console, and link the LOB accounts.


Expert Solution
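The key property behind option D is that service control policies cap what IAM permissions in a member account can ever grant, without granting anything themselves. That is exactly what reconciles requirements 3 and 4: each LOB account can keep full administrator IAM policies, while the SCP restricts which services those permissions can reach. A minimal allow-list sketch (the approved service list is purely illustrative):

```python
import json

# Hedged sketch of an allow-list SCP: only the listed services remain
# usable in the LOB accounts. The Action list is an assumption, not the
# company's actual governance policy.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowApprovedServices",
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "rds:*"],
            "Resource": "*",
        }
    ],
}

# SCPs filter, they do not grant: a full-administrator IAM policy in a
# member account still works, but only within this service boundary.
scp_document = json.dumps(scp)
```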
Questions # 7:

A video processing company uses an AWS Lambda function to handle image processing tasks. An Amazon EventBridge rule that matches the event pattern when a new image is uploaded to an Amazon S3 bucket invokes the Lambda function. The processing task initially operated without errors.

The Lambda function now encounters frequent timeout errors. The Lambda function is configured with the maximum timeout value. A solutions architect must refactor the application’s architecture to mitigate invocation failures.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select TWO.)

Options:

A.

Build a Docker container image with the application code for deployment. Store the container image in Amazon ECR.


B.

Build a Docker container image with the application code for deployment. Store the container image in an S3 bucket with S3 Versioning enabled.


C.

Create a new Amazon ECS deployment with the Amazon EC2 launch type. Configure the ECS task definition to use the new Docker container image. Configure the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.


D.

Create a new Amazon ECS deployment with the Fargate launch type. Configure the ECS task definition to use the new Docker container image. Configure EventBridge to invoke an ECS task by using the ECS task definition.


E.

Create a new AWS Step Functions state machine. Configure the state machine to use the new Docker container image. Configure the Lambda function to invoke the state machine when a new file arrives in Amazon S3.


Expert Solution
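The refactor in option D works because EventBridge can start a Fargate task directly as a rule target, and Fargate tasks have no 15-minute ceiling, so long image-processing jobs that time out in Lambda can run to completion without any intermediary. A sketch of the target shape an EventBridge rule would use; the ARNs, subnet ID, and names are placeholders.

```python
# Sketch (placeholder ARNs) of an EventBridge rule target that runs a
# Fargate task when the S3 object-created event matches, as passed to
# events.put_targets(Rule=..., Targets=[ecs_target]).
ecs_target = {
    "Id": "run-image-processing-task",
    "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/processing",
    "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-ecs",
    "EcsParameters": {
        "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/image-proc",
        "LaunchType": "FARGATE",  # no servers to manage, no 15-min limit
        "NetworkConfiguration": {
            "awsvpcConfiguration": {
                "Subnets": ["subnet-0example"],  # hypothetical subnet
                "AssignPublicIp": "ENABLED",
            }
        },
    },
}
```

Pairing this with option A (image in Amazon ECR) keeps operational overhead lowest: ECR is the purpose-built registry ECS pulls from, whereas storing container images in S3 (option B) is not a supported image source.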
Questions # 8:

A company runs an application on a fleet of Amazon EC2 instances that are in private subnets behind an internet-facing Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. An AWS WAF web ACL that contains various AWS managed rules is associated with the CloudFront distribution.

The company needs a solution that will prevent internet traffic from directly accessing the ALB.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create a new web ACL that contains the same rules that the existing web ACL contains. Associate the new web ACL with the ALB.


B.

Associate the existing web ACL with the ALB.


C.

Add a security group rule to the ALB to allow traffic from the AWS managed prefix list for CloudFront only.


D.

Add a security group rule to the ALB to allow only the various CloudFront IP address ranges.


Expert Solution
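Option C's low overhead comes from the AWS-managed prefix list for CloudFront's origin-facing servers: AWS maintains the IP ranges, so the security group rule tracks changes automatically, unlike option D's hand-maintained CIDR list. A sketch of the ingress rule; the `pl-` ID below is a placeholder, since the actual ID of the `com.amazonaws.global.cloudfront.origin-facing` prefix list varies by Region.

```python
# Sketch of the security group ingress rule from option C, in the shape
# accepted by ec2.authorize_security_group_ingress(IpPermissions=[...]).
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    # Reference the AWS-managed CloudFront origin-facing prefix list
    # instead of hard-coding CloudFront IP ranges. ID is a placeholder.
    "PrefixListIds": [{"PrefixListId": "pl-xxxxxxxx"}],
}
```

This restricts network reach but does not authenticate requests; architectures that need stronger assurance often add a custom origin header checked by WAF, which the question does not require.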
Questions # 9:

A company runs a Linux application on Amazon EKS using M6i EC2 instances under a Savings Plan that is about to expire. The company wants to reduce costs after the plan expires. (Select THREE.)

Options:

A.

Rebuild the containers for the ARM64 architecture.


B.

Rebuild containers for container compatibility (invalid/unclear).


C.

Migrate the EKS nodes to Graviton instances (e.g., C7g, M7g).


D.

Replace the nodes with the latest x86_64 instances.


E.

Purchase a new Savings Plan for the Graviton instance family.


F.

Purchase a new Savings Plan for x86_64 instances.


Expert Solution
Questions # 10:

A company uses Amazon S3 to store files and images in a variety of storage classes. The company's S3 costs have increased substantially during the past year.

A solutions architect needs to review data trends for the past 12 months and identify the appropriate storage class for the objects.

Which solution will meet these requirements?

Options:

A.

Download AWS Cost and Usage Reports for the last 12 months of S3 usage. Review AWS Trusted Advisor recommendations for cost savings.


B.

Use S3 storage class analysis. Import data trends into an Amazon QuickSight dashboard to analyze storage trends.


C.

Use Amazon S3 Storage Lens. Upgrade the default dashboard to include advanced metrics for storage trends.


D.

Use Access Analyzer for S3. Download the Access Analyzer for S3 report for the last 12 months. Import the .csv file to an Amazon QuickSight dashboard.


Expert Solution
Questions # 11:

A company uses AWS Organizations and tags every resource with a BusinessUnit tag. The company wants to allocate cloud costs by business unit and visualize them.

Options:

A.

Activate BusinessUnit cost allocation tag in the management account. Create a CUR to S3. Use Athena + QuickSight for reporting.


B.

Create cost allocation tags in each member account. Use CloudWatch Dashboards.


C.

Create cost allocation tags in the management account. Deploy CURs per account.


D.

Use tags and CUR per account. Visualize with QuickSight from management account.


Expert Solution
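Option A ends with Athena querying the Cost and Usage Report, where an activated cost allocation tag surfaces as a `resource_tags_user_*` column. A sketch of the grouping query a QuickSight dataset might sit on; the database, table, and column names follow the usual CUR-to-Athena naming convention but are assumptions here.

```python
# Hedged sketch of the Athena SQL behind the QuickSight visual in
# option A. Table and column names are assumptions based on standard
# CUR-to-Athena conventions.
query = """
SELECT resource_tags_user_business_unit AS business_unit,
       SUM(line_item_unblended_cost)    AS cost
FROM cur_database.cur_table
GROUP BY 1
ORDER BY cost DESC
"""
```

Activating the tag in the management account matters: cost allocation tags only take effect when activated there, which is why the per-member-account variants (B, C, D) fall short.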
Questions # 12:

A solutions architect is designing the data storage and retrieval architecture for a new application that a company will be launching soon. The application is designed to ingest millions of small records per minute from devices all around the world. Each record is less than 4 KB in size and needs to be stored in a durable location where it can be retrieved with low latency. The data is ephemeral and the company is required to store the data for 120 days only, after which the data can be deleted.

The solutions architect calculates that, during the course of a year, the storage requirements would be about 10-15 TB.

Which storage strategy is the MOST cost-effective and meets the design requirements?

Options:

A.

Design the application to store each incoming record as a single .csv file in an Amazon S3 bucket to allow for indexed retrieval. Configure a lifecycle policy to delete data older than 120 days.


B.

Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 120 days.


C.

Design the application to store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 120 days.


D.

Design the application to batch incoming records before writing them to an Amazon S3 bucket. Update the metadata for the object to contain the list of records in the batch and use the Amazon S3 metadata search feature to retrieve the data. Configure a lifecycle policy to delete the data after 120 days.


Expert Solution
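Option B leans on DynamoDB Time to Live: each item carries an epoch-seconds attribute, and DynamoDB deletes expired items in the background at no extra cost, which removes both the per-object S3 request overhead of option A and the nightly delete job of option C. A sketch of how the application might stamp each record; the attribute and field names are assumptions.

```python
import time

# DynamoDB TTL deletes items whose TTL attribute (epoch seconds) is in
# the past. Attribute names here are illustrative.
RETENTION_DAYS = 120

def make_record(device_id, payload, now=None):
    """Build a DynamoDB item with a TTL attribute set 120 days out."""
    now = int(now if now is not None else time.time())
    return {
        "device_id": device_id,   # partition key (assumed schema)
        "payload": payload,
        # The table's TTL feature would be enabled on "expires_at".
        "expires_at": now + RETENTION_DAYS * 24 * 60 * 60,
    }

record = make_record("sensor-1", "temp=21.4", now=1_700_000_000)
```

At less than 4 KB per record, each item also fits comfortably under DynamoDB's 400 KB item limit, and reads stay single-digit-millisecond.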
Questions # 13:

A company operates an on-premises software-as-a-service (SaaS) solution that ingests several files daily. The company provides multiple public SFTP endpoints to its customers to facilitate the file transfers. The customers add the SFTP endpoint IP addresses to their firewall allow list for outbound traffic. Changes to the SFTP endpoint IP addresses are not permitted.

The company wants to migrate the SaaS solution to AWS and decrease the operational overhead of the file transfer service.

Which solution meets these requirements?

Options:

A.

Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and assign them to an AWS Transfer for SFTP endpoint. Use AWS Transfer to store the files in Amazon S3.


B.

Add a subnet containing the customer-owned block of IP addresses to a VPC. Create Elastic IP addresses from the address pool and assign them to an Application Load Balancer (ALB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the ALB. Store the files in attached Amazon Elastic Block Store (Amazon EBS) volumes.


C.

Register the customer-owned block of IP addresses with Amazon Route 53. Create alias records in Route 53 that point to a Network Load Balancer (NLB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the NLB. Store the files in Amazon S3.


D.

Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and assign them to an Amazon S3 VPC endpoint. Enable SFTP support on the S3 bucket.


Expert Solution
Questions # 14:

A company has an application that uses an on-premises Oracle database. The company is migrating the database to the AWS Cloud. The database contains customer data and stored procedures.

The company needs to migrate the database as quickly as possible with minimum downtime. The solution on AWS must provide high availability and must use managed services for the database.

Which solution will meet these requirements?

Options:

A.

Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon RDS for Oracle database. Transfer the database files to an Amazon S3 bucket. Configure the RDS database to use the S3 bucket as database storage. Set up S3 replication for high availability. Redirect the application to the RDS DB instance.


B.

Create a database backup of the on-premises Oracle database. Upload the backup to an Amazon S3 bucket. Shut down the on-premises Oracle database to avoid any new transactions. Restore the backup to a new Oracle cluster that consists of Amazon EC2 instances across two Availability Zones. Redirect the application to the EC2 instances.


C.

Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon DynamoDB table. Use DynamoDB Accelerator (DAX) and implement global tables for high availability. Rewrite the stored procedures in AWS Lambda. Run the stored procedures in DAX. After replication, redirect the application to the DAX cluster endpoint.


D.

Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon Aurora PostgreSQL database. Use AWS SCT to convert the schema and stored procedures. Redirect the application to the Aurora DB cluster.


Expert Solution
Questions # 15:

A company is running an application that uses an Amazon ElastiCache for Redis cluster as a caching layer. A recent security audit revealed that the company has configured encryption at rest for ElastiCache. However, the company did not configure ElastiCache to use encryption in transit. Additionally, users can access the cache without authentication.

A solutions architect must make changes to require user authentication and to ensure that the company is using end-to-end encryption.

Which solution will meet these requirements?

Options:

A.

Create an AUTH token. Store the token in AWS Systems Manager Parameter Store as an encrypted parameter. Create a new cluster with AUTH, and configure encryption in transit. Update the application to retrieve the AUTH token from Parameter Store when necessary and to use the AUTH token for authentication.


B.

Create an AUTH token. Store the token in AWS Secrets Manager. Configure the existing cluster to use the AUTH token, and configure encryption in transit. Update the application to retrieve the AUTH token from Secrets Manager when necessary and to use the AUTH token for authentication.


C.

Create an SSL certificate. Store the certificate in AWS Secrets Manager. Create a new cluster, and configure encryption in transit. Update the application to retrieve the SSL certificate from Secrets Manager when necessary and to use the certificate for authentication.


D.

Create an SSL certificate. Store the certificate in AWS Systems Manager Parameter Store as an encrypted advanced parameter. Update the existing cluster to configure encryption in transit. Update the application to retrieve the SSL certificate from Parameter Store when necessary and to use the certificate for authentication.


Expert Solution
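Option B is attractive because ElastiCache lets you add an AUTH token and enable in-transit encryption on an existing replication group, so no new cluster or migration is needed, and Secrets Manager (unlike Parameter Store) adds built-in rotation. A sketch of the modification parameters; the replication group ID and the choice of update strategy are assumptions about the environment.

```python
# Sketch (assumed names) of the boto3 parameters for
# elasticache.modify_replication_group() in option B. The AUTH token
# value would be retrieved from Secrets Manager, not hard-coded.
modify_params = {
    "ReplicationGroupId": "app-cache",  # hypothetical group ID
    "AuthToken": "token-from-secrets-manager",  # placeholder value
    # ROTATE adds the new token alongside any existing credentials so
    # clients can migrate without an outage.
    "AuthTokenUpdateStrategy": "ROTATE",
    "TransitEncryptionEnabled": True,
    # "preferred" accepts both encrypted and plaintext connections
    # during the client migration; switch to "required" afterward.
    "TransitEncryptionMode": "preferred",
    "ApplyImmediately": True,
}
```

The SSL-certificate options (C and D) misunderstand the model: ElastiCache for Redis authenticates clients with an AUTH token (or RBAC), not with client certificates.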