
Pass the Amazon Web Services AWS Certified Professional DOP-C02 Questions and answers with CertsForce

Viewing page 4 out of 13 pages
Viewing questions 31-40
Question # 31:

A company wants to build a pipeline to update the standard AMI monthly. The AMI must be updated to use the most recent patches to ensure that launched Amazon EC2 instances are up to date. Each new AMI must be available to all AWS accounts in the company's organization in AWS Organizations.

The company needs to configure an automated pipeline to build the AMI.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Create an AWS CodePipeline pipeline that uses AWS CodeBuild. Create an AWS Lambda function to run the pipeline every month. Create an AWS CloudFormation template. Share the template with all AWS accounts in the organization.


B.

Create an AMI pipeline by using EC2 Image Builder. Configure the pipeline to distribute the AMI to the AWS accounts in the organization. Configure the pipeline to run monthly.


C.

Create an AWS CodePipeline pipeline that runs an AWS Lambda function to build the AMI. Configure the pipeline to share the AMI with the AWS accounts in the organization. Configure Amazon EventBridge Scheduler to invoke the pipeline every month.


D.

Create an AWS Systems Manager Automation runbook. Configure the automation to run in all AWS accounts in the organization. Create an AWS Lambda function to run the automation every month.
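
For reference, the EC2 Image Builder approach in option B centers on a distribution configuration that shares each new AMI with the organization. The fragment below is an illustrative sketch; the Region, AMI name pattern, and organization ARN are placeholder values, not details from the question.

```json
{
  "name": "monthly-ami-distribution",
  "distributions": [
    {
      "region": "us-east-1",
      "amiDistributionConfiguration": {
        "name": "standard-ami-{{ imagebuilder:buildDate }}",
        "launchPermission": {
          "organizationArns": [
            "arn:aws:organizations::111122223333:organization/o-exampleorgid"
          ]
        }
      }
    }
  ]
}
```

Attaching a monthly schedule to the Image Builder pipeline then covers the cadence requirement without any custom Lambda glue.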


Expert Solution
Question # 32:

A company is building a new pipeline by using AWS CodePipeline and AWS CodeBuild in a build account. The pipeline consists of two stages. The first stage is a CodeBuild job to build and package an AWS Lambda function. The second stage consists of deployment actions that operate on two different AWS accounts: a development environment account and a production environment account. The deployment stages use the AWS CloudFormation action that CodePipeline invokes to deploy the infrastructure that the Lambda function requires.

A DevOps engineer creates the CodePipeline pipeline and configures the pipeline to encrypt build artifacts by using the AWS Key Management Service (AWS KMS) AWS managed key for Amazon S3 (the aws/s3 key). The artifacts are stored in an S3 bucket. When the pipeline runs, the CloudFormation actions fail with an access denied error.

Which combination of actions must the DevOps engineer perform to resolve this error? (Select TWO.)

Options:

A.

Create an S3 bucket in each AWS account for the artifacts. Allow the pipeline to write to the S3 buckets. Create a CodePipeline S3 action to copy the artifacts to the S3 bucket in each AWS account. Update the CloudFormation actions to reference the artifacts S3 bucket in the production account.


B.

Create a customer managed KMS key. Configure the KMS key policy to allow the IAM roles used by the CloudFormation action to perform decrypt operations. Modify the pipeline to use the customer managed KMS key to encrypt artifacts.


C.

Create an AWS managed KMS key. Configure the KMS key policy to allow the development account and the production account to perform decrypt operations. Modify the pipeline to use the KMS key to encrypt artifacts.


D.

In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, configure the CodePipeline CloudFormation action to use the roles.


E.

In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, modify the artifacts S3 bucket policy to allow the roles access. Configure the CodePipeline CloudFormation action to use the roles.
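
For reference, the customer managed key approach in option B hinges on a key policy statement that lets the cross-account deployment roles decrypt the pipeline artifacts. The statement below is a minimal sketch; the account IDs and role names are placeholders.

```json
{
  "Sid": "AllowCrossAccountDecrypt",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::111111111111:role/CloudFormationDeployRole",
      "arn:aws:iam::222222222222:role/CloudFormationDeployRole"
    ]
  },
  "Action": [
    "kms:Decrypt",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```

A key policy like this is only possible with a customer managed key; the policy of an AWS managed key such as aws/s3 cannot be edited, which is why the default artifact encryption fails across accounts.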


Expert Solution
Question # 33:

A software engineering team is using AWS CodeDeploy to deploy a new version of an application. The team wants to ensure that if any issues arise during the deployment, the process can automatically roll back to the previous version.

During the deployment process, a health check confirms the application's stability. If the health check fails, the deployment must revert automatically.

Which solution will meet these requirements?

Options:

A.

Implement lifecycle event hooks in the deployment configuration.


B.

Use AWS CloudFormation to monitor the health of the deployment.


C.

Set up alarms in Amazon CloudWatch to start a rollback.


D.

Configure automatic rollback settings in AWS CodeDeploy.
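
For reference, the behavior in option D maps to the rollback settings on a CodeDeploy deployment group. The fragment below sketches the relevant fields of a create-deployment-group request; the alarm name is a placeholder.

```json
{
  "autoRollbackConfiguration": {
    "enabled": true,
    "events": [
      "DEPLOYMENT_FAILURE",
      "DEPLOYMENT_STOP_ON_ALARM"
    ]
  },
  "alarmConfiguration": {
    "enabled": true,
    "alarms": [
      { "name": "app-health-check-alarm" }
    ]
  }
}
```

With this configuration, a failed deployment or a tripped health-check alarm makes CodeDeploy redeploy the last known good revision automatically.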


Expert Solution
Question # 34:

A company has multiple AWS accounts. The company uses AWS IAM Identity Center, which is integrated with a third-party SAML 2.0 identity provider (IdP).

The attributes for access control feature is enabled in IAM Identity Center. The attribute mapping list maps the department key from the IdP to the ${path:enterprise.department} attribute. All existing Amazon EC2 instances have a d1, d2, or d3 department tag that corresponds to three of the company's departments.

A DevOps engineer must create policies based on the matching attributes. The policies must grant each user access to only the EC2 instances that are tagged with the user’s respective department name.

Which condition key should the DevOps engineer include in the custom permissions policies to meet these requirements?

Options:

A.

" Condition " : {

" ForAllValues:StringEquals " : {

" aws:TagKeys " : [ " department " ]

}

}


B.

" Condition " : {

" StringEquals " : {

" aws:PrincipalTag/department " : " ${aws:ResourceTag/department} "

}

}


C.

" Condition " : {

" StringEquals " : {

" ec2:ResourceTag/department " : " ${aws:PrincipalTag/department} "

}

}


D.

" Condition " : {

" ForAllValues:StringEquals " : {

" ec2:ResourceTag/department " : [ " d1 " , " d2 " , " d3 " ]

}

}
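
For context, a resource-tag-to-principal-tag comparison of the kind shown in option C would normally sit inside a full identity policy along the following lines. The allowed actions here are illustrative only, not part of the question.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/department": "${aws:PrincipalTag/department}"
        }
      }
    }
  ]
}
```

The ${aws:PrincipalTag/department} variable resolves at request time to the department attribute passed in the SAML session, so each user can act only on instances tagged with their own department.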


Expert Solution
Question # 35:

A company manages multiple AWS accounts in AWS Organizations. The company's security policy states that AWS account root user credentials for member accounts must not be used. The company monitors access to the root user credentials.

A recent alert shows that the root user in a member account launched an Amazon EC2 instance. A DevOps engineer must create an SCP at the organization's root level that will prevent the root user in member accounts from making any AWS service API calls.

Which SCP will meet these requirements?

(The four answer choices are presented as SCP policy images, labeled A through D, and are not reproduced here.)

Options:

A.

Option A


B.

Option B


C.

Option C


D.

Option D
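
Because the answer choices are images, it may help to recall the general shape of an SCP that blocks the root user in member accounts, as described in the AWS Organizations documentation. The sketch below is illustrative and is not a reproduction of any specific option.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRootUser",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:root"
        }
      }
    }
  ]
}
```

The aws:PrincipalArn condition matches the root user's ARN in every member account, while SCPs never apply to the management account, so administrative access there is unaffected.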


Expert Solution
Question # 36:

A DevOps engineer needs to implement a CI/CD pipeline that uses AWS CodeBuild to run a test suite. The test suite contains many test cases and takes a long time to finish running. The DevOps engineer wants to reduce the duration to run the tests. However, the DevOps engineer still wants to generate a single test report for all the test cases.

Which solution will meet these requirements?

Options:

A.

Run the test suite in a batch build type of build matrix by using the codebuild-tests-run command.


B.

Run the test suite in a batch build type of build fanout by using the codebuild-tests-run command.


C.

Run the test suite in a batch build type of build list by using different subsets of the test cases.


D.

Run the test suite in a batch build type of build graph by using different subsets of the test cases.
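
For context, the build fanout batch type referenced in option B is configured in the buildspec. The sketch below is an assumption-laden illustration of CodeBuild's parallel test execution feature, which shards test files across parallel builds and merges their results into a single report; verify the exact batch syntax and the codebuild-tests-run flags against the current CodeBuild documentation before relying on them.

```yaml
version: 0.2
batch:
  build-fanout:
    parallelism: 5          # number of parallel shards (illustrative)
    ignore-failure: false
phases:
  build:
    commands:
      # codebuild-tests-run distributes the matched test files across the fanout builds
      - codebuild-tests-run --test-command "pytest" --files-search "codebuild-glob-search 'tests/**/*.py'"
reports:
  test-report:
    files:
      - "reports/*.xml"
```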


Expert Solution
Question # 37:

A company wants to use a grid system for a proprietary enterprise in-memory data store on top of AWS. This system can run on multiple server nodes in any Linux-based distribution. The system must be able to reconfigure the entire cluster every time a node is added or removed. When adding or removing nodes, an /etc/cluster/nodes.config file must be updated to list the IP addresses of the current node members of that cluster.

The company wants to automate the task of adding new nodes to a cluster.

What can a DevOps engineer do to meet these requirements?

Options:

A.

Use AWS OpsWorks Stacks to layer the server nodes of that cluster. Create a Chef recipe that populates the content of the /etc/cluster/nodes.config file and restarts the service by using the current members of the layer. Assign that recipe to the Configure lifecycle event.


B.

Put the nodes.config file in version control. Create an AWS CodeDeploy deployment configuration and deployment group based on an Amazon EC2 tag value for the cluster nodes. When adding a new node to the cluster, update the file with all tagged instances and make a commit in version control. Deploy the new file and restart the services.


C.

Create an Amazon S3 bucket and upload a version of the /etc/cluster/nodes.config file. Create a crontab script that polls for that S3 file and downloads it frequently. Use a process manager, such as Monit or systemd, to restart the cluster services when it detects that the file was modified. When adding a node to the cluster, edit the file's list of current members and upload the new file to the S3 bucket.


D.

Create a user data script that lists all members of the current security group of the cluster and automatically updates the /etc/cluster/nodes.config file whenever a new instance is added to the cluster.
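
For context, the Configure lifecycle approach in option A is typically implemented as a Chef recipe that reads the layer's current members from the OpsWorks node object, which OpsWorks Stacks refreshes on every Configure event. The recipe below is a rough sketch; the attribute paths, layer name, template, and service name are assumptions, not details from the question.

```ruby
# Sketch of a recipe assigned to the Configure lifecycle event.
# Attribute paths are illustrative; consult the OpsWorks Stacks
# node object for the exact layout in your stack.
members = node[:opsworks][:layers][:cluster][:instances].map do |_name, attrs|
  attrs[:private_ip]
end

# Rewrite the membership file, then restart the cluster service
# only when the rendered content actually changes.
template '/etc/cluster/nodes.config' do
  source 'nodes.config.erb'
  variables(members: members.sort)
  notifies :restart, 'service[cluster]', :delayed
end

service 'cluster' do
  action :nothing
end
```

Because OpsWorks runs the Configure event on every instance in the stack whenever any instance enters or leaves the online state, each node's file converges automatically without custom orchestration.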


Expert Solution
Question # 38:

A company requires an RPO of 2 hours and an RTO of 10 minutes for its data and application at all times. An application uses a MySQL database and Amazon EC2 web servers. The development team needs a strategy for failover and disaster recovery.

Which combination of deployment strategies will meet these requirements? (Select TWO.)

Options:

A.

Create an Amazon Aurora cluster in one Availability Zone across multiple Regions as the data store. Use Aurora's automatic recovery capabilities in the event of a disaster.


B.

Create an Amazon Aurora global database in two Regions as the data store. In the event of a failure, promote the secondary Region as the primary for the application.


C.

Create an Amazon Aurora multi-master cluster across multiple Regions as the data store. Use a Network Load Balancer to balance the database traffic in different Regions.


D.

Set up the application in two Regions and use Amazon Route 53 failover-based routing that points to the Application Load Balancers in both Regions. Use health checks to determine the availability in a given Region. Use Auto Scaling groups in each Region to adjust capacity based on demand.


E.

Set up the application in two Regions and use a multi-Region Auto Scaling group behind Application Load Balancers to manage the capacity based on demand. In the event of a disaster, adjust the Auto Scaling group's desired instance count to increase baseline capacity in the failover Region.


Expert Solution
Question # 39:

A company is running its ecommerce website on AWS. The website is currently hosted on a single Amazon EC2 instance in one Availability Zone. A MySQL database runs on the same EC2 instance. The company needs to eliminate single points of failure in the architecture to improve the website's availability and resilience.

Which solution will meet these requirements with the LEAST configuration changes to the website?

Options:

A.

Deploy the application by using AWS Fargate containers. Migrate the database to Amazon DynamoDB. Use Amazon API Gateway to route requests.


B.

Deploy the application on EC2 instances across multiple Availability Zones. Put the EC2 instances into an Auto Scaling group behind an Application Load Balancer. Migrate the database to Amazon Aurora Multi-AZ. Use Amazon CloudFront for content delivery.


C.

Use AWS Elastic Beanstalk to deploy the application across multiple AWS Regions. Migrate the database to Amazon Redshift. Use Amazon ElastiCache for session management.


D.

Migrate the application to AWS Lambda functions. Use Amazon S3 for static content hosting. Migrate the database to Amazon DocumentDB (with MongoDB compatibility).


Expert Solution
Question # 40:

A company uses Amazon S3 to store proprietary information. The development team creates buckets for new projects on a daily basis. The security team wants to ensure that all existing and future buckets have encryption, logging, and versioning enabled. Additionally, no buckets should ever be publicly readable or writable.

What should a DevOps engineer do to meet these requirements?

Options:

A.

Enable AWS CloudTrail and configure automatic remediation using AWS Lambda.


B.

Enable AWS Config rules and configure automatic remediation using AWS Systems Manager documents.


C.

Enable AWS Trusted Advisor and configure automatic remediation using Amazon EventBridge.


D.

Enable AWS Systems Manager and configure automatic remediation using Systems Manager documents.
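
For context, the pattern in option B pairs an AWS Config managed rule with an automatic remediation that runs a Systems Manager Automation document. The fragment below sketches an input to the put-remediation-configurations API for the versioning requirement; the document name and role ARN are assumptions to verify against your environment.

```json
{
  "RemediationConfigurations": [
    {
      "ConfigRuleName": "s3-bucket-versioning-enabled",
      "TargetType": "SSM_DOCUMENT",
      "TargetId": "AWS-ConfigureS3BucketVersioning",
      "Automatic": true,
      "Parameters": {
        "BucketName": {
          "ResourceValue": { "Value": "RESOURCE_ID" }
        },
        "AutomationAssumeRole": {
          "StaticValue": { "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"] }
        }
      },
      "MaximumAutomaticAttempts": 3,
      "RetryAttemptSeconds": 60
    }
  ]
}
```

Analogous rule-plus-remediation pairs would cover the encryption, logging, and public-access requirements, and Config evaluates both existing buckets and any bucket the development team creates later.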


Expert Solution