
Pass the Amazon Web Services AWS Certified Solutions Architect - Associate (SAA-C03) exam with CertsForce questions and answers

Viewing page 8 of 14 (questions 141-160)
Question #141:

A company needs a solution to ingest streaming sensor data from 100,000 devices, transform the data in near real time, and load the data into Amazon S3 for analysis. The solution must be fully managed, scalable, and maintain sub-second ingestion latency.

Options:

A.

Use Amazon Kinesis Data Streams to ingest the data. Use Amazon Managed Service for Apache Flink to process the data in near real time. Use an Amazon Data Firehose stream to send processed data to Amazon S3.


B.

Use Amazon Simple Queue Service (Amazon SQS) standard queues to collect the sensor data. Invoke AWS Lambda functions to transform and process SQS messages in batches. Configure the Lambda functions to use an AWS SDK to write transformed data to Amazon S3.


C.

Deploy a fleet of Amazon EC2 instances that run Apache Kafka to ingest the data. Run Apache Spark on Amazon EMR clusters to process the data. Configure Spark to write processed data directly to Amazon S3.


D.

Implement Amazon EventBridge to capture all sensor data. Use AWS Batch to run containerized transformation jobs on a schedule. Configure AWS Batch jobs to process data in chunks. Save results to Amazon S3.
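
For context, a minimal sketch of the ingestion side of the pipeline that option A describes, in Python with boto3; the stream name, device ID, and payload are hypothetical:

    import json
    import boto3

    STREAM_NAME = "sensor-ingest"  # hypothetical stream name

    kinesis = boto3.client("kinesis")

    def send_reading(device_id: str, reading: dict) -> None:
        # Using the device ID as the partition key spreads 100,000 devices
        # across shards while keeping each device's records in order.
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(reading).encode("utf-8"),
            PartitionKey=device_id,
        )

    send_reading("device-0001", {"temp_c": 21.4, "ts": "2024-06-01T00:00:00Z"})

The Managed Service for Apache Flink application and the Firehose delivery to S3 would be configured separately; this covers only the producer side.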


Question #142:

A global ecommerce company is planning to enhance its AWS data storage architecture to improve system availability and resilience.

The company handles millions of daily transactions in relational form. It stores unstructured data in the form of images over 4 MB in size.

The solution must provide continuous operation in multiple geographic locations, minimize downtime/data loss, and support both transactional and unstructured data.

Which solution will meet these requirements?

Options:

A.

Use Amazon RDS Multi-AZ deployments for transaction data. Use Amazon DynamoDB global tables for unstructured data.


B.

Use an Amazon Aurora global database for transaction data. Use Amazon S3 with Cross-Region Replication for unstructured data.


C.

Use Amazon DynamoDB global tables for both transaction data and unstructured data.


D.

Use an Amazon Aurora global database for transaction data. Use Amazon Elastic File System (Amazon EFS) with Cross-Region Replication for unstructured data.
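
To illustrate the Cross-Region Replication half of option B, a hedged boto3 sketch with hypothetical bucket names and replication role; both buckets must already have versioning enabled:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="ecommerce-images-us-east-1",  # hypothetical source bucket
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
            "Rules": [
                {
                    "ID": "replicate-all-images",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # empty filter = replicate every object
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::ecommerce-images-eu-west-1"
                    },
                }
            ],
        },
    )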


Question #143:

An ecommerce company is launching a new marketing campaign. The company anticipates that the campaign will generate ten times the normal number of daily orders through the company's ecommerce application. The campaign will last 3 days.

The ecommerce application architecture is based on Amazon EC2 instances in an Auto Scaling group and an Amazon RDS for MySQL database. The application writes order transactions to an Amazon Elastic File System (Amazon EFS) file system before the application writes orders to the database. During normal operations, the application write operations peak at 5,000 IOPS.

A solutions architect needs to ensure that the application can handle the anticipated workload during the marketing campaign.

Which solution will meet this requirement?

Options:

A.

For the duration of the campaign, increase the provisioned IOPS for the RDS for MySQL database. Set the Amazon EFS throughput mode to Bursting throughput.


B.

For the duration of the campaign, increase the provisioned IOPS for the RDS for MySQL database. Set the Amazon EFS throughput mode to Elastic throughput.


C.

Convert the database to a Multi-AZ deployment. Set the Amazon EFS throughput mode to Elastic throughput for the duration of the campaign.


D.

Use AWS Database Migration Service (AWS DMS) to convert the database to RDS for PostgreSQL. Set the Amazon EFS throughput mode to Bursting throughput.
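
A rough boto3 sketch of the two changes in option B, with hypothetical resource identifiers; the 50,000 IOPS figure simply assumes ten times the normal 5,000 IOPS peak:

    import boto3

    efs = boto3.client("efs")
    rds = boto3.client("rds")

    # Elastic throughput scales automatically with the workload, so the
    # file system can absorb the campaign spike without pre-provisioning.
    efs.update_file_system(
        FileSystemId="fs-0123456789abcdef0",
        ThroughputMode="elastic",
    )

    # Raise the database's provisioned IOPS for the 3-day campaign.
    rds.modify_db_instance(
        DBInstanceIdentifier="orders-db",
        Iops=50000,
        ApplyImmediately=True,
    )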


Question #144:

A company is building a serverless application that processes large volumes of data from a mobile app. A Lambda function processes the data and stores it in DynamoDB. The company must ensure the application can recover from failures and continue processing without losing records.

Which solution will meet these requirements?

Options:

A.

Configure the Lambda function with a dead-letter queue (DLQ) using SQS. Retry failed records from the DLQ with exponential backoff.


B.

Configure the Lambda function to read records from Amazon Data Firehose. Replay Firehose records in case of failures.


C.

Use Amazon OpenSearch Service to store failed records. Configure Lambda to retry failed records from OpenSearch. Use EventBridge for orchestration.


D.

Use Amazon SNS to store failed records. Configure Lambda to retry records from SNS. Use API Gateway to orchestrate retries.
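
A short boto3 sketch of attaching an SQS dead-letter queue as in option A; the function name and queue ARN are hypothetical, and the DLQ catches events from failed asynchronous invocations after Lambda's built-in retries are exhausted:

    import boto3

    lam = boto3.client("lambda")

    lam.update_function_configuration(
        FunctionName="process-mobile-data",
        DeadLetterConfig={
            "TargetArn": "arn:aws:sqs:us-east-1:123456789012:process-failures-dlq"
        },
    )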


Question #145:

A company runs several applications on Amazon EC2 instances. The company stores configuration files in an Amazon S3 bucket.

A solutions architect must provide the company's applications with access to the configuration files. The solutions architect must follow AWS best practices for security.

Which solution will meet these requirements?

Options:

A.

Use the AWS account root user access keys.


B.

Use the AWS access key ID and the EC2 secret access key.


C.

Use an IAM role to grant the necessary permissions to the applications.


D.

Activate multi-factor authentication (MFA) and versioning on the S3 bucket.
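
To illustrate option C, a minimal sketch that reads a configuration file with no credentials in the code; the bucket and key names are hypothetical. On an EC2 instance with an attached IAM role, boto3 obtains temporary credentials from the instance profile automatically:

    import boto3

    # No access keys anywhere in code or config files: the SDK picks up
    # temporary credentials from the instance metadata service.
    s3 = boto3.client("s3")

    obj = s3.get_object(Bucket="app-config-bucket", Key="prod/app.ini")
    config_text = obj["Body"].read().decode("utf-8")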


Question #146:

A media streaming company needs to deploy its video processing application across multiple Availability Zones for high availability. The application consists of containerized microservices that process video files. The microservices must automatically recover from failures.

Which solution meets these requirements with the LEAST operational overhead?

Options:

A.

Deploy the containers to Amazon ECS with the EC2 launch type.


B.

Deploy the containers to Amazon EKS with self-managed nodes.


C.

Deploy the containers to Amazon ECS with the Fargate launch type.


D.

Deploy the containers directly to Amazon EC2 instances.
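
A hedged boto3 sketch of option C, with hypothetical cluster, task definition, and subnet IDs; with Fargate there are no instances to patch or scale, and ECS replaces failed tasks to keep the desired count running across Availability Zones:

    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="video-processing",
        serviceName="transcoder",
        taskDefinition="transcoder:1",
        desiredCount=3,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                # One subnet per Availability Zone for high availability.
                "subnets": ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"],
                "assignPublicIp": "DISABLED",
            }
        },
    )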


Question #147:

A company launches a new web application that uses an Amazon Aurora PostgreSQL database. The company wants to add new features to the application that rely on AI. The company requires vector storage capability to use AI tools.

Which solution will meet this requirement MOST cost-effectively?

Options:

A.

Use Amazon OpenSearch Service to create an OpenSearch domain. Configure the application to write vector embeddings to a vector index.


B.

Create an Amazon DocumentDB cluster. Configure the application to write vector embeddings to a vector index.


C.

Create an Amazon Neptune ML cluster. Configure the application to write vector embeddings to a vector graph.


D.

Install the pgvector extension on the Aurora PostgreSQL database. Configure the application to write vector embeddings to a vector table.
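
A minimal sketch of option D using psycopg2; the connection details are hypothetical, and the toy 3-dimensional vectors stand in for real embeddings (models typically emit hundreds or thousands of dimensions):

    import psycopg2

    conn = psycopg2.connect(
        host="aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
        dbname="app",
        user="app_user",
        password="***",
    )
    with conn, conn.cursor() as cur:
        # pgvector is available as an extension on Aurora PostgreSQL,
        # so no additional database service is required.
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
        cur.execute(
            "CREATE TABLE IF NOT EXISTS embeddings ("
            " id bigserial PRIMARY KEY,"
            " content text,"
            " embedding vector(3));"
        )
        cur.execute(
            "INSERT INTO embeddings (content, embedding) VALUES (%s, %s);",
            ("hello world", "[0.12, -0.03, 0.98]"),
        )
        # Nearest-neighbor search by cosine distance.
        cur.execute(
            "SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT 5;",
            ("[0.10, 0.00, 1.00]",),
        )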


Question #148:

A company wants to use a data lake that is hosted on Amazon S3 to provide analytics services for historical data. The data lake consists of 800 tables but is expected to grow to thousands of tables. More than 50 departments use the tables, and each department has hundreds of users. Different departments need access to specific tables and columns.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an IAM role for each department. Use AWS Lake Formation based access control to grant each IAM role access to specific tables and columns. Use Amazon Athena to analyze the data.


B.

Create an Amazon Redshift cluster for each department. Use AWS Glue to ingest into the Redshift cluster only the tables and columns that are relevant to that department. Create Redshift database users. Grant the users access to the relevant department's Redshift cluster. Use Amazon Redshift to analyze the data.


C.

Create an IAM role for each department. Use AWS Lake Formation tag-based access control to grant each IAM role access to only the relevant resources. Create LF-tags that are attached to tables and columns. Use Amazon Athena to analyze the data.


D.

Create an Amazon EMR cluster for each department. Configure an IAM service role for each EMR cluster to access relevant S3 files. For each department's users, create an IAM role that provides access to the relevant EMR cluster. Use Amazon EMR to analyze the data.
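
A hedged boto3 sketch of the LF-tag flow in option C, using hypothetical tag, table, and role names:

    import boto3

    lf = boto3.client("lakeformation")

    # Define the tag once; the values are hypothetical department names.
    lf.create_lf_tag(TagKey="department", TagValues=["marketing", "finance"])

    # Tag a table; new tables need only a tag, not new per-role grants.
    lf.add_lf_tags_to_resource(
        Resource={"Table": {"DatabaseName": "datalake", "Name": "campaign_stats"}},
        LFTags=[{"TagKey": "department", "TagValues": ["marketing"]}],
    )

    # One grant per department role covers every table, current or future,
    # that carries the matching tag value.
    lf.grant_permissions(
        Principal={
            "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/marketing"
        },
        Resource={
            "LFTagPolicy": {
                "ResourceType": "TABLE",
                "Expression": [{"TagKey": "department", "TagValues": ["marketing"]}],
            }
        },
        Permissions=["SELECT"],
    )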


Question #149:

A company is storing data in Amazon S3 buckets. The company needs to retain any objects that contain personally identifiable information (PII) that might need to be reviewed.

A solutions architect must develop an automated solution to identify objects that contain PII and apply the necessary controls to prevent deletion before review.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

Options:

A.

Create a job in Amazon Macie to scan the S3 buckets for the relevant sensitive data identifiers.


B.

Move the identified objects to the S3 Glacier Deep Archive storage class.


C.

Create an AWS Lambda function that performs an S3 Object Lock legal hold operation on the identified objects.


D.

Create an AWS Lambda function that applies an S3 Object Lock retention period to the identified objects in governance mode.


E.

Create an Amazon EventBridge rule that invokes the AWS Lambda function when Amazon Macie detects sensitive data.


F.

Configure multi-factor authentication (MFA) delete on the S3 buckets.
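
To illustrate the legal-hold step (option C), a sketch of a Lambda handler that an EventBridge rule could invoke on a Macie sensitive-data finding; it assumes the buckets were created with S3 Object Lock enabled, and the event field paths follow the Macie finding schema:

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Pull the affected object's location out of the Macie finding.
        affected = event["detail"]["resourcesAffected"]
        bucket = affected["s3Bucket"]["name"]
        key = affected["s3Object"]["key"]

        # A legal hold blocks deletion until the hold is explicitly
        # removed after review; it has no fixed retention period.
        s3.put_object_legal_hold(
            Bucket=bucket,
            Key=key,
            LegalHold={"Status": "ON"},
        )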


Question #150:

A company runs an application on an Amazon ECS cluster that uses AWS Fargate On-Demand capacity. The application cannot tolerate any sudden interruptions. The company wants to optimize costs for the application and ensure that the application remains operational.

Which solution will meet these requirements?

Options:

A.

Create an On-Demand Capacity Reservation.


B.

Purchase Convertible Reserved Instances.


C.

Use Fargate Spot capacity instead of On-Demand capacity with a rolling update deployment type.


D.

Purchase a Compute Savings Plan.


Question #151:

A company generates approximately 20 GB of data multiple times each day. The company uses AWS DataSync to copy all data from on-premises storage to Amazon S3 every 6 hours for further processing. The analytics team wants to modify the copy process to copy only data relevant to the analytics team and ignore the rest of the data. The team wants to copy data as soon as possible and receive a notification when the copy process is finished.

Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)

Options:

A.

Modify the data generation process on-premises to create a manifest file at the end of the copy process with the names of the objects to be copied to Amazon S3. Create a custom script to upload the manifest file to an S3 bucket.


B.

Modify the data generation process on-premises to create a manifest file at the end of the copy process with the names of the objects to be copied to Amazon S3. Create an AWS Lambda function to load the manifest file data into an Amazon DynamoDB table.


C.

Create an AWS Lambda function that Amazon EventBridge invokes when the manifest file is loaded into Amazon DynamoDB. Configure the Lambda function to use the manifest file to copy the data from on-premises storage to the S3 bucket.


D.

Create an AWS Lambda function that an S3 Event Notification invokes when the manifest file is uploaded. Configure the Lambda function to invoke the DataSync task by calling the StartTaskExecution API action with a manifest.


E.

Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an Amazon EventBridge rule to send an email notification to the SNS topic when the DataSync task execution status changes to SUCCESS or to ERROR.


F.

Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function to send an email notification to the SNS topic when the DataSync task execution status changes to SUCCESS or to ERROR.
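
A sketch of the Lambda function in option D, assuming the manifest support that DataSync's StartTaskExecution API accepts; the task and role ARNs are hypothetical:

    import boto3

    datasync = boto3.client("datasync")

    def handler(event, context):
        # Invoked by an S3 Event Notification when the manifest file is
        # uploaded; transfers only the objects the manifest lists.
        record = event["Records"][0]["s3"]
        datasync.start_task_execution(
            TaskArn="arn:aws:datasync:us-east-1:123456789012:task/task-0123456789abcdef0",
            ManifestConfig={
                "Action": "TRANSFER",
                "Format": "CSV",
                "Source": {
                    "S3": {
                        "S3BucketArn": f"arn:aws:s3:::{record['bucket']['name']}",
                        "ManifestObjectPath": record["object"]["key"],
                        "BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-manifest-read",
                    }
                },
            },
        )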


Question #152:

A company uses an Amazon CloudFront distribution to serve thousands of media files to users. The CloudFront distribution uses a private Amazon S3 bucket as an origin.

A solutions architect must prevent users in specific countries from accessing the company's files.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Require users to access the files by using CloudFront signed URLs.


B.

Configure geographic restrictions in CloudFront.


C.

Require users to access the files by using CloudFront signed cookies.


D.

Configure an origin access control (OAC) between CloudFront and the S3 bucket.
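
For reference, a boto3 sketch of option B's geographic restriction; the distribution ID and country codes are placeholders, and update_distribution needs the full current configuration plus the ETag:

    import boto3

    cloudfront = boto3.client("cloudfront")

    dist_id = "E1234567890ABC"  # hypothetical distribution ID
    resp = cloudfront.get_distribution_config(Id=dist_id)
    config = resp["DistributionConfig"]

    # Deny viewers in the listed countries (ISO 3166-1 alpha-2 codes).
    config["Restrictions"] = {
        "GeoRestriction": {
            "RestrictionType": "blacklist",
            "Quantity": 2,
            "Items": ["XX", "YY"],  # placeholder country codes
        }
    }

    cloudfront.update_distribution(
        Id=dist_id,
        DistributionConfig=config,
        IfMatch=resp["ETag"],
    )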


Question #153:

A company hosts a photo sharing web application on AWS. Users upload and share thousands of photos each hour. The company needs a durable storage solution that provides retrieval mechanisms for the photos. Most uploaded photos are not accessed often after 30 days, but the company does not want to delete older photos.

Which solution will meet these requirements in the MOST cost-effective way?

Options:

A.

Store the photos in an Amazon EFS file system for immediate use. Use AWS Backup with on-demand backups and point-in-time recovery (PITR) to store photos that are older than 30 days.


B.

Store the photos in an Amazon S3 bucket. Use Amazon S3 Lifecycle configurations to move photos that are older than 30 days to S3 Intelligent-Tiering.


C.

Store the photos in Amazon DynamoDB for immediate use. Use AWS Backup with on-demand backups and point-in-time recovery (PITR) to store photos that are older than 30 days.


D.

Store the photos in Amazon FSx for Lustre for immediate use. Use AWS Backup with continuous backups and point-in-time recovery (PITR) to store photos that are older than 30 days.
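
A minimal boto3 sketch of the lifecycle rule in option B, with a hypothetical bucket name:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="photo-sharing-uploads",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-after-30-days",
                    "Status": "Enabled",
                    "Filter": {},  # apply to every object in the bucket
                    "Transitions": [
                        # Intelligent-Tiering then shifts rarely accessed
                        # photos to cheaper tiers without deleting them.
                        {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                    ],
                }
            ]
        },
    )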


Question #154:

A company needs to design a hybrid network architecture. The company's workloads are currently stored in the AWS Cloud and in on-premises data centers. The workloads require single-digit millisecond latencies to communicate. The company uses AWS Transit Gateway to connect multiple VPCs.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

Options:

A.

Establish an AWS Site-to-Site VPN connection to each VPC.


B.

Associate an AWS Direct Connect gateway with the transit gateway that is attached to the VPCs.


C.

Establish an AWS Site-to-Site VPN connection to an AWS Direct Connect gateway.


D.

Establish an AWS Direct Connect connection. Create a transit virtual interface (VIF) to a Direct Connect gateway.


E.

Associate AWS Site-to-Site VPN connections with the transit gateway that is attached to the VPCs.
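
A hedged boto3 sketch of the transit virtual interface step from option D; the connection ID, VLAN, ASN, and Direct Connect gateway ID are all hypothetical:

    import boto3

    dx = boto3.client("directconnect")

    # The transit VIF terminates on a Direct Connect gateway, which is
    # then associated with the transit gateway (option B).
    dx.create_transit_virtual_interface(
        connectionId="dxcon-abc123",
        newTransitVirtualInterface={
            "virtualInterfaceName": "hybrid-transit-vif",
            "vlan": 101,
            "asn": 65000,  # on-premises BGP ASN
            "directConnectGatewayId": "11111111-2222-3333-4444-555555555555",
        },
    )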


Question #155:

A company provides a trading platform to customers. The platform uses an Amazon API Gateway REST API, AWS Lambda functions, and an Amazon DynamoDB table. Each trade that the platform processes invokes a Lambda function that stores the trade data in Amazon DynamoDB. The company wants to ingest trade data into a data lake in Amazon S3 for near real-time analysis.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon S3.


B.

Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.


C.

Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure Kinesis Data Streams to invoke a Lambda function that writes the data to Amazon S3.


D.

Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure a data stream to be the input for Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.
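
A one-call boto3 sketch of the change data capture piece in options C and D, with hypothetical table and stream names; Firehose would then read the stream and deliver to S3 without custom code to maintain:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Every item-level change on the table flows into the Kinesis stream.
    dynamodb.enable_kinesis_streaming_destination(
        TableName="trades",
        StreamArn="arn:aws:kinesis:us-east-1:123456789012:stream/trade-changes",
    )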


Question #156:

A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being accessed or are rarely accessed.

Which solution will accomplish this goal with the LEAST operational overhead?

Options:

A.

Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.


B.

Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.


C.

Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with Amazon Athena.


D.

Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon CloudWatch Logs.


Question #157:

A company is building a cloud-based application on AWS that will handle sensitive customer data. The application uses Amazon RDS for the database, Amazon S3 for object storage, and S3 Event Notifications that invoke AWS Lambda for serverless processing.

The company uses AWS IAM Identity Center to manage user credentials. The development, testing, and operations teams need secure access to Amazon RDS and Amazon S3 while ensuring the confidentiality of sensitive customer data. The solution must comply with the principle of least privilege.

Which solution meets these requirements with the LEAST operational overhead?

Options:

A.

Use IAM roles with least privilege to grant all the teams access. Assign IAM roles to each team with customized IAM policies that define specific permissions for Amazon RDS and S3 object access based on team responsibilities.


B.

Enable IAM Identity Center with an Identity Center directory. Create and configure permission sets with granular access to Amazon RDS and Amazon S3. Assign all the teams to groups that have specific access with the permission sets.


C.

Create individual IAM users for each member in all the teams with role-based permissions. Assign the IAM roles with predefined policies for RDS and S3 access to each user based on user needs. Implement IAM Access Analyzer for periodic credential evaluation.


D.

Use AWS Organizations to create separate accounts for each team. Implement cross-account IAM roles with least privilege. Grant specific permissions for RDS and S3 access based on team roles and responsibilities.
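
A hedged sketch of option B's permission-set flow via the IAM Identity Center (sso-admin) API; the instance ARN is hypothetical, and an AWS managed policy stands in for a team-specific least-privilege policy:

    import boto3

    sso = boto3.client("sso-admin")

    INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-0123456789abcdef"  # hypothetical

    resp = sso.create_permission_set(
        InstanceArn=INSTANCE_ARN,
        Name="DevTeamDataAccess",
        SessionDuration="PT8H",  # temporary credentials expire after 8 hours
    )
    ps_arn = resp["PermissionSet"]["PermissionSetArn"]

    # Scope the permission set; a real deployment would attach a custom
    # least-privilege policy per team instead of this managed policy.
    sso.attach_managed_policy_to_permission_set(
        InstanceArn=INSTANCE_ARN,
        PermissionSetArn=ps_arn,
        ManagedPolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )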


Question #158:

A consulting company provides professional services to customers worldwide. The company provides solutions and tools for customers to expedite gathering and analyzing data on AWS. The company needs to centrally manage and deploy a common set of solutions and tools for customers to use for self-service purposes.

Which solution will meet these requirements?

Options:

A.

Create AWS CloudFormation templates for the customers.


B.

Create AWS Service Catalog products for the customers.


C.

Create AWS Systems Manager templates for the customers.


D.

Create AWS Config items for the customers.
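
A boto3 sketch of publishing one tool as a Service Catalog product, as in option B; the product name and template URL are hypothetical:

    import boto3

    sc = boto3.client("servicecatalog")

    sc.create_product(
        Name="data-analytics-toolkit",
        Owner="consulting-platform-team",
        ProductType="CLOUD_FORMATION_TEMPLATE",
        ProvisioningArtifactParameters={
            "Name": "v1",
            "Type": "CLOUD_FORMATION_TEMPLATE",
            "Info": {
                # Hypothetical template location in S3.
                "LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/toolkit.yaml"
            },
        },
    )

Customers then launch the product self-service through a Service Catalog portfolio, while the company updates the template centrally.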


Question #159:

A media company hosts a web application on AWS for uploading videos to an Amazon S3 bucket. Only authenticated users should be able to upload videos, and only within a specified time frame after authentication.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Configure the application to generate IAM temporary security credentials for authenticated users.


B.

Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.


C.

Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.


D.

Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
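
A minimal sketch of the pre-signed URL generation in option B; the bucket and object key are hypothetical, and the 15-minute expiry stands in for the required time frame:

    import boto3

    s3 = boto3.client("s3")

    # The URL permits a single PUT to this key and expires automatically,
    # so no per-user IAM credentials need to be issued or revoked.
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "video-uploads", "Key": "user-123/clip.mp4"},
        ExpiresIn=900,
    )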


Question #160:

A company is designing a solution to capture customer activity on the company's web applications. The company wants to analyze the activity data to make predictions.

Customer activity on the web applications is unpredictable and can increase suddenly. The company requires a solution that integrates with other web applications. The solution must include an authorization step.

Which solution will meet these requirements?

Options:

A.

Deploy a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Configure the applications to pass an authorization header to the GWLB.


B.

Deploy an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream. Store the data in an Amazon S3 bucket. Use an AWS Lambda function to handle authorization.


C.

Deploy an Amazon API Gateway endpoint in front of an Amazon Data Firehose delivery stream. Store the data in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to handle authorization.


D.

Deploy a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to handle authorization.
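
A minimal sketch of a token-based API Gateway Lambda authorizer, as in option C; the token comparison is a placeholder for real validation logic:

    # Lambda authorizer: API Gateway passes the caller's token and
    # expects back an IAM policy that allows or denies the invocation.
    def handler(event, context):
        token = event.get("authorizationToken", "")
        effect = "Allow" if token == "expected-secret-token" else "Deny"
        return {
            "principalId": "web-app-client",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Action": "execute-api:Invoke",
                        "Effect": effect,
                        "Resource": event["methodArn"],
                    }
                ],
            },
        }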

