
Amazon Web Services AWS Certified Data Engineer - Associate (DEA-C01) Data-Engineer-Associate Question # 37 Topic 4 Discussion

Data-Engineer-Associate Exam Topic 4 Question 37 Discussion:
Question #: 37
Topic #: 4

A company is planning to use a provisioned Amazon EMR cluster that runs Apache Spark jobs to perform big data analysis. The company requires high reliability. A big data team must follow best practices for running cost-optimized and long-running workloads on Amazon EMR. The team must find a solution that will maintain the company's current level of performance.

Which combination of resources will meet these requirements MOST cost-effectively? (Choose two.)


A. Use Hadoop Distributed File System (HDFS) as a persistent data store.

B. Use Amazon S3 as a persistent data store.

C. Use x86-based instances for core nodes and task nodes.

D. Use Graviton instances for core nodes and task nodes.

E. Use Spot Instances for all primary nodes.
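For context on how these options translate into an actual cluster definition: a minimal sketch of an EMR configuration that follows the S3-plus-Graviton cost-optimization pattern is shown below. All names and sizes (cluster name, bucket, `m6g.xlarge`, instance counts) are illustrative assumptions, not values from the question; a dict of this shape could be passed to boto3's `emr` client `run_job_flow` call, which in a real deployment also requires service and job-flow IAM roles.

```python
# Illustrative EMR cluster configuration (all names/sizes are assumptions).
# Persistent data lives in Amazon S3 (EMRFS paths) rather than HDFS, so the
# cluster can be resized or replaced without data loss. Graviton (m6g)
# instances are used for core and task nodes for better price/performance.
cluster_config = {
    "Name": "spark-analysis",                   # hypothetical cluster name
    "ReleaseLabel": "emr-6.15.0",
    "Applications": [{"Name": "Spark"}],
    "LogUri": "s3://example-bucket/emr-logs/",  # logs persisted to S3
    "Instances": {
        "InstanceGroups": [
            # Primary (master) node: On-Demand for reliability.
            {"InstanceRole": "MASTER", "InstanceType": "m6g.xlarge",
             "InstanceCount": 1, "Market": "ON_DEMAND"},
            # Core nodes: On-Demand, since they hold HDFS shuffle/temp data.
            {"InstanceRole": "CORE", "InstanceType": "m6g.xlarge",
             "InstanceCount": 2, "Market": "ON_DEMAND"},
            # Task nodes: Spot is acceptable here; they store no HDFS data,
            # so interruption costs only recomputation, not data loss.
            {"InstanceRole": "TASK", "InstanceType": "m6g.xlarge",
             "InstanceCount": 4, "Market": "SPOT"},
        ],
    },
}

groups = cluster_config["Instances"]["InstanceGroups"]
# Every instance group uses a Graviton (m6g) type, and persistent
# log output points at S3 rather than cluster-local HDFS.
assert all(g["InstanceType"].startswith("m6g") for g in groups)
assert cluster_config["LogUri"].startswith("s3://")
```

Note the design choice in the sketch: Spot capacity is confined to task nodes, since primary and core nodes must stay available for the cluster to remain reliable.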


