Amazon Web Services AWS Certified Data Engineer - Associate (DEA-C01) Data-Engineer-Associate Question # 55 Topic 6 Discussion

Question #: 55
Topic #: 6

A data engineer needs to build an extract, transform, and load (ETL) job. The ETL job will process daily incoming .csv files that users upload to an Amazon S3 bucket. The size of each S3 object is less than 100 MB.

Which solution will meet these requirements MOST cost-effectively?


A. Write a custom Python application. Host the application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.

B. Write a PySpark ETL script. Host the script on an Amazon EMR cluster.

C. Write an AWS Glue PySpark job. Use Apache Spark to transform the data.

D. Write an AWS Glue Python shell job. Use pandas to transform the data.
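If the pandas-based approach in option D were used, the transform step might look like the sketch below. This is illustrative only: the transform logic (normalizing column names and dropping empty rows) and the sample data are assumptions, and an in-memory byte buffer stands in for the S3 object. In a real AWS Glue Python shell job the bytes would come from, and be written back with, boto3 S3 calls (`get_object` / `put_object`).

```python
import io

import pandas as pd


def transform_csv(csv_bytes: bytes) -> bytes:
    """Example transform for one incoming .csv object.

    In a Glue Python shell job, csv_bytes would typically come from
    s3.get_object(Bucket=..., Key=...)["Body"].read() via boto3.
    The transform shown here is a placeholder: normalize header names
    and drop fully empty rows.
    """
    df = pd.read_csv(io.BytesIO(csv_bytes))
    # Normalize headers: trim whitespace, lowercase, snake_case.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Drop rows where every value is missing.
    df = df.dropna(how="all")
    out = io.BytesIO()
    df.to_csv(out, index=False)
    return out.getvalue()


# Hypothetical sample standing in for an uploaded S3 object.
sample = b"User ID, Amount \n1,10.5\n2,20.0\n"
print(transform_csv(sample).decode())
```

Because each object is under 100 MB, a single pandas DataFrame comfortably holds the file in memory, which is why a Python shell job (billed at a fraction of a DPU) can be cheaper here than spinning up a Spark cluster.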

