The most resilient and scalable architecture for handling millions of high-resolution images with frequent updates is to store the binary images in Amazon S3 and keep their metadata and reference (geographic code and S3 URL) in Amazon DynamoDB.
From AWS Documentation:
“Store large objects like images in Amazon S3 and use Amazon DynamoDB to store metadata and references. This design pattern is scalable, highly available, and cost-effective.”
(Source: AWS Architecture Blog – Best Practices for Handling Large Objects)
Why B is correct:
Amazon S3 is designed for storing large volumes of binary data (images).
DynamoDB provides low-latency reads/writes using the geographic code as the partition key.
Both services are highly available, serverless, and auto-scaling, making them suitable for disaster scenarios with bursts of activity.
Reduces pressure on the database layer by separating metadata from image storage.
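The write path of this pattern can be sketched as follows. This is a minimal illustration assuming boto3 (the AWS SDK for Python); the bucket, table, and attribute names are hypothetical placeholders, not names from the question.

```python
def build_image_record(geo_code: str, image_id: str, bucket: str) -> dict:
    """Build the DynamoDB item that references an image stored in S3.

    The geographic code serves as the partition key; the item holds
    only metadata and the S3 location, never the binary image itself.
    """
    s3_key = f"images/{geo_code}/{image_id}.jpg"
    return {
        "GeoCode": geo_code,                 # partition key
        "ImageId": image_id,                 # sort key
        "S3Key": s3_key,
        "S3Url": f"s3://{bucket}/{s3_key}",
    }

def store_image(image_bytes: bytes, geo_code: str, image_id: str,
                bucket: str, table_name: str) -> dict:
    """Upload the binary image to S3, then write its reference to DynamoDB."""
    import boto3  # AWS SDK for Python; requires configured credentials

    record = build_image_record(geo_code, image_id, bucket)
    # The large binary object goes to S3...
    boto3.client("s3").put_object(
        Bucket=bucket, Key=record["S3Key"], Body=image_bytes
    )
    # ...and only the small reference item goes to DynamoDB.
    boto3.resource("dynamodb").Table(table_name).put_item(Item=record)
    return record
```

Reads follow the reverse path: query DynamoDB by geographic code for low-latency metadata lookups, then fetch the image bytes from S3 only when needed.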
Why the others are incorrect:
Options A and D: Storing images directly in Amazon RDS is expensive, scales poorly, and relational databases are not optimized for large binary objects.
Option C: DynamoDB is suitable for metadata, but storing the binary image data itself in DynamoDB is not a best practice because of the 400 KB per-item size limit and the resulting performance concerns.
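To make the 400 KB constraint concrete, a minimal size check might look like the following. The 400 KB figure is DynamoDB's documented per-item maximum; the metadata overhead value is an illustrative assumption, since attribute names and values also count toward the limit.

```python
DYNAMODB_MAX_ITEM_BYTES = 400 * 1024  # DynamoDB's hard per-item limit

def fits_in_dynamodb_item(payload: bytes, metadata_overhead: int = 1024) -> bool:
    """Return True only if the payload plus an assumed attribute-name
    overhead stays within the 400 KB item limit.

    A single high-resolution image (typically several MB) never fits,
    which is why the binary data belongs in S3.
    """
    return len(payload) + metadata_overhead <= DYNAMODB_MAX_ITEM_BYTES
```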
References:
AWS Architecture Blog – “Best Practices for Amazon S3 and DynamoDB Integration”
AWS Well-Architected Framework – Resilience Pillar
Amazon DynamoDB Developer Guide