Building a Production Image Pipeline with AWS S3
Upload, resize, optimize, and serve — the full lifecycle of user-uploaded images in I Love Hwarang's campaign system.

Campaign pages on I Love Hwarang are image-heavy. Each campaign has a hero image, gallery images, and inline content images. Users upload these through the admin dashboard, and the platform needs to resize, optimize, and serve them efficiently. AWS S3 is the backbone of this pipeline.
Format selection was one of our earliest architecture decisions. We standardized on accepting JPEG, PNG, and HEIC uploads — the latter being critical since iPhone users account for over 60% of our admin uploads. The Lambda function strips all EXIF metadata during processing, both to reduce file size and to protect donor privacy. GPS coordinates, device identifiers, and timestamps embedded in photos could inadvertently leak sensitive information about campaign organizers.
The upload flow starts in the browser. The admin dashboard uses a custom ImageUploader component that validates file type and size client-side before uploading. We generate a pre-signed S3 URL from our API route, which lets the browser upload directly to S3 without the file passing through our server. This reduces server load and upload latency.
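The post doesn't include the validation code; as a minimal sketch, this is the kind of check the ImageUploader might run before requesting a pre-signed URL (shown in Python for illustration, though the real component runs in the browser; the 20 MB size cap is an assumed value not stated in the post):

```python
# Sketch of the client-side checks run before requesting a pre-signed
# URL. The accepted formats come from the post; the size cap is a
# hypothetical value chosen for illustration.

ALLOWED_TYPES = {"image/jpeg", "image/png", "image/heic"}  # formats the platform accepts
MAX_BYTES = 20 * 1024 * 1024  # hypothetical 20 MB cap

def validate_upload(content_type: str, size_bytes: int) -> list[str]:
    """Return a list of validation errors; an empty list means the file is OK."""
    errors = []
    if content_type.lower() not in ALLOWED_TYPES:
        errors.append(f"unsupported type: {content_type}")
    if size_bytes > MAX_BYTES:
        errors.append(f"file too large: {size_bytes} bytes")
    if size_bytes == 0:
        errors.append("empty file")
    return errors
```

Rejecting bad files before the pre-signed URL is even requested keeps invalid uploads from consuming S3 requests at all.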
Upload error handling required more engineering than the happy path. Network interruptions mid-upload, expired pre-signed URLs, and file corruption all needed graceful recovery. We implemented a chunked upload strategy for files over 5MB, using S3's multipart upload API. Each chunk is uploaded independently and can be retried without re-uploading the entire file. The client tracks upload progress and resumes from the last successful chunk after a connection drop.
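The resume logic boils down to planning byte ranges and skipping parts that already succeeded. A sketch of that planning step (the function name and return shape are hypothetical; S3's multipart API requires every part except the last to be at least 5 MiB):

```python
# Split a file into 5 MiB parts for S3 multipart upload, with resume
# support: given the set of part numbers that already succeeded, return
# only the byte ranges still to upload.

CHUNK_SIZE = 5 * 1024 * 1024  # S3's minimum part size (except the final part)

def pending_parts(file_size: int, completed: set[int]) -> list[tuple[int, int, int]]:
    """Return (part_number, start, end) for parts not yet uploaded.
    Part numbers are 1-based, matching the S3 multipart API; `end` is exclusive."""
    parts = []
    part_number = 1
    for start in range(0, file_size, CHUNK_SIZE):
        end = min(start + CHUNK_SIZE, file_size)
        if part_number not in completed:
            parts.append((part_number, start, end))
        part_number += 1
    return parts
```

After a connection drop, the client re-runs this against its record of completed parts and retries only what remains.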
Once the image lands in S3, a Lambda function triggers on the ObjectCreated event. The function generates three variants: a 2048px-wide original for the campaign page, an 800px-wide version for list views and social sharing, and a 200px thumbnail for admin dashboards. All variants are converted to WebP for modern browsers with JPEG fallbacks.
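The sizing math behind those three variants can be sketched as a pure function (the variant names are hypothetical; the widths come from the post):

```python
# Compute target dimensions for each variant: resize to the named
# widths, preserve aspect ratio, and never upscale a source smaller
# than the target width.

VARIANT_WIDTHS = {"page": 2048, "card": 800, "thumb": 200}  # names are illustrative

def variant_dimensions(src_w: int, src_h: int) -> dict[str, tuple[int, int]]:
    out = {}
    for name, target_w in VARIANT_WIDTHS.items():
        w = min(target_w, src_w)       # don't upscale small sources
        h = round(src_h * w / src_w)   # preserve aspect ratio
        out[name] = (w, h)
    return out
```

In the Lambda itself these dimensions would feed an image library's resize call; the function above only captures the arithmetic.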
Automated image quality validation catches problematic uploads before they reach campaign pages. The Lambda function checks minimum resolution, aspect ratio compliance, and file integrity. We also integrated AWS Rekognition for content moderation — detecting inappropriate imagery that could damage the platform's credibility. Images flagged by Rekognition are quarantined and require manual admin review, with the campaign creator notified that their image is pending approval.
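A sketch of the resolution and aspect-ratio gate, assuming thresholds the post does not publish (the minimums, target ratio, and tolerance here are all hypothetical):

```python
# Hypothetical quality gate: reject images below a minimum resolution
# or too far from the target aspect ratio. All thresholds are assumed
# values for illustration.

MIN_WIDTH, MIN_HEIGHT = 1200, 800   # hypothetical minimums
TARGET_RATIO = 3 / 2                # hypothetical hero-image ratio
RATIO_TOLERANCE = 0.05              # allow 5% deviation from target

def passes_quality_gate(width: int, height: int) -> bool:
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return False
    ratio = width / height
    return abs(ratio - TARGET_RATIO) / TARGET_RATIO <= RATIO_TOLERANCE
```

Images failing this gate can be bounced back to the uploader with a specific reason, while Rekognition handles the content-moderation side separately.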
Serving uses CloudFront with aggressive caching. Campaign images rarely change after initial upload, so we set a one-year cache TTL with cache-busting query parameters for updates. CloudFront's edge locations mean donors in Seoul and donors in New York both get fast image loads from their nearest CDN node.
Cache invalidation is the classic hard problem, and we took a pragmatic approach. Rather than invalidating individual objects, each processed image gets a content-hash-based filename. When a campaign admin replaces an image, the new version gets a new filename, and the campaign record updates its image URL reference. The old image naturally expires from CloudFront's cache after the TTL, and S3 lifecycle rules clean up orphaned files after 30 days.
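The content-hash naming scheme is simple enough to sketch directly (the key prefix and digest length are illustrative choices, not taken from the post):

```python
# Content-addressed naming: derive the processed image's S3 key from a
# SHA-256 hash of its bytes, so a replaced image always gets a new URL
# and no CloudFront invalidation is ever needed.
import hashlib

def hashed_key(image_bytes: bytes, variant: str, ext: str = "webp") -> str:
    digest = hashlib.sha256(image_bytes).hexdigest()[:16]  # 16 hex chars: collision-safe enough here
    return f"campaigns/{variant}/{digest}.{ext}"
```

Because the key is a pure function of the bytes, identical uploads deduplicate for free, and the old object simply ages out under the 30-day lifecycle rule.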
Cost optimization was important for a nonprofit-focused platform. S3 Intelligent-Tiering automatically moves infrequently accessed images (old campaigns) to cheaper storage classes. Combined with CloudFront's caching reducing origin requests by 95%, the monthly S3 bill stays under $10 even with thousands of campaign images.
Pipeline monitoring uses CloudWatch dashboards that track three critical metrics: upload success rate, Lambda processing duration, and CloudFront cache hit ratio. We set alarms for processing failures — if more than 2% of uploads fail in any 15-minute window, the on-call engineer gets a PagerDuty alert. Most failures trace back to unsupported file formats or corrupt uploads, but the monitoring has caught Lambda timeout issues twice when image processing hit edge cases with unusually large PNG files.
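In production this threshold lives in a CloudWatch alarm; as a sketch, the alerting rule itself is just a ratio check (the function is illustrative, but the 2% threshold is the one stated above):

```python
# The paging rule described above: alert when more than 2% of uploads
# in a window fail. The real check is a CloudWatch alarm; this pure
# function only captures the threshold math.

FAILURE_THRESHOLD = 0.02  # 2%, per the post

def should_alert(failures: int, total_uploads: int) -> bool:
    if total_uploads == 0:
        return False  # no traffic in the window, nothing to page about
    return failures / total_uploads > FAILURE_THRESHOLD
```

The zero-traffic guard matters in practice: a quiet window should not divide by zero or page anyone.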
The security model uses signed URLs with expiration for uploads and public read access for optimized variants. Original uploads are never publicly accessible — only the processed, optimized versions are served through CloudFront. This prevents hotlinking of full-resolution images and ensures all served images are properly optimized.
Looking back, the biggest lesson was building for the upload experience, not just the technical pipeline. Campaign creators are often non-technical users who don't understand image optimization. We added real-time upload progress indicators, automatic image cropping suggestions based on where the image will appear, and preview rendering that shows exactly how the image will look on the live campaign page. These UX touches reduced support tickets related to image issues by over 80% and made the platform feel polished and professional.
Tags: AWS, S3, Next.js, Image Processing
Key Facts
- Category: Dev
- Reading time: 12 min
- Technology: AWS
- Technology: S3
- Technology: Next.js