79750395

Date: 2025-08-29 14:19:21
Score: 1
Natty:
Report link

When you’re talking about a 20 GB log file, you’ll definitely want to lean on S3’s multipart upload API. That’s exactly what it’s built for: breaking a large file into smaller parts (each between 5 MB and 5 GB, up to 10,000 parts per upload), uploading them in parallel, and having S3 stitch them back together server-side. If any part fails, you retry just that one part instead of the whole file.
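
If you’re working from Python, a minimal sketch using boto3’s managed transfer looks like this; the bucket, key, and file path are placeholders, and TransferConfig is where you tune part size and parallelism (boto3 handles the splitting, the parallel part uploads, and the per-part retries for you):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# 64 MB parts keep a 20 GB file well under the 10,000-part limit
# (20 GB / 64 MB ~= 320 parts).
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # size of each part
    max_concurrency=8,                     # parallel part uploads
    use_threads=True,
)

# Placeholder bucket/key/path -- substitute your own.
s3.upload_file(
    Filename="/var/log/app/huge.log",
    Bucket="my-log-bucket",
    Key="logs/2025-08-29/huge.log",
    Config=config,
)
```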

Since the consuming application doesn’t want to deal with pre-signed URLs and can’t drop the file into a shared location, one pattern I’ve used is to expose an API endpoint in front of your service that acts as a broker: the consumer streams the raw file to your endpoint, your service pipes the stream straight into a multipart upload, and the HTTP response tells the consumer whether the upload landed (see the sketch below).
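
A minimal sketch of that broker, assuming Flask on the web side (the route, bucket, and key layout are hypothetical); the point is that upload_fileobj reads the request body in chunks, so the 20 GB file is never buffered in memory:

```python
import boto3
from boto3.s3.transfer import TransferConfig
from flask import Flask, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")
config = TransferConfig(multipart_chunksize=64 * 1024 * 1024, max_concurrency=8)

# Hypothetical broker endpoint: the consumer PUTs the raw bytes,
# and we stream them straight into an S3 multipart upload.
@app.route("/uploads/<name>", methods=["PUT"])
def upload(name):
    key = f"logs/{name}"
    try:
        # request.stream is file-like, so upload_fileobj can read it in chunks.
        s3.upload_fileobj(request.stream, "my-log-bucket", key, Config=config)
    except Exception as exc:
        return jsonify({"status": "error", "key": key, "detail": str(exc)}), 502
    return jsonify({"status": "ok", "key": key}), 201
```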

That also gives you a central place to send back success/failure notifications to both sides once S3 confirms (or rejects) the upload.
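
For example, a small helper that posts the outcome to a callback URL the consumer registered earlier (the URL and payload shape here are assumptions, not anything S3 provides):

```python
import requests

def notify(callback_url: str, key: str, ok: bool, detail: str = "") -> None:
    # Hypothetical notification hook -- POST the result to the registered callback.
    payload = {"key": key, "status": "ok" if ok else "error", "detail": detail}
    try:
        requests.post(callback_url, json=payload, timeout=10)
    except requests.RequestException:
        # A failed notification shouldn't take down the upload path;
        # log it and let the caller poll as a fallback.
        pass
```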

The trade-off is that your API tier is now in the data path, so you’ll need to size it appropriately (20 GB uploads aren’t small), and you’ll want to handle timeouts, retries, and maybe some form of flow control. But functionally, this avoids pre-signed URLs, avoids shared locations, and still gives you control over how/when to notify both sides of the result.
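
On the retry/timeout side, botocore lets you configure that on the S3 client itself; adaptive retry mode also applies client-side rate limiting, which works as rough flow control toward S3 (the numbers below are illustrative, not recommendations):

```python
import boto3
from botocore.config import Config

# Retries and timeouts on the S3 leg of the broker.
s3 = boto3.client(
    "s3",
    config=Config(
        retries={"max_attempts": 10, "mode": "adaptive"},  # retry + rate limiting
        connect_timeout=10,  # seconds to establish a connection
        read_timeout=60,     # seconds to wait on a stalled read
    ),
)
```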

Reasons:
  • Blacklisted phrase (0.5): I need
  • Long answer (-1):
  • No code block (0.5):
  • Starts with a question (0.5): When you
  • Low reputation (0.5):
Posted by: Anil Tiwari