When you’re talking about a 20 GB log file, multipart upload isn’t really optional — a single PUT tops out at 5 GB — so you’ll want to lean on S3’s multipart upload API. That’s what it’s built for: breaking a large file into smaller chunks (up to 10,000 parts, each between 5 MB and 5 GB except the last), uploading them in parallel, and then having S3 stitch them back together on the backend. If any part fails, you just retry that one part instead of the whole file.
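If you just want the mechanics, boto3’s managed transfer drives the multipart API for you. A minimal sketch, assuming a local log file and a placeholder bucket/key; the thresholds and concurrency are arbitrary:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Managed transfer: splits the file into parts, uploads them in parallel,
# and retries individual parts on transient failures.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MB
    multipart_chunksize=128 * 1024 * 1024,  # 128 MB parts (max 10,000 parts per upload)
    max_concurrency=8,                      # parts uploaded in parallel
)

s3.upload_file(
    Filename="/var/log/app/huge.log",   # placeholder path
    Bucket="my-log-bucket",             # placeholder bucket
    Key="logs/huge.log",
    Config=config,
)
```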
Since the consuming application doesn’t want to deal with presigned URLs and can’t drop the file into a shared location, one pattern I’ve used is to expose an API endpoint in front of your service that acts as a broker (rough sketch after the list):
The app calls your API and says “I need to send logs.”
Your service kicks off a multipart upload against S3 using your AWS credentials (so the app never touches S3 directly).
The app streams the file (or pushes chunks through your API), and your service forwards them to S3 using the multipart upload ID.
Once all parts are in, your service finalizes the upload with S3.
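Here’s roughly what that broker flow looks like against the low-level multipart calls. This is a sketch only: the HTTP endpoint/framework wiring is left out, and the bucket, key, and `chunks` iterable are placeholders for however your API actually receives the data.

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-log-bucket", "logs/huge.log"   # placeholders

def broker_upload(chunks):
    """chunks: an iterable of byte strings streamed in from the caller.
    Every part except the last must be at least 5 MB."""
    upload = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)
    upload_id = upload["UploadId"]
    parts = []
    try:
        for part_number, chunk in enumerate(chunks, start=1):
            resp = s3.upload_part(
                Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                PartNumber=part_number, Body=chunk,
            )
            parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
        # S3 stitches the parts together when the upload is completed.
        s3.complete_multipart_upload(
            Bucket=BUCKET, Key=KEY, UploadId=upload_id,
            MultipartUpload={"Parts": parts},
        )
    except Exception:
        # Don't leave orphaned parts around accruing storage charges.
        s3.abort_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id)
        raise
```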
That gives you a central place to send back success/failure notifications (a quick example follows):
On successful completion, your service can push a message (SNS, SQS, webhook, whatever makes sense) to both your system and the caller.
On error, you can emit a corresponding failure event.
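For the notification itself, something like this works, using SNS as one of the options mentioned above; the topic ARN and message shape are placeholders, and SQS or a webhook would slot in the same way:

```python
import json
import boto3

sns = boto3.client("sns")
RESULT_TOPIC = "arn:aws:sns:us-east-1:123456789012:log-upload-results"  # placeholder

def notify(key, success, detail=""):
    # Publish one result event that both your system and the caller can subscribe to.
    sns.publish(
        TopicArn=RESULT_TOPIC,
        Subject=f"log upload {'succeeded' if success else 'failed'}",
        Message=json.dumps({"key": key, "success": success, "detail": detail}),
    )
```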
The trade-off is that your API tier is now in the data path, so you’ll need to size it appropriately (20 GB uploads aren’t small), and you’ll want to handle timeouts, retries, and maybe some form of flow control. But functionally, this avoids presigned URLs, avoids shared locations, and still gives you control over how/when to notify both sides of the result.
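On the retry point specifically, one simple approach is to wrap each `upload_part` call in exponential backoff so a transient failure only costs you that one part. A sketch, assuming the broker code above; the attempt count and delays are arbitrary:

```python
import time

def upload_part_with_retry(s3, bucket, key, upload_id, part_number, body,
                           attempts=3, base_delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            return s3.upload_part(
                Bucket=bucket, Key=key, UploadId=upload_id,
                PartNumber=part_number, Body=body,
            )
        except Exception:
            if attempt == attempts:
                raise  # give up after the last attempt; caller aborts the upload
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
```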