How I Handled Large File Uploads Using AWS S3 (Multipart Upload + Pre-signed URLs)

The Day My API Broke (And What S3 Taught Me About File Uploads)

It started like any normal day.

I had just finished building a feature where users could upload files—images, PDFs, even short videos. Nothing too crazy. I wired up an API endpoint, tested it with a few files, and everything worked perfectly.

I thought, “Yeah, this is done.”


🚨 Then Reality Hit

Within a few hours, things started breaking.

Users were complaining:

  • “Upload failed”

  • “It’s stuck at 90%”

  • “Why is this so slow?”

At first, I assumed it was a frontend issue. But no—the backend logs told a different story.

  • Requests were timing out

  • Memory usage was spiking

  • Some uploads were just… disappearing

And then I saw it:

413 Payload Too Large

That’s when it hit me—my API wasn’t built for this.


🧠 The Problem I Didn’t See Coming

Here’s what I had done:

User → API → Server → Storage

Seems logical, right?

But what I didn’t realize was:

  • Every file was going through my server

  • Large files were eating up memory

  • Multiple users = multiple heavy requests

My backend had unknowingly become a traffic jam.

My original reasoning went like this: a user uploads a file, the server receives it, and the server pushes it to S3 (since we were already on AWS). During testing and development, files topped out around 30–40 MB, and uploads ran one at a time, in series. I even enforced a 40 MB limit in the backend, because nothing in our app's features gave users a reason to upload anything larger.

🔍 The Turning Point

After digging through forums and documentation, I came across a concept that completely changed how I think about uploads:

“Why is your backend even touching the file?”

That question stuck with me.

And that’s when I discovered a better way.


⚡ Enter S3 (The Game Changer)

Instead of acting as a middleman, my backend could simply step aside.

New flow:

User → S3 (direct upload)
Backend → just gives permission

Wait… what?

Yes.

Instead of uploading files to my API, I started generating something called a pre-signed URL.


🔑 The Magic of Pre-Signed URLs

Here’s how it works (in simple terms):

  1. User asks backend: “Can I upload a file?”

  2. Backend says: “Sure, here’s a special URL (valid for a few minutes)”

  3. User uploads file directly to S3 using that URL

  4. Backend only stores the file link

No heavy lifting. No bottleneck.

Just clean, scalable architecture.

🛠️ Implementation Guide (Step-by-Step)


1️⃣ Backend: Generate Pre-Signed URL (Django)

Install dependency:

pip install boto3

Code:

import boto3
import uuid
from django.http import JsonResponse

# Credentials shown inline for clarity; in production, load them from
# environment variables or Django settings instead of hardcoding them.
s3 = boto3.client(
    's3',
    aws_access_key_id='YOUR_KEY',
    aws_secret_access_key='YOUR_SECRET',
    region_name='ap-south-1'
)

BUCKET_NAME = "your-bucket-name"

def get_upload_url(request):
    # A random key prevents collisions and accidental overwrites
    file_name = str(uuid.uuid4())
    file_type = request.GET.get("file_type")

    presigned_url = s3.generate_presigned_url(
        'put_object',
        Params={
            'Bucket': BUCKET_NAME,
            'Key': file_name,
            'ContentType': file_type
        },
        ExpiresIn=300  # URL is valid for 5 minutes
    )

    return JsonResponse({
        "upload_url": presigned_url,
        "file_url": f"https://{BUCKET_NAME}.s3.amazonaws.com/{file_name}"
    })

2️⃣ Frontend: Upload Directly to Storage

JavaScript Example:

async function uploadFile(file) {
  const res = await fetch(`/api/get-upload-url?file_type=${file.type}`);
  const data = await res.json();

  await fetch(data.upload_url, {
    method: "PUT",
    headers: {
      "Content-Type": file.type,
    },
    body: file,
  });

  console.log("Uploaded:", data.file_url);
}

😌 What Changed After That

Honestly… everything.

  • Uploads became faster

  • No more timeouts

  • Server load dropped massively

  • Users stopped complaining

And the best part?

My backend was finally doing what it should—handling logic, not carrying files around like a courier.


📦 But Wait… How Big Can Files Be?

This was my next question.

Here’s what I learned:

  • A single PUT request can upload an object of up to 5 GB

  • For anything larger—and AWS recommends this for anything over ~100 MB—use multipart upload

That’s when I realized—this system is built for scale.
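Since multipart upload is half of this post's title, here's a rough sketch of how it works on the backend. This assumes `s3` is an already-configured `boto3.client('s3')` like the one above; the bucket name, key, and 8 MB part size are placeholders you'd tune for your app. Note that S3 requires every part except the last to be at least 5 MB.

```python
import os

PART_SIZE = 8 * 1024 * 1024  # 8 MB per part (S3 minimum is 5 MB, except the last part)

def plan_parts(file_size, part_size=PART_SIZE):
    """Split a file size into (part_number, offset, length) tuples."""
    parts = []
    offset = 0
    part_number = 1
    while offset < file_size:
        length = min(part_size, file_size - offset)
        parts.append((part_number, offset, length))
        offset += length
        part_number += 1
    return parts

def multipart_upload(s3, bucket, key, path):
    """Initiate a multipart upload, send each part, then complete it."""
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    upload_id = upload["UploadId"]
    etags = []
    try:
        with open(path, "rb") as f:
            for part_number, offset, length in plan_parts(os.path.getsize(path)):
                f.seek(offset)
                resp = s3.upload_part(
                    Bucket=bucket, Key=key, UploadId=upload_id,
                    PartNumber=part_number, Body=f.read(length),
                )
                # S3 needs each part's ETag to stitch the object together
                etags.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        s3.complete_multipart_upload(
            Bucket=bucket, Key=key, UploadId=upload_id,
            MultipartUpload={"Parts": etags},
        )
    except Exception:
        # Abandoned parts keep costing storage until you abort the upload
        s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
        raise
```

You can also hand out one pre-signed URL per part (using `upload_part` as the client method), so the browser uploads each chunk directly to S3 just like the single-file flow above.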


🧩 A Few Lessons I Learned the Hard Way

If you’re building something similar, here’s what I wish I knew earlier:

1. Don’t Trust the Client

Always validate:

  • File type

  • File size

Even if uploads go directly to storage.
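As a sketch of what that validation can look like, here's a check the backend can run before it ever issues a pre-signed URL. The allowlist and the 40 MB cap are hypothetical values for illustration; tune them to your app.

```python
# Hypothetical allowlist and size cap for illustration
ALLOWED_TYPES = {"image/jpeg", "image/png", "application/pdf", "video/mp4"}
MAX_SIZE = 40 * 1024 * 1024  # 40 MB

def validate_upload_request(file_type, file_size):
    """Check client-declared metadata before issuing a pre-signed URL.

    Returns (ok, reason). If ok is False, refuse to generate the URL.
    """
    if file_type not in ALLOWED_TYPES:
        return False, f"type {file_type!r} not allowed"
    if file_size <= 0 or file_size > MAX_SIZE:
        return False, "size out of range"
    return True, "ok"
```

Keep in mind the client can still lie about the size after getting the URL, so if you need S3 itself to enforce the limit, boto3's `generate_presigned_post` lets you attach a `content-length-range` condition that S3 checks at upload time.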


2. Use Expiring URLs

Pre-signed URLs should expire quickly (5–15 mins).
Otherwise, anyone can misuse them.


3. Use Multipart Upload for Large Files

Anything above ~100MB?
Don’t risk it—split it into parts.


4. Your Backend Should Stay Lightweight

If your API is handling file uploads directly, you're setting yourself up for scaling issues.


🎯 Final Thought

That one bug—those failed uploads—completely changed how I design systems.

Now, whenever I think about file uploads, I ask myself:

“Does my backend really need to be involved?”

Most of the time, the answer is no.

And that simple shift?
It makes all the difference.

One last tip: if you define a limit or size anywhere in your system, don't treat it as a fixed constant. Design it so it can handle 1,000× the value you expect, or more.
