Moving 80 TB from one S3 provider to another. Sounds like a weekend job with rclone. In practice, it often takes two months. Not because of the data volume—because of the path.
The triangle nobody sees
S3 migration has three points, not two. Data sits at the old provider. Target is the new one. But where does the migration tool run? On your laptop? On a VM at Hetzner? At AWS?
That's the problem. The data flow isn't an arrow from A to B. It's a triangle:
Old Provider → Migration Host → New Provider
This detour costs. In time and in money.
The expensive detour
Egress fees apply when data leaves a datacenter. With migration via a third host, you pay twice: once from the old provider to your server, and again from your server to the destination—if that server also sits at a cloud provider.
With 80 TB and the usual $90/TB egress at AWS or Azure, that's $7,200 just to get the data to your migration server. If that server sits at another cloud provider, a similar bill follows for the second hop.
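The double-egress penalty is easy to quantify. A minimal sketch using the figures from the example above; the $90/TB rate is illustrative, and real second-hop pricing depends on where the migration host sits:

```python
def egress_cost(tb, rate_per_tb):
    """Egress fee for moving `tb` terabytes out of a provider."""
    return tb * rate_per_tb

# 80 TB at $90/TB, as in the example.
first_hop = egress_cost(80, 90)   # old provider -> migration host
second_hop = egress_cost(80, 90)  # migration host -> destination (if also in a cloud)

print(first_hop)                # 7200
print(first_hop + second_hop)   # 14400 when you pay twice
```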
ZERO-Z3 charges €5/TB egress and nothing for ingress. But the path to us can still get expensive if it goes through the wrong intermediate stops.
Small files, big problem
With large files, bandwidth dominates. 1 GB at 1 Gbps is 8 seconds in theory—in practice closer to 12–20 seconds due to TCP slow start, protocol overhead, and shared links. Still manageable.
Small files are a different beast. A 10 KB file transfers in milliseconds. But before a single byte flows, the protocol has to do its work: DNS lookup, TCP handshake, TLS handshake, HTTP request. Per object: 2–4 roundtrips. At 50ms latency between migration host and target, that's 100–200ms—just for talking, before the actual transfer begins.
A real-world example: 80 TB in 3.2 billion objects. Average 25 KB, many much smaller. Backup chunks, metadata, logs. A large-scale S3 migration.
The maths at 50ms latency and 3 roundtrips per object:
3.2B × 150ms = 480M seconds = 15 years
With 1,000 parallel streams:
15 years / 1,000 = 5.5 days
That's the optimistic case: perfect parallelisation, no bottlenecks, no failures. In practice, we see 50–100 GB per hour for such migrations. At 80 TB, that's 33 to 66 days.
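The back-of-envelope numbers above can be reproduced in a few lines. The parameters are the assumptions from the example (50ms latency, 3 roundtrips, 1,000 streams), not measurements:

```python
objects = 3_200_000_000   # total object count from the example
roundtrips = 3            # protocol roundtrips per object (DNS/TCP/TLS/HTTP)
latency_ms = 50           # migration host <-> target latency
streams = 1_000           # parallel transfer streams

# Total protocol wait if objects were handled one after another.
serial_s = objects * roundtrips * latency_ms / 1000
print(serial_s / (365 * 24 * 3600))   # ~15 years, serially

# Perfect parallelisation across all streams.
print(serial_s / streams / 86400)     # ~5.5 days

# Throughput actually observed for such migrations: 50-100 GB/hour.
total_gb = 80_000
for gb_per_hour in (50, 100):
    print(int(total_gb / gb_per_hour / 24))  # 66 and 33 days
```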
rclone isn't the problem
For S3 migrations, rclone is the first choice. With many small files, it beats the MinIO client and the AWS CLI by 6–9x. The right flags make a difference:
rclone copy source:bucket dest:bucket \
--transfers 32 \
--checkers 16 \
--fast-list \
--s3-upload-concurrency 4 \
--s3-chunk-size 64M
But even the best tool can't break physics. A fast tool on a poorly-connected host stays slow. The migration host's location decides, not the software.
Self-service migration and its limits
Some providers have built-in migration tools. Cloudflare R2 calls theirs "Super Slurper", Backblaze B2 offers migration support, AWS has DataSync. These tools pull directly from the source bucket—no triangle, no double egress.
Sounds good. The catch: file size limits, limited parallelism, and nobody watching when something goes wrong. R2 caps at 1 TB per object and only runs 3 migrations at once. For a few terabytes of normal-sized files, these work fine.
For 80 TB spread across billions of objects, they don't. Big migrations need someone watching. Retrying failed objects, handling rate limits, tuning chunk sizes for the actual file distribution. That's not a self-service job.
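Much of that "someone watching" is automated retry logic. A minimal sketch of the kind of backoff wrapper that runs around each object copy; `RetryableError` and `flaky_copy` are placeholders for a real S3 client's throttling errors, not an actual API:

```python
import random
import time

class RetryableError(Exception):
    """Stand-in for transient failures (e.g. S3 SlowDown, HTTP 503)."""

def with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Run fn(); on a retryable failure, wait exponentially longer and retry.

    Jitter spreads retries out so thousands of parallel streams do not
    hit the rate limiter in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except RetryableError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Hypothetical usage: a copy that fails twice, then succeeds.
failures = iter([True, True, False])
def flaky_copy():
    if next(failures):
        raise RetryableError("SlowDown")
    return "copied"

print(with_backoff(flaky_copy, base_delay=0.01))  # copied
```

With billions of objects, even a 0.01% transient-failure rate means hundreds of thousands of retries, which is why this loop has to be automatic rather than manual.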
How we solve it
The simplest solution: put the migration host where the target is. When rclone runs in the same datacenter as ZERO-Z3, the target-side half of the latency disappears. No egress for the second hop. Sub-millisecond roundtrips instead of 50ms.
We offer two variants.
You do it yourself, but from our network. We provision a VM in our datacenter. 10–25 Gbps connectivity, direct access to ZERO-Z3, no additional egress. You still pay egress from the old provider—there's no way around that, the data has to leave somehow. But you only pay once.
We do it for you. You give us read-only credentials for the source bucket. We analyse the data structure, find the optimal settings, run the migration, and validate that everything arrived. You can watch the progress. We deal with the problems—and with billions of objects, there are always some.
Costs
Self-migration:
| Item | Cost |
|---|---|
| VM (8 vCPU, 16 GB RAM, 25 Gbps) | ~€50/week |
| ZERO-Z3 ingress | €0 |
| ZERO-Z3 storage | pay as you go |
Managed migration:
| Volume | Price |
|---|---|
| Up to 10 TB | €250 flat |
| Above 10 TB | €25/TB |
| Over 100 TB | on request |
10 TB costs €250. 50 TB costs €1,250. The flat rate covers setup overhead—analysis, configuration, validation. Beyond that, it scales linearly.
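The tiers reduce to a few lines. A sketch of the managed-migration price as stated above; volumes over 100 TB are quoted individually, so the function returns None for those:

```python
def managed_migration_price(tb):
    """Managed migration price in EUR; None means 'on request'."""
    if tb > 100:
        return None   # over 100 TB: priced on request
    if tb <= 10:
        return 250    # flat rate up to 10 TB
    return tb * 25    # above 10 TB: EUR 25 per TB

print(managed_migration_price(10))  # 250
print(managed_migration_price(50))  # 1250
```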
The biggest cost remains egress from the old provider. We can't change that. But we prevent you paying twice.
Who this is for
More than a terabyte. Millions of small files—backup chunks, logs, metadata. Anyone who wants it done in days, not months. Anyone who doesn't fancy spending a week tuning rclone flags only to discover the migration host is in the wrong place.
Drop us an email: how much data, roughly how many objects, where the data currently sits. We'll tell you what to expect.