Object Storage

S3 Client Performance: rclone vs MinIO vs AWS CLI vs Cyberduck

Which tool is fastest for your workload? We tested all four.

You've chosen your S3-compatible storage. Now you need a client. The four most common options: rclone, MinIO client (mc), AWS CLI, and Cyberduck (duck). Each has its fans. But which one actually performs best?

We ran benchmarks against ZERO-Z3 (our NVMe-backed Ceph storage in Frankfurt) to find out.

Test Setup

All tests were run from a VM in our Frankfurt datacenter:

  • Client: 8 vCPU, 16 GB RAM, NVMe local storage
  • Network: 25 Gbps to ZERO-Z3 endpoint
  • Storage: ZERO-Z3 Standard (Ceph RGW, erasure coded)
  • Versions: rclone 1.71.0, mc RELEASE.2025-08-13T08-35-41Z, AWS CLI 1.29.62, duck 9.3.1

We tested four operations that matter in practice:

  1. Large file upload: 10 GB streamed from /dev/zero
  2. Many small files: Linux kernel 6.12 source tree (86,618 files, 1.6 GB)
  3. Listing: List all 86,618 objects
  4. Deletion: Delete all uploaded files

The Results

Large File Upload (10 GB)

Client           Time     Throughput
mc pipe          26.7s    374 MB/s
rclone rcat      35.7s    280 MB/s
aws s3 cp        38.0s    263 MB/s
duck --upload    44.1s    232 MB/s

MinIO client wins for large files - about 33% faster than rclone for single large uploads. Duck comes in last at 232 MB/s.

Linux Kernel Source (86,618 files, 1.6 GB)

Upload

Client                     Time      Files/sec
rclone (--transfers 32)    39.4s     2,198
duck (--parallel 8)        119.0s    728
mc cp --recursive          260.5s    332
aws s3 cp --recursive      349.7s    248

rclone is 6.6x faster than mc, 8.9x faster than AWS CLI. Duck lands in second place - 3x slower than rclone but faster than mc and AWS CLI.

Note on parallelism: we also tested explicit tuning of mc's --max-workers (default: autodetect) and AWS CLI's max_concurrent_requests (default: 10). Tuning didn't improve mc's performance.
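
For reference, this is roughly how those settings are adjusted; the values below are illustrative, and the aws configure command writes into ~/.aws/config for the zero-z3 profile:

# AWS CLI: raise S3 concurrency for the zero-z3 profile (default is 10)
aws configure set s3.max_concurrent_requests 32 --profile zero-z3

# rclone: parallelism is set per invocation
rclone copy linux-6.12/ zero-z3:mybucket/kernel/ --transfers 32 --checkers 16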

List

Client                  Time
rclone ls               1.9s
mc ls --recursive       7.9s
aws s3 ls --recursive   11.9s
duck --list             N/A (no recursive listing)

rclone is 4x faster than mc, 6x faster than AWS CLI for listing 86,618 objects. Duck only lists immediate directory contents - no recursive option available.

Delete

Client                      Time
mc rm --recursive --force   19.6s
rclone purge                33.5s
aws s3 rm --recursive       305.9s
duck --delete               445.7s

mc takes the lead for deletion - nearly 2x faster than rclone. AWS CLI and duck are significantly slower, taking 5+ and 7+ minutes respectively.

Feature Comparison

Performance is only part of the picture. Management capabilities vary between clients.

The comparison covered:

  • File transfer
  • Bucket management
  • IAM user/policy management (mc: MinIO server only)
  • Bucket policies
  • Lifecycle rules
  • Versioning control
  • Replication setup (mc: MinIO server only)
  • Server-side encryption
  • Client-side encryption (duck: via Cryptomator)
  • Bandwidth limiting
  • Sync/mirror

Note: MinIO client's admin commands (mc admin) only work with MinIO server. However, mc ilm (lifecycle) and mc anonymous (bucket policies) work with any S3-compatible storage including AWS S3 and Ceph RGW. For IAM user management on non-MinIO storage, AWS CLI is required.
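
As an illustration, lifecycle rules and anonymous-download policies can be applied to any S3-compatible bucket with mc; the bucket name and the 30-day expiry below are placeholders:

# Lifecycle: expire objects in the bucket after 30 days
mc ilm rule add zero-z3/mybucket --expire-days 30

# Bucket policy: allow anonymous downloads
mc anonymous set download zero-z3/mybucket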

Which Client to Choose?

Use rclone if:

  • You're moving many small files
  • You need bandwidth limiting or scheduling (see the example after this list)
  • You want one tool for multiple cloud backends
  • You need encryption or compression in transit
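
A minimal sketch of the bandwidth-limiting and encryption points above; the limits, schedule, remote name, and passphrase are placeholders:

# Cap transfers at 50 MB/s, or throttle only during office hours
rclone copy linux-6.12/ zero-z3:mybucket/kernel/ --transfers 32 --bwlimit 50M
rclone copy linux-6.12/ zero-z3:mybucket/kernel/ --bwlimit "08:00,10M 18:00,off"

# Wrap a prefix in a crypt remote for client-side encryption
rclone config create zero-z3-crypt crypt \
  remote=zero-z3:mybucket/encrypted \
  password=YOUR_PASSPHRASE
rclone copy sensitive-data/ zero-z3-crypt: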

Use MinIO client if:

  • You're already in the MinIO ecosystem
  • You need fast change detection for large buckets (see the mirror example below)
  • You prefer a simpler command syntax
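
For example, mc mirror can keep a prefix in sync and watch for new changes; the paths are illustrative:

# One-shot sync, then keep watching the local directory for changes
mc mirror --watch ./data/ zero-z3/mybucket/data/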

Use AWS CLI if:

  • You're scripting with other AWS services (see the s3api example below)
  • You need IAM policy compatibility
  • Your team already knows it
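
As a small scripting example against the same endpoint using the low-level s3api commands (bucket name is illustrative):

# Enable object versioning on a ZERO-Z3 bucket via the standard S3 API
aws s3api put-bucket-versioning --bucket mybucket \
  --versioning-configuration Status=Enabled \
  --profile zero-z3 --endpoint-url https://fra.s3.zeroservices.eu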

Cyberduck CLI (duck):

  • Best for occasional manual transfers
  • Great GUI companion - same tool for desktop and terminal
  • Not optimised for bulk operations (no recursive listing, slower deletes)

Cyberduck is an excellent GUI client. The CLI is useful for quick ad-hoc transfers, but for automation, stick with rclone or mc.

Benchmark Commands

For reproducibility, here are the exact commands we used.

Setup

# Download Linux kernel source (86,618 files, 1.6 GB)
wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.12.tar.xz
tar xf linux-6.12.tar.xz
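
# Optional sanity check: count the extracted files (should be roughly 86,618)
find linux-6.12 -type f | wc -l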

Configuration

# rclone
rclone config create zero-z3 s3 \
  provider=Ceph \
  endpoint=https://fra.s3.zeroservices.eu \
  access_key_id=YOUR_ACCESS_KEY \
  secret_access_key=YOUR_SECRET_KEY \
  acl=private

# MinIO client
mc alias set zero-z3 https://fra.s3.zeroservices.eu YOUR_ACCESS_KEY YOUR_SECRET_KEY

# AWS CLI (~/.aws/credentials)
[zero-z3]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

# AWS CLI (~/.aws/config)
[profile zero-z3]
region = zero-fra2
s3 =
    signature_version = s3v4

# Cyberduck CLI (no config file - credentials passed via flags)
duck --username YOUR_ACCESS_KEY --password 'YOUR_SECRET_KEY' \
  --list "s3://fra.s3.zeroservices.eu/mybucket/"

Large File Upload (10 GB)

# rclone (stdin via rcat)
time dd if=/dev/zero bs=1M count=10240 | rclone rcat zero-z3:mybucket/testfile_10gb.img

# MinIO client (stdin via pipe)
time dd if=/dev/zero bs=1M count=10240 | mc pipe zero-z3/mybucket/testfile_10gb.img

# AWS CLI
time dd if=/dev/zero bs=1M count=10240 | aws s3 cp - s3://mybucket/testfile_10gb.img \
  --profile zero-z3 --endpoint-url https://fra.s3.zeroservices.eu --quiet

# Cyberduck CLI - doesn't support stdin, requires file on disk
# dd if=/dev/zero of=testfile_10gb.img bs=1M count=10240
# time duck --username YOUR_ACCESS_KEY --password 'YOUR_SECRET_KEY' \
#   --upload "s3://fra.s3.zeroservices.eu/mybucket/" testfile_10gb.img

Many Small Files (Linux Kernel)

# rclone (32 parallel transfers)
time rclone copy linux-6.12/ zero-z3:mybucket/kernel/ --transfers 32 --checkers 16

# MinIO client
time mc cp --recursive linux-6.12/ zero-z3/mybucket/kernel/ --quiet

# AWS CLI
time aws s3 cp --recursive linux-6.12/ s3://mybucket/kernel/ \
  --profile zero-z3 --endpoint-url https://fra.s3.zeroservices.eu --quiet

# Cyberduck CLI
time duck --username YOUR_ACCESS_KEY --password 'YOUR_SECRET_KEY' \
  --upload "s3://fra.s3.zeroservices.eu/mybucket/kernel/" ./linux-6.12/ \
  --existing overwrite --parallel 8

List

# rclone
time rclone ls zero-z3:mybucket/kernel/ | wc -l

# MinIO client
time mc ls --recursive zero-z3/mybucket/kernel/ | wc -l

# AWS CLI
time aws s3 ls --recursive s3://mybucket/kernel/ --profile zero-z3 \
  --endpoint-url https://fra.s3.zeroservices.eu | wc -l

# Cyberduck CLI - no recursive listing support
# duck --list only shows immediate directory contents

Delete

# rclone
time rclone purge zero-z3:mybucket/kernel/

# MinIO client
time mc rm --recursive --force zero-z3/mybucket/kernel/

# AWS CLI
time aws s3 rm --recursive s3://mybucket/kernel/ --profile zero-z3 \
  --endpoint-url https://fra.s3.zeroservices.eu --quiet

# Cyberduck CLI
time duck --username YOUR_ACCESS_KEY --password 'YOUR_SECRET_KEY' \
  --delete "s3://fra.s3.zeroservices.eu/mybucket/kernel/"

The Bottom Line

There's no single winner. It depends on your workload:

  • Many small files: rclone (3-9x faster than alternatives)
  • Large files: mc (best single-stream throughput)
  • Bulk deletion: mc is fastest; duck and AWS CLI take significantly longer
  • Listing: rclone is 4-6x faster; duck doesn't support recursive listing

For mixed workloads, rclone is the safest choice. It handles many-files workloads well and remains competitive on large files.

Cyberduck shines as a GUI client. The CLI works well for occasional transfers, but for automation, rclone or mc are better suited.

For a broader comparison of S3-compatible storage providers, check out s3compare.io.

About ZERO-Z3

The client is only half the equation. Storage backend matters more. A fast client hitting a slow storage endpoint won't help you.

ZERO-Z3 is our S3-compatible object storage, running on Ceph with NVMe-backed OSDs in Frankfurt. No per-request fees, no vendor lock-in.

  • Location: Frankfurt, Germany (GDPR-compliant)
  • Backend: Ceph RGW, erasure coded, NVMe storage
  • Network: Own AS215197, direct peering with hyperscalers and hosting providers at DE-CIX
  • Pricing: STANDARD_IA from €7.50/TB, STANDARD (NVMe) from €60/TB, €5/TB egress