Where Bare Metal Makes Sense

The cloud vs. bare metal debate usually gets framed as old vs. new. That framing is wrong.

Bare metal isn't legacy infrastructure. It's the right choice for specific workloads—and the wrong choice for others. The difference comes down to physics and economics.

The Storage Latency Hierarchy

Not all storage is equal. Here's reality:

Storage Type      Typical Latency   Notes
Local NVMe        0.01–0.1ms        Direct PCIe, no network stack
Local SATA SSD    0.1–0.5ms         SATA controller overhead
Local HDD         10–15ms           Mechanical seek time
FC SAN            0.2–1ms           Dedicated fabric, SCSI overhead
iSCSI             0.5–2ms           TCP/IP + SCSI overhead
NFS               1–5ms             File protocol overhead
Ceph (HDD)        5–20ms            Distributed consensus + replication
Ceph (NVMe)       0.5–3ms           Still ~10× slower than local NVMe

Sources: Tom's Hardware, Red Hat Ceph Documentation, TechTarget

Local storage beats networked storage. Always. The physics is simple: fewer hops, less protocol overhead, lower latency.

The question isn't whether local is faster. It's whether you need that speed—and whether you can accept the trade-offs.
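You can verify the hierarchy above on your own hardware. A minimal sketch: time small synchronous writes (write + fsync) against different mount points — a local NVMe filesystem, an NFS mount, a Ceph RBD volume — and compare the medians. This is an indicative probe, not a substitute for a proper benchmark tool like fio; the 4 KiB write size and iteration count are illustrative choices.

```python
import os
import statistics
import tempfile
import time

def probe_sync_write_latency(directory: str, iterations: int = 100) -> float:
    """Return the median latency in milliseconds of a 4 KiB write + fsync."""
    payload = os.urandom(4096)
    samples = []
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        for _ in range(iterations):
            start = time.perf_counter()
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force the write down to stable storage
            samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

if __name__ == "__main__":
    # Point this at different mount points to compare storage backends.
    print(f"median sync-write latency: {probe_sync_write_latency('.'):.3f} ms")
```

Run it once per mount point and the ordering in the table tends to reproduce itself, even if the absolute numbers vary with hardware and load.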

When Bare Metal Wins

1. Large Storage Volumes at Lower Cost

Networked storage has overhead: replication (Ceph typically 3×), controller redundancy, network infrastructure. You pay for that overhead in cost per TB.

Local storage on bare metal skips all of it. 12× 20TB drives in a server give you 240TB raw capacity. RAID protects against disk failures. No SAN licensing.

Think video production, backup repositories (Veeam, restic, Borg), data lakes, and archive storage with local access needs.
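The cost gap is easy to put numbers on. A back-of-the-envelope sketch: the same 240TB of raw disk yields very different usable capacity under local RAID 6 versus 3× replication, which directly drives cost per usable TB. The euro figures below are illustrative assumptions, not quotes — plug in your own hardware and cluster pricing.

```python
def usable_tb_raid6(drives: int, drive_tb: float) -> float:
    """RAID 6 loses two drives' worth of capacity to parity."""
    return (drives - 2) * drive_tb

def cost_per_usable_tb(total_cost: float, usable_tb: float) -> float:
    return total_cost / usable_tb

# Bare metal: 12x 20 TB drives (240 TB raw) in RAID 6 -> 200 TB usable.
bare_usable = usable_tb_raid6(drives=12, drive_tb=20)
bare_cost = cost_per_usable_tb(12_000, bare_usable)   # assumed server price

# Distributed: 3x replication needs 3 TB raw per usable TB,
# before controller and network overhead.
repl_usable = 240 / 3                                 # 80 TB usable from the same raw
repl_cost = cost_per_usable_tb(12_000 * 1.5, repl_usable)  # assumed cluster premium

print(f"RAID 6:   {bare_usable:.0f} TB usable at {bare_cost:.0f} EUR/TB")
print(f"3x repl:  {repl_usable:.0f} TB usable at {repl_cost:.0f} EUR/TB")
```

Even with generous assumptions for the distributed cluster, replication alone triples the raw capacity you have to buy per usable TB.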

2. Ultra-Low Latency Requirements

Local NVMe delivers sub-millisecond latency. Nothing networked comes close—not FC SAN, not NVMe-oF, not cloud block storage.

When microseconds matter, bare metal with NVMe is the only option. High-throughput databases (PostgreSQL, MySQL, Oracle), in-memory databases with disk spillover (Redis, KeyDB), real-time analytics (ClickHouse, TimescaleDB), message queues like Kafka and Redpanda.

3. High Memory + Local Storage

Some workloads need both: terabytes of RAM and fast local storage. VMs on shared infrastructure can't deliver this—memory contention, noisy neighbors, hypervisor overhead.

Bare metal with 2–4TB RAM and NVMe storage removes those constraints. Typical use cases: large in-memory databases, graph databases (Neo4j, TigerGraph), genome sequencing, financial risk modeling.

When Cloud/Virtualization Wins

Bare metal isn't always the answer. For many workloads, managed infrastructure is better.

1. Standard Compute

Web applications, APIs, microservices—these don't need dedicated hardware. They need reliability, scaling, and someone else handling redundancy.

Managed VMs or Kubernetes clusters handle this well. You deploy code. Infrastructure failures are someone else's problem.

2. GPU Access Without Hardware Management

For AI inference, rendering, or ML experimentation, GPU instances in a managed environment often make more sense. You get GPU access without managing drivers, CUDA versions, or RMA processes.

3. When You Don't Want to Manage Hardware

Bare metal means you handle:

  • Disk failures and coordinating replacements
  • RAID rebuilds
  • Firmware updates
  • Capacity planning

With managed infrastructure, those are the provider's problems.

4. When Availability Beats Performance

Distributed storage replicates data across nodes. Disk fails? System keeps running. Node fails? Data is still there.

Local storage on bare metal? Disk fails, you restore from backup. Server fails, you restore from backup. If you have a backup.

The Trade-Off: Redundancy

This is the core decision:

                 Bare Metal                Distributed Storage
Latency          Lowest                    Higher
Cost per TB      Lower                     Higher
Redundancy       Your responsibility       Built-in
Availability     Single point of failure   Multi-node failover
Complexity       Lower (single system)     Higher (distributed)

Distributed systems provide:

  • Ceph: 3× replication across nodes
  • NetApp: Multiple controllers, automatic failover

Bare metal provides:

  • Local RAID (disk failure protection only)
  • Your backup strategy
  • Your recovery procedures

Bottom Line

If your workload tolerates a restore window, bare metal saves money. If it doesn't, you need distributed storage or a more complex HA architecture.

Decision Framework

Bare Metal Makes Sense When:

  • Storage volume exceeds 100TB
  • Latency requirements are sub-millisecond
  • Memory requirements exceed 1TB
  • You have solid backup and recovery processes
  • Cost per TB is a primary driver

Managed Cloud Makes Sense When:

  • Workload is standard compute (web, API, containers)
  • Built-in redundancy matters more than raw performance
  • Scaling flexibility is important
  • You'd rather pay more per TB than manage hardware

The Real Answer: Hybrid

Most production environments don't fit neatly into "bare metal" or "cloud." The realistic answer is hybrid—using each approach where it makes sense.

Your PostgreSQL cluster handling 50,000 transactions per second needs local NVMe. Your web frontends serving those queries don't. Running both on the same infrastructure type is leaving money on the table or accepting unnecessary latency.

A Typical Hybrid Stack

Here's what we see working well for Mittelstand companies with serious workloads:

  • Dedicated firewall — OPNsense or Cisco ASAv on hardware you control. No shared security appliances, no multi-tenant routing rules.
  • Dedicated load balancers — HAProxy on bare metal. Predictable latency, no cloud provider rate limits on connections.
  • Bare metal databases — PostgreSQL, MySQL, or your OLAP system on NVMe. Sub-millisecond queries. No noisy neighbors during peak load.
  • Kubernetes clusters — Your applications and web frontends on managed clusters. Scale horizontally. Let someone else handle node failures.

All connected through your private network. VLANs, private interconnects, no traffic hairpinning through public internet.

Storage: Pick Your Trade-off

Backup and archive storage follows the same logic. Two options, each with clear trade-offs:

Bare metal storage servers for backup repositories if you don't need infinite scale and don't mind managing hardware. A 12-bay server with 20TB drives gives you 240TB raw capacity. Veeam, restic, Borg—all work well. You handle disk replacements and capacity planning, but cost per TB is unbeatable.

S3-compatible object storage (ZERO-Z3) if you want someone else to handle availability. Replicated across nodes, accessible via standard S3 API. Higher cost per TB, but no hardware management and built-in redundancy.

Many setups use both: bare metal for primary backup (fast restores), S3 for offsite copies (geographic redundancy).

Why This Works

The database needs consistent I/O performance. That's physics—local NVMe wins. The web layer needs high CPU and mid-to-high memory, but not ultra-low-latency storage. That's where managed Kubernetes wins.

Trying to run your high-transaction database on Kubernetes persistent volumes adds 5–10ms latency per query. Running your web frontends on bare metal wastes money and adds operational overhead for no performance gain.

Hybrid gives you both: raw performance where it matters, operational simplicity where it doesn't.
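The 5–10ms figure is worth working through. For a single synchronous connection, throughput is bounded by 1/latency, so a few milliseconds of storage overhead per query forces a large jump in concurrency to hold the same TPS. The 0.2ms baseline query time below is an illustrative assumption.

```python
def max_qps_per_connection(query_latency_ms: float) -> float:
    """Upper bound on synchronous queries/s for one connection."""
    return 1000.0 / query_latency_ms

baseline = max_qps_per_connection(0.2)        # local NVMe-backed query
with_pv = max_qps_per_connection(0.2 + 5.0)   # + 5 ms networked-PV overhead

print(f"local NVMe:     {baseline:.0f} queries/s per connection")
print(f"+5 ms overhead: {with_pv:.0f} queries/s per connection")
print(f"connections to sustain 50,000 TPS: "
      f"{50_000 / baseline:.0f} -> {50_000 / with_pv:.0f}")
```

Same database, same hardware budget — but roughly 26× the concurrency (and the connection-pool and locking pressure that comes with it) just to stand still.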

The Key Requirement

Hybrid only works if everything talks to everything else over private infrastructure. Your bare metal database needs a fast, secure path to your Kubernetes pods. Public internet round-trips kill the latency advantage.

What We Offer

We build hybrid stacks. Not just "here's a server, good luck"—actual integrated infrastructure.

Bare metal foundation:

  • Dedicated firewalls — OPNsense or Cisco ASAv on hardware you control
  • Dedicated load balancers — HAProxy on bare metal for predictable performance
  • Database servers — NVMe storage for PostgreSQL, MySQL, ClickHouse, or your OLAP system
  • Storage Servers — 64–576TB, Supermicro hardware, SATA or NVMe
  • High-Memory Servers — up to 4TB DDR4 ECC
  • GPU Servers — dedicated NVIDIA A2000/A6000

Managed application layer:

  • Kubernetes clusters — your applications and web frontends on managed clusters, with node failures handled for you

The glue:

  • Private VLANs connecting all components
  • Direct interconnects between bare metal and managed clusters

All on AS215197. Your bare metal database talks to your Kubernetes pods over private infrastructure, not public internet. That's how you get the best of both worlds.

Need Help Deciding?

Describe your workload. We'll tell you which approach fits—and quote both if it's close.