
4090 at €0.20/hr and 5090 at €0.40/hr. No bidding, no gimmicks.

We cut our on-demand 4090/5090 prices below the usual market without queues, bids, or surprises.

GPU pricing is a mess. Marketplace bids move by the hour. “Batch” tiers are cheap until they preempt your job. Some providers list a nice GPU price, then add CPU/RAM on top. You shouldn’t need a spreadsheet to run a model.

Today we’re posting simple numbers you can plan around: €0.20/hr for RTX 4090 and €0.40/hr for RTX 5090. That’s on-demand. Fixed. No tricks.

What we’re comparing against

When we say “high-quality on-demand GPU compute,” we mean:

  • On-demand or persistent usage, not batch/interruptible by default.
  • Full, dedicated VRAM (4090: 24 GB; 5090: 32 GB).
  • Public, book-now pricing with no bidding games.
  • Transparent billing.
  • A provider you can reach when things go sideways.

The receipts (collected Oct 27, 2025)

All competitor prices below are public pages you can check. I standardize to USD for apples-to-apples; €1 = $1.1646 today, so €0.20 ≈ $0.233 and €0.40 ≈ $0.466.
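The conversion above is simple enough to sanity-check yourself. A two-line sketch, assuming the EUR/USD spot rate of 1.1646 quoted in this post:

```python
# Convert Hivenet's fixed EUR rates to USD at the rate quoted above (1.1646).
EUR_USD = 1.1646

for gpu, eur_per_hr in {"RTX 4090": 0.20, "RTX 5090": 0.40}.items():
    usd_per_hr = eur_per_hr * EUR_USD
    print(f"{gpu}: €{eur_per_hr:.2f}/hr ≈ ${usd_per_hr:.3f}/hr")
# → RTX 4090: €0.20/hr ≈ $0.233/hr
# → RTX 5090: €0.40/hr ≈ $0.466/hr
```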

| Provider | GPU | Public rate (USD/hr) | Notes |
|---|---|---|---|
| Hivenet (Compute) | RTX 4090 (24 GB) | $0.233 | €0.20/hr, fixed on-demand |
| Hivenet (Compute) | RTX 5090 (32 GB) | $0.466 | €0.40/hr, fixed on-demand |
| Runpod | RTX 4090 | $0.59 | Model page “on-demand.” Docs say no ingress/egress fees. |
| Runpod | RTX 5090 | $0.89 | Model page “on-demand.” |
| Vast.ai (marketplace) | RTX 4090 | ~$0.29 (P25 “typical”) | Live range $0.10–$2.13/hr; prices vary by listing. |
| Vast.ai (marketplace) | RTX 5090 | ~$0.37 (P25 “typical”) | Live range $0.16–$2.53/hr; prices vary by listing. |
| TensorDock | RTX 4090 | from $0.37 (on-demand) | Spot ≈ $0.20/hr when available. |
| TensorDock | RTX 5090 | from ~$0.55 | Shown on dashboard; availability varies. |
| SaladCloud | RTX 4090 | $0.16–$0.204 | Batch priority; CPU/RAM billed separately. |
| SaladCloud | RTX 5090 | $0.25–$0.294 | Batch priority; component billing. |

Read this table like a buyer, not a marketer. If you tolerate batch or preemption, you’ll sometimes find a lower sticker on Salad or in the very lowest Vast bids. If you want predictable on-demand 4090/5090 at a rate finance can live with, our numbers are hard to beat.

What we can honestly claim

  • We undercut Runpod’s public on-demand pages for both 4090 and 5090 as of Oct 27, 2025.
  • We price below Vast’s “typical” (P25) rate for the 4090 and land in the low band of 5090 listings without bidding.
  • We don’t beat Salad’s batch-priority stickers on raw $/hr. Those are a different class of service and often bill CPU/RAM separately.

Why these prices are sustainable

Hivenet isn’t a traditional data center. We run a distributed cloud on real, underused devices. That changes the cost base and utilization story. It lets us post fixed, boring prices that don’t collapse the moment demand spikes. We’d rather be predictably cheap than occasionally the cheapest.

Value you can explain to a CFO

  • €/GB-VRAM-hour matters for inference.
    • 4090 at €0.20 → €0.0083 per GB-hr (24 GB).
    • 5090 at €0.40 → €0.0125 per GB-hr (32 GB).
      Simple math, easy budgeting.
  • Fewer surprises. Fixed price, on-demand, EU-centric posture. You won’t wake up to a preempted job or a mystery bid.
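The per-GB-VRAM figures above come from straightforward division, and the same numbers give you a monthly budget ceiling. A minimal sketch of the math (the 24/7, 30-day month assumption is ours, for illustration):

```python
# €/GB-VRAM-hour and a 24/7 monthly ceiling for each card, from the posted rates.
gpus = {
    "RTX 4090": {"eur_per_hr": 0.20, "vram_gb": 24},
    "RTX 5090": {"eur_per_hr": 0.40, "vram_gb": 32},
}

for name, g in gpus.items():
    per_gb_hr = g["eur_per_hr"] / g["vram_gb"]   # price per GB of VRAM per hour
    monthly = g["eur_per_hr"] * 24 * 30          # assumes 24/7 use, 30-day month
    print(f"{name}: €{per_gb_hr:.4f}/GB-hr, €{monthly:.0f}/month at 24/7")
# → RTX 4090: €0.0083/GB-hr, €144/month at 24/7
# → RTX 5090: €0.0125/GB-hr, €288/month at 24/7
```

Fixed rates make this a one-cell spreadsheet: hours × rate, with no bid curve to model.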

Frequently asked pushbacks

“Vast shows 4090 listings from $0.10 and 5090 listings from $0.16. Why not match that?”

Those are marketplace listings. You can absolutely scoop great deals. You can also hit churn, clock caps, and time lost to hunting. Our offer is for teams that want to book and run.

“Salad lists $0.16 for 4090 and $0.25 for 5090.”

Salad is batch-first and uses component billing (GPU + CPU + RAM). If that model fits your workload and you’re happy with batch queues, it’s a strong option. We’re pricing for predictable, on-demand runs.

Bottom line

If you just want the absolute lowest sticker, you’ll always find one somewhere.
If you want on-demand 4090s and 5090s that don’t vanish or spike in cost, this is probably the most affordable way to do it today.
That’s the balance we’re here for: reliable performance at a fair, steady price—built on a cloud that doesn’t burn extra watts or sell your data to make the math work.
