vCPU compute that fits the rest of your workflow

Run CPU workloads where they belong: next to your GPUs, inside the same Compute account, with pricing you can predict.

Use vCPUs for data prep, inference, testing, and internal services without burning GPU credits or juggling providers.

Create an account

CPU work is everywhere.
It shouldn’t be a hassle.

Most AI and data workloads spend more time on CPUs than people expect.

· Cleaning datasets. Transforming files. Running inference. Testing pipelines. Powering internal tools.

· The problem isn’t CPU compute. The problem is how fragmented it becomes.

· Different providers. Data moving around. Bills that spike for reasons no one planned.

Compute vCPUs exist to remove that friction.

One place for your full pipeline

1. With Compute, CPU and GPU workloads live side by side.

2. Prepare your data on vCPUs. Train models on GPUs. Run inference on CPU when GPU power isn’t needed.

3. Same account. Same credits. Same regions.

4. No internal data hopping. No surprise transfer costs between stages. Just a workflow that stays intact from start to finish.

Common ways teams use vCPUs

Data preprocessing and ETL

Clean, transform, and prepare datasets before GPU training without wasting GPU time.

Inference and lightweight serving

Run NLP models, embeddings, and low‑latency services on CPU where it makes sense.

Development and testing

Build and test pipelines freely without watching GPU credits disappear.

Internal tools with data residency needs

Run dashboards, analytics, and business workloads close to where your data lives.

If the task doesn’t need a GPU, a vCPU is often the better choice.

Clear pricing, down to the hour

Compute vCPUs use simple hourly pricing.
No commitments. No reservations. No hidden layers.

Welcome bonus: up to €250 on first purchase

vCPUs | RAM   | Disk Space | Bandwidth | Price
2 ×   | 4 GB  | 50 GB      | 250 Mb/s  | €0.035/h
4 ×   | 8 GB  | 100 GB     | 250 Mb/s  | €0.07/h
8 ×   | 16 GB | 200 GB     | 500 Mb/s  | €0.14/h
16 ×  | 32 GB | 400 GB     | 1000 Mb/s | €0.28/h
32 ×  | 64 GB | 800 GB     | 1000 Mb/s | €0.56/h

Each instance includes dedicated vCPUs, balanced RAM, NVMe SSD storage, and defined bandwidth.

You pay for what runs. You stop paying when it stops.
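With flat hourly rates, projecting a bill is plain arithmetic. A minimal sketch, using the per-hour rates from the table above (currency assumed to be EUR, matching the welcome bonus):

```python
# Hourly rates per instance size, copied from the pricing table above (EUR/hour).
RATES = {2: 0.035, 4: 0.07, 8: 0.14, 16: 0.28, 32: 0.56}

def cost(vcpus: int, hours: float) -> float:
    """Cost of running one instance of the given vCPU size for `hours` hours."""
    return round(RATES[vcpus] * hours, 2)

# An 8-vCPU instance running a 40-hour workweek:
print(cost(8, 40))   # → 5.6

# A 2-vCPU dev box left on for a full 30-day month (720 hours):
print(cost(2, 720))  # → 25.2
```

Because billing stops when the instance stops, the same function also answers the practical question: a test run measured in hours costs cents, not a monthly line item.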

Run where your data needs to stay

Choose where your vCPUs run: EU, USA, or UAE.

Your CPU workloads stay in the region you select, alongside your other Compute resources. That matters for teams handling sensitive data, regulated workloads, or regional customers.

Nothing abstract. Just control.

Built for teams that need flexibility

Compute vCPUs work well for startups and SMBs because they don’t lock you in.

No long‑term contracts. No capacity planning games. No separate CPU billing system to reconcile.

You scale up when you need to. You scale down when you don’t.

FAQ

Questions people usually ask

Do I need a GPU to use vCPUs?

No. vCPUs can be used on their own for CPU-only workloads such as data processing, inference, and testing.

When does it make sense to choose CPU over GPU?

If your workload doesn’t benefit from GPU acceleration, using a vCPU is usually simpler and more cost-effective.

How does vCPU pricing work?

vCPUs are billed hourly with no commitments. You pay only for the time your instance runs.

Where does my data run?

You choose the region. vCPUs are available in the EU, USA, and UAE, and data stays in the selected region.

Start running CPU workloads the simple way

If you already use Compute, vCPUs fit right in. If you’re new, getting started takes minutes.

Create an account, then an instance, and run your workload.