GPUs are the headline, but a lot of real work happens before and after the GPU step. Downloads, preprocessing, packaging, orchestration, API glue, and “make it run reliably” are often CPU jobs. Paying GPU prices while doing CPU work is one of the easiest ways to burn credits for no benefit.
On Compute with Hivenet, you can launch a virtual machine (VM) without a GPU. You get the same “real Linux server” shape, just powered by vCPUs instead of GPUs.
If you’re still deciding whether you want a VM at all, start here: VM or container: how to choose in 60 seconds. If your question is “Do I need a GPU VM?”, this one helps: GPU virtual machine: what it is and who actually needs one.
What a vCPU virtual machine is, in plain English
A vCPU VM is a full Linux machine without attached GPUs. You pick an OS, connect over SSH, install packages with sudo, run background services, and keep a system environment that behaves like a normal server.
That “full OS control” part is the reason to use a VM, even for CPU-only workloads. If you don’t need OS control, a container instance is often the simpler choice.
When a vCPU VM is the right tool
Use a vCPU VM when you need a server-shaped environment, but your workload doesn’t benefit from GPU acceleration.
Common examples:
Data prep and ETL that feeds GPU work later.
Downloading datasets, converting formats, extracting archives, cleaning text, resizing images, chunking documents, and building training/inference inputs are often CPU-bound.
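The chunking step above is easy to sketch in plain shell. Here is a minimal, hypothetical example; `corpus.txt` is a stand-in name, and the `seq` line only generates throwaway data so the example runs end to end:

```shell
# Sketch: a typical CPU-bound prep step — chunk a text corpus into
# fixed-size pieces before it ever touches a GPU.
seq 1 2500 > corpus.txt                     # stand-in data for the demo
mkdir -p chunks
split -l 1000 -d --additional-suffix=.txt corpus.txt chunks/part_
ls chunks/                                  # part_00.txt part_01.txt part_02.txt
```

Nothing here needs a GPU, and a small vCPU VM will happily churn through it while your GPU budget stays untouched.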
Running “support services” for your AI stack.
This includes lightweight APIs, job queues, schedulers, reverse proxies, and internal tooling that coordinates GPU jobs. If the GPU is idle while the service runs, keep the service on vCPU.
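Because a VM gives you full OS control, these coordinators can run as ordinary system services. A sketch of a systemd unit for such a service; the service name, user, and paths are all hypothetical:

```ini
# /etc/systemd/system/job-scheduler.service — hypothetical unit for a
# lightweight coordinator that queues GPU jobs but runs fine on vCPU.
[Unit]
Description=Job scheduler for GPU pipeline
After=network-online.target

[Service]
User=app
WorkingDirectory=/opt/scheduler
ExecStart=/opt/scheduler/venv/bin/python run.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, `sudo systemctl enable --now job-scheduler` starts it and keeps it running across reboots.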
CPU inference for small or non-latency-critical workloads.
Some models and tasks run fine on CPU, especially when throughput and latency aren’t strict. If performance is acceptable on vCPU, don’t buy a GPU out of habit. If it’s not acceptable, switch. Simple.
Builds, packaging, and CI-like tasks.
Compiling dependencies, building wheels, packaging artifacts, building container images, running tests, or preparing deployable bundles are usually CPU work. If Docker is part of the workflow, a VM is often easier: Run Docker the normal way on a Compute VM.
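A test run is a representative example of this kind of CPU work. The sketch below writes a tiny stand-in suite and runs it with Python’s built-in unittest runner; your real project’s suite replaces the generated file:

```shell
# Sketch: a CI-like step (running tests) that is pure CPU work.
# test_smoke.py is a generated stand-in for a real test suite.
cat > test_smoke.py <<'EOF'
import unittest

class Smoke(unittest.TestCase):
    def test_arithmetic(self):
        self.assertEqual(2 + 2, 4)
EOF
python3 -m unittest test_smoke -v
```

Compiles, wheel builds, and image builds follow the same shape: long-running, CPU-bound, and wasted money if a GPU sits idle next to them.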
A long-lived “workbench” machine.
If you want a consistent environment you can keep coming back to (toolchains, scripts, services), but you don’t need a GPU attached all the time, vCPU VMs are a practical baseline.
If any of this sounds like “I want a Linux server that I control,” a vCPU VM is a good fit.
When you should use a container instead
Use a container instance (even on vCPU) when your goal is “run this workload” and you don’t want to manage the OS.
Containers are usually the right choice for:
- Short-lived scripts and repeatable runs.
- A single service that doesn’t need system services or deep OS customization.
- Workloads that map cleanly to a template or a containerized setup.
If your container keeps blocking you, that’s the signal to move up to a VM: When it’s worth switching from a container instance to a VM.
When you should still pay for a GPU
Use a GPU when the workload actually benefits from the acceleration.
That typically includes:
- Training and fine-tuning.
- Large-model inference where CPU performance isn’t acceptable.
- Latency-sensitive inference services where response time matters.
If you’re unsure whether the GPU is worth it, this is the simple approach: run one small test on vCPU first. If performance is clearly not good enough, move to GPU with confidence.
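That first test doesn’t need tooling; `time` on your workload is enough to get a signal. In the sketch below, the gzip run is only a placeholder for the command you actually care about (an inference script, a preprocessing run, and so on):

```shell
# Sketch: time a CPU-bound stand-in task on the vCPU VM.
# Substitute the command you actually need to benchmark.
head -c 20000000 /dev/urandom > sample.bin   # ~20 MB of throwaway data
time gzip -kf sample.bin                     # -k keeps the input, -f overwrites
ls -lh sample.bin.gz
```

If the wall time is acceptable for your real workload, stay on vCPU; if it clearly isn’t, you now have a number to justify the GPU.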
A cost-friendly pattern that works well
Split your pipeline.
Put the CPU-heavy steps on vCPU (prep, orchestration, downloads, packaging). Spin up GPUs only for the GPU-heavy steps (training, heavy inference); this split fits the AI workloads most SMBs run. Shut the GPU instance down as soon as it’s done.
This is the same logic as our pricing article, just applied in a practical way: Cloud GPU VM pricing: what you’re really paying for.
If you need Docker for your CPU stack, don’t bury instructions in a blog post. Use the docs tutorial: Install Docker on a Compute VM. For providers thinking about the supply side of this model, our interview with Hivenet's first certified GPU supplier shows how GPU capacity can shift from mining to AI.
Access and networking still matter
CPU vs GPU doesn’t change how you reach the machine or expose services.
If you’re testing a UI privately, SSH port forwarding is often the cleanest route. If you need a public link, use HTTPS. If you need direct client connections, use TCP or UDP.
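For the private-UI case, a forwarding rule in your SSH config is the least fiddly setup. This is a hypothetical `~/.ssh/config` entry; the host name, user, and ports are placeholders:

```ini
# ~/.ssh/config — forward local port 8080 to the VM on every connection,
# so a UI bound to localhost on the server is reachable locally.
Host compute-vm
    HostName vm.example.com
    User ubuntu
    LocalForward 8080 localhost:8080
```

With this in place, `ssh compute-vm` opens the tunnel and the UI appears at http://localhost:8080 on your laptop, with nothing exposed publicly.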
Don’t get surprised by persistence
Stop/start is useful, but don’t treat a stopped instance as long-term storage.
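Before stopping an instance, snapshot anything you can’t recreate and copy it off the machine. A minimal sketch; the directory contents are generated only so the example runs, and the destination host is a placeholder:

```shell
# Archive working state before stopping the instance.
mkdir -p project_state && echo "checkpoint" > project_state/run.log
tar czf state-backup.tar.gz project_state/
# Copy it somewhere durable, e.g. (placeholder host):
#   scp state-backup.tar.gz you@backup.example.com:/backups/
ls -lh state-backup.tar.gz
```

The habit matters more than the tooling: anything that only lives on the instance’s disk should be treated as disposable.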
Which OS should you pick?
If you don’t have a preference, Ubuntu is usually the least surprising option. If you care more about stability or newer tooling, Debian and Fedora can make sense.
Try Compute
If you’re paying for GPUs while doing CPU work, a vCPU VM is the easiest fix. Launch a small vCPU VM, run the CPU steps there, then bring up GPUs only when you actually need them.
