GROMACS on cloud GPUs (RTX 4090): quickstart and a self‑benchmark kit

Run a reproducible GROMACS benchmark on an RTX 4090 using Compute’s GPU-optimized image. This guide shows how to verify GPU access, install GROMACS correctly, and run a basic GPU-offloaded benchmark without guesswork.

This article is not a troubleshooting playbook and not a promise of one-click setup. It is a practical, reproducible path that matches how Compute instances actually work today.

Before you start (read this)

This guide assumes:

  • You are using Compute’s GPU-optimized image
  • You have user-level access only (no full VM, no permanent system changes)
  • You are comfortable running commands in a Linux shell

Compute instances are containerized environments. You should not assume that “apt-get anything I want” or “build once and forget” will hold across restarts. If you need a fixed environment long-term, create a custom template.

Step 1: Launch an RTX 4090 instance

  1. Go to Compute → Instances
  2. Click Launch new instance
  3. Choose your GPU size (RTX 4090)
  4. Add your SSH key
  5. When you reach Pick a template, select:

GPU-optimized image

  • Ubuntu 24.04
  • CUDA 12.6
  • JupyterLab 4.4.2
  • PyTorch 2.8.0
  • Vulkan SDK

Do not assume GROMACS is preinstalled. It is not.

Launch the instance.

Step 2: Connect and verify GPU access

SSH into the instance using the command shown in the UI.

First check that the GPU is visible:

nvidia-smi

You should see the RTX 4090 listed.

If nvidia-smi fails or shows no GPU, stop here. Terminate the instance and retry. If it happens again, this is a platform issue and should go to Support.
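The check-then-bail logic above can be scripted if you are driving setup non-interactively. A minimal sketch — the function name and messages are ours, not Compute's:

```shell
# Fail fast: confirm the GPU driver is present and responding before doing
# any GROMACS work. Both branches print so the result is always visible.
check_gpu() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "GPU visible"
  else
    echo "GPU NOT visible: terminate the instance and retry"
  fi
}
check_gpu
```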

Step 3: Decide how you will run GROMACS

You have two supported paths. Pick one and stick to it.

Option A (recommended): Use the official GROMACS container

This avoids CUDA, compiler, and build mismatches.

Check that Docker or a compatible container runtime is available:

docker --version

Then run:

docker run --rm --gpus all gromacs/gromacs:2024.1 gmx --version

If this works and shows a GPU-enabled build, you are ready to run jobs using the container.

This is the safest option on Compute today.
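Once the version check succeeds, real jobs run the same way. A sketch — the `/work` mount point is an arbitrary choice of ours, and the `2024.1` tag should be whichever tag you verified above. It is guarded so it degrades gracefully on a machine without Docker or a GPU:

```shell
# Sketch: run mdrun through the container, bind-mounting the working directory
# so the container reads system.tpr and writes bench.* back to the host.
run_cmd='docker run --rm --gpus all -v "$PWD":/work -w /work gromacs/gromacs:2024.1 gmx mdrun -s system.tpr -deffnm bench -nb gpu -pme gpu'
echo "$run_cmd"
# Only execute where both a container runtime and a visible GPU driver exist:
if command -v docker >/dev/null 2>&1 && command -v nvidia-smi >/dev/null 2>&1; then
  eval "$run_cmd"
fi
```

Because the container only sees what you mount, keep all inputs and outputs under the mounted directory.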

Option B: Build GROMACS inside the instance

Only do this if you know you need a custom build.

At a high level, this means:

  • Installing build dependencies
  • Configuring GROMACS with CUDA support
  • Compiling with CMake

Follow the official GROMACS installation guide and make sure CUDA support is enabled. Do not mix instructions from blogs or older guides.

Be aware: changes made this way are not guaranteed to persist across instance lifecycles unless you convert the result into a custom template.
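In outline, a CUDA-enabled source build looks roughly like this. The version number, download URL pattern, and install prefix are assumptions — defer to the official install guide for your release. The heavy steps sit behind an explicit opt-in flag so the sketch is safe to paste:

```shell
# Sketch of a from-source CUDA build. GMX_VER and PREFIX are placeholders;
# -DGMX_GPU=CUDA is the CMake switch that enables GPU offload.
GMX_VER=2024.1
PREFIX="$HOME/gromacs-${GMX_VER}"
echo "Configured to build GROMACS ${GMX_VER} into ${PREFIX}"
if [ "${DO_BUILD:-0}" = "1" ]; then          # set DO_BUILD=1 to actually build
  wget "https://ftp.gromacs.org/gromacs/gromacs-${GMX_VER}.tar.gz"
  tar xzf "gromacs-${GMX_VER}.tar.gz"
  cd "gromacs-${GMX_VER}" && mkdir -p build && cd build
  cmake .. -DGMX_GPU=CUDA -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX="$PREFIX"
  make -j"$(nproc)" && make install
  . "${PREFIX}/bin/GMXRC"                    # puts gmx on PATH for this shell
fi
```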

Step 4: Prepare your test system

Create a working directory:

mkdir -p ~/gromacs
cd ~/gromacs

You need a .tpr file to run mdrun.
It is the portable binary run input that gmx grompp produces.

If you already have one, copy it here.

If not, generate it from existing inputs:

gmx grompp -f md.mdp -c conf.gro -p topol.top -o system.tpr
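If you have no .mdp at hand, a minimal benchmark-style parameter file looks like the sketch below. Every value is illustrative — tune nsteps, dt, and the coupling settings for your own system:

```shell
# Write an illustrative md.mdp; all values are placeholders to tune.
cat > md.mdp <<'EOF'
integrator    = md
nsteps        = 10000      ; short run, long enough to pass warm-up
dt            = 0.002      ; 2 fs timestep
cutoff-scheme = Verlet     ; required for GPU offload
coulombtype   = PME
tcoupl        = V-rescale
tc-grps       = System
tau-t         = 0.1
ref-t         = 300
EOF
```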

Step 5: Run a GPU-offloaded benchmark

Run GROMACS with explicit GPU offload flags (if you chose Option A, run the same command through the container as shown in Step 3):

gmx mdrun -s system.tpr -deffnm bench \
 -nb gpu -pme gpu -update gpu -pin on

While it runs, confirm GPU activity in another shell:

nvidia-smi

You should see non-zero utilization.

When the run finishes, note the reported performance (ns/day).
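mdrun reports that figure on a Performance: line near the end of its log. It can be pulled out with awk; the sample log below is a stand-in so the snippet runs on its own — point the awk at your real bench.log, and note the number is a fabricated placeholder, not a measured 4090 result:

```shell
# Extract ns/day from an mdrun log. GROMACS writes a line of the form
#   Performance:     <ns/day>     <hour/ns>
# sample.log stands in for a real bench.log here.
cat > sample.log <<'EOF'
Performance:       42.000        0.571
EOF
nsday=$(awk '/^Performance:/ {print $2}' sample.log)
echo "ns/day: $nsday"
```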

What success looks like

  • gmx --version reports GPU support
  • nvidia-smi shows activity during the run
  • Performance stabilizes after warm-up
  • No CUDA or runtime errors appear in the log

If those conditions are not met, this is not a valid benchmark.

Common failure modes (and what they mean)

“GROMACS not found”
The GPU-optimized image does not preinstall GROMACS. Use the container (Option A) or install it explicitly (Option B).

CUDA or GPU errors at runtime
You are mixing incompatible CUDA versions, or the container does not have GPU access. Verify with nvidia-smi and gmx --version.

Inconsistent performance between runs
You are changing instance size, CPU allocation, or container versions. Benchmarks are only meaningful if the environment is stable.

About performance numbers

Performance depends on:

  • GROMACS version
  • CUDA version
  • CPU threads used
  • System size and PME settings

Numbers from this article are illustrative only. Always benchmark your own workload.

When not to use this guide

  • You are looking for Support diagnostics
  • You need multi-node MPI scaling
  • You want a turnkey, persistent GROMACS environment

In those cases, this article will frustrate you. Create a custom template or contact Support instead.

Try Compute today

Start a GPU instance with a CUDA-ready template (e.g., Ubuntu 24.04 LTS / CUDA 12.6) or your own GROMACS image. Enjoy flexible per-second billing with custom templates and the ability to start, stop, and resume your sessions at any time. Unsure about FP64 requirements? Contact support to help you select the ideal hardware profile for your computational needs.
