
COMSOL 6.3 GPU support explained: what works, what doesn’t

COMSOL 6.3 added real GPU acceleration, but only for specific paths. If you know the limits, it’s useful. If you don’t, you’ll chase errors. Here’s a short, honest guide.

Where GPUs actually help in 6.3

  • Time‑dependent simulations using the discontinuous Galerkin (dG) method
    ✔ Pressure Acoustics, Time Explicit is supported and can be much faster on a compatible NVIDIA GPU.
  • Surrogate model training (COMSOL’s DNN component)
    ✔ GPU‑accelerated training is supported when the CUDA DNN component is installed.

Not covered: most other physics interfaces and time‑implicit solvers. Elastic waves, general FEM with continuous elements, and many multiphysics combinations are not GPU‑accelerated in 6.3.

What you need

  • A CUDA‑ready template (e.g., Ubuntu 24.04 LTS / CUDA 12.6). The container image provides the CUDA user‑space libraries; the host supplies the NVIDIA driver.
  • A valid COMSOL license. Set LMCOMSOL_LICENSE_FILE=<port>@<server> in the template’s Environment → Variables.
  • The CUDA Toolkit path known to COMSOL. You set this during install or later in Preferences → Computing → GPU Acceleration.

Tip: The pre‑made Ubuntu/PyTorch templates already include a recent CUDA runtime. You still point COMSOL to a CUDA toolkit path it recognizes.
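
Quick pre‑flight check inside the container (a minimal sketch for a bash shell; the license server address is a placeholder):

nvidia-smi                                                # host driver visible? prints driver version and GPU model
nvcc --version                                            # CUDA toolkit present? (may be absent if only the runtime is installed)
export LMCOMSOL_LICENSE_FILE=27000@licenses.my-org.edu    # replace with your <port>@<server>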

Install or mount COMSOL cleanly

We don’t redistribute COMSOL. Bring it yourself.

  • Install inside the template: mount the installer and run it in the container. During install, include the CUDA dG support and (if you need it) the CUDA DNN component for surrogate training.
  • Mount from a shared volume: if your org keeps COMSOL on a network share, mount it read‑only and point the container to it.

Keep license files and installers out of public images. Mount them at runtime.
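
For the shared‑volume route, a minimal sketch assuming an NFS share; the server name, export path, and install directory are placeholders, so adjust them to your environment.

sudo mkdir -p /opt/comsol63
sudo mount -t nfs -o ro fileserver.my-org.edu:/software/comsol63 /opt/comsol63   # read-only mount of the shared install (hypothetical share)
export LMCOMSOL_LICENSE_FILE=27000@licenses.my-org.edu                           # resolved at runtime, never baked into the image
/opt/comsol63/bin/comsol -h                                                      # sanity check: should list launch options if the path is right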

Enable GPU acceleration in your model (dG time‑explicit)

  1. In the Model Builder, make sure the physics uses a dG‑based time‑explicit interface (for example, Pressure Acoustics, Time Explicit).
  2. Under Study → Solver Configurations → Time‑Dependent, add Hardware Acceleration.
  3. In Preferences → Computing → GPU Acceleration, confirm the CUDA Toolkit path and that your NVIDIA GPU is detected.
  4. Save the model.

Run it

  • GUI: watch the log for messages that GPU kernels are active.
  • Batch: from a shell inside the container, run:
    comsol batch -inputfile model.mph -study std1 -outputfile out.mph
    GPU settings are saved in the model; use the same study name you configured. A variant that captures the log for later checks is sketched below.
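
If you want to keep the solver log for later inspection, a minimal sketch: -batchlog writes the log to a file. The grep pattern is an assumption; match it against the exact wording your version prints.

comsol batch -inputfile model.mph -study std1 -outputfile out.mph -batchlog run.log
grep -i "gpu" run.log   # look for the line confirming GPU acceleration is active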

Verify

  • The log should explicitly state that GPU acceleration is enabled for the time‑dependent solver.
  • nvidia-smi shows utilization and VRAM use during the run.
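
To see utilization and VRAM while the study runs, poll nvidia-smi from a second shell inside the container (the 5‑second interval is arbitrary):

nvidia-smi --query-gpu=name,utilization.gpu,memory.used,memory.total --format=csv -l 5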

Self‑check: is my model eligible?

GPU acceleration in 6.3 will refuse to run unless all DOFs in the time‑dependent solver are dG and the interface supports the GPU path. If you see “GPU calculations disabled” or a similar warning:

  • Confirm you’re using Pressure Acoustics, Time Explicit (or another documented dG time‑explicit interface) — not a time‑implicit or continuous FEM interface.
  • Check that any added physics or couplings don’t introduce non‑dG DOFs into the same time‑dependent solver.
  • Verify the CUDA path in Preferences and that the GPU is visible.

VRAM, precision, and practical limits

  • Precision: the GPU path runs in single precision. If your study demands strict FP64 accuracy, stay on CPU, or validate that the reduced precision is acceptable on a short time window first.
  • VRAM: monitor nvidia-smi. If you hit OOM, coarsen the mesh (within validation), trim outputs, or use a larger‑VRAM GPU.
  • I/O: time‑domain acoustics can write a lot. Reduce output frequency and write compressed results when possible.

Quick validation + self‑benchmark

  • Pick a representative model with the GPU‑eligible interface.
  • Run a short CPU baseline and the GPU run with identical settings.
  • Compare waveforms/field values at key probes and residual behavior.
  • Record wall time and compute cost per converged study:

cost_per_study = price_per_hour × wall_hours
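
A quick worked example with made‑up numbers (a 1.90 $/hr instance and a 2.5‑hour run):

price_per_hour=1.90   # hypothetical instance price in $/hr
wall_hours=2.5        # hypothetical measured wall time
awk "BEGIN {print $price_per_hour * $wall_hours}"   # prints 4.75 → cost per converged study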

Keep a Methods note with COMSOL version, study name, physics, CUDA path, GPU model/VRAM, and whether GPU acceleration was enabled.

Troubleshooting

“Hardware Acceleration node not available”
You’re not under a Time‑Dependent solver or your interface isn’t supported. Switch to a dG time‑explicit interface.

“GPU calculations disabled. Not all DOFs are dG.”
One or more physics interfaces add continuous (non‑dG) DOFs to the same time‑dependent solver. Remove or separate them, or run that study on CPU.

“CUDA toolkit not found / GPU not detected.”
Set the toolkit path in Preferences. Confirm nvidia-smi inside the container and that your template is CUDA‑ready.

License errors
Set LMCOMSOL_LICENSE_FILE=<port>@<server> or use a tunnel (@localhost on the forwarded port). See our licensing guide.
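
If the license server is only reachable through a gateway, an SSH tunnel is a common workaround. A sketch with placeholder hostnames and ports; FlexNet‑style license managers usually need the vendor‑daemon port forwarded as well, so confirm the ports with your license admin.

ssh -N -L 27000:licenses.my-org.edu:27000 -L 1719:licenses.my-org.edu:1719 user@gateway.my-org.edu   # hosts and ports are examples
export LMCOMSOL_LICENSE_FILE=27000@localhost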

Methods snippet (copy‑paste)

hardware:
  gpu: "<model> (<VRAM> GB)"
  driver: "<NVIDIA driver>"
  cuda_toolkit: "<path or version>"
software:
  comsol: "6.3 (GPU acceleration enabled)"
  image: "Ubuntu 24.04 LTS (CUDA 12.6)"
licenses:
  LMCOMSOL_LICENSE_FILE: "27000@licenses.my-org.edu"
model:
  physics: "Pressure Acoustics, Time Explicit (dG)"
  study: "std1 (Time‑Dependent)"
run:
  mode: "GUI | batch"
  batch_cmd: "comsol batch -inputfile model.mph -study std1 -outputfile out.mph"
outputs:
  wall_hours: "<hh:mm>"
  probe_checks: "<metrics>"
  notes: "All DOFs dG; GPU path active"

Try Compute today

Start a GPU instance with a CUDA-ready template (e.g., Ubuntu 24.04 LTS / CUDA 12.6) or your own COMSOL image. Enjoy flexible per-second billing with custom templates and the ability to start, stop, and resume your sessions at any time. Unsure about FP64 requirements? Contact support to help you select the ideal hardware profile for your computational needs.
