GROMACS on Sol

GROMACS is available on Sol as a GPU-accelerated, MPI-parallel molecular dynamics package built for NVIDIA A100 GPUs and multi-node production runs.

Available Versions

Use module spider gromacs to see all available GROMACS modules. Builds with CHARMM force field support carry the -charm suffix.

module spider gromacs
Module                         GROMACS   Force Fields
gromacs/2025.4-gpu-mpi-charm   2025.4    CHARMM36 (multiple versions)

For CHARMM36-specific guidance including force field selection and MDP settings, see the GROMACS with CHARMM36 page.

Loading the Module

module load gromacs/<version>

For example:

module load gromacs/2025.4-gpu-mpi-charm

This automatically loads dependencies (GCC 12.1, CUDA 12.6, OpenMPI 4.1.5, FFTW 3.3.10) and sources the GROMACS environment (GMXRC), setting GMXDATA, GMXBIN, and related variables.
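
To confirm the environment loaded correctly, a quick sanity check (output details vary by build):

# Print the GROMACS version and build configuration
gmx --version

# Should point at the GROMACS shared data directory
echo $GMXDATA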

Available Binaries

Binary    Purpose
gmx       Serial pre- and post-processing: pdb2gmx, editconf, solvate, genion, grompp, analysis tools
gmx_mpi   MPI-parallel mdrun for production simulations

Do not use gmx_mpi for setup or analysis steps: it is an mdrun-only build and lacks those subcommands. Use gmx for everything except running the simulation.

Module Aliases

Alias        What it does
listcharmm   Lists the CHARMM force fields installed in $GMXDATA/top/
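
The alias is a convenience wrapper. An equivalent direct command, assuming it simply scans the force-field directories (a sketch, not the alias's literal definition):

ls -d $GMXDATA/top/charmm*.ff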

Typical MD Workflow

1. Prepare the Topology

gmx pdb2gmx \
    -f protein.pdb \
    -o protein_processed.gro \
    -water tip3p

Omit -ff to choose a force field interactively, or pass it explicitly:

gmx pdb2gmx -f protein.pdb -o protein_processed.gro -water tip3p -ff amber99sb-ildn

2. Define the Simulation Box

gmx editconf \
    -f protein_processed.gro \
    -o protein_box.gro \
    -c \
    -d 1.2 \
    -bt dodecahedron

3. Solvate

gmx solvate \
    -cp protein_box.gro \
    -cs spc216.gro \
    -o protein_solv.gro \
    -p topol.top

4. Add Ions

gmx grompp \
    -f ions.mdp \
    -c protein_solv.gro \
    -p topol.top \
    -o ions.tpr

gmx genion \
    -s ions.tpr \
    -o protein_solv_ions.gro \
    -p topol.top \
    -pname NA \
    -nname CL \
    -neutral
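
The ions.mdp passed to grompp only needs to be a valid parameter file, since this .tpr is used solely for ion placement; a minimization-style file like the minim.mdp sketch in step 5 works. Note that gmx genion prompts interactively for the group of solvent molecules to replace with ions. In batch scripts, pipe the group name in; SOL is the usual name for water added by gmx solvate, but check your index groups:

echo SOL | gmx genion -s ions.tpr -o protein_solv_ions.gro -p topol.top \
    -pname NA -nname CL -neutral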

5. Energy Minimization

gmx grompp \
    -f minim.mdp \
    -c protein_solv_ions.gro \
    -p topol.top \
    -o em.tpr

gmx mdrun -v -deffnm em
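
For reference, a minimal minim.mdp might look like the sketch below. The values are generic starting points, not site-specific recommendations; CHARMM36 systems in particular should follow the MDP settings on the GROMACS with CHARMM36 page:

; minim.mdp - energy minimization (illustrative values)
integrator     = steep      ; steepest-descent minimization
emtol          = 1000.0     ; stop once max force < 1000 kJ/mol/nm
emstep         = 0.01       ; initial step size in nm
nsteps         = 50000      ; upper bound on minimization steps
cutoff-scheme  = Verlet
coulombtype    = PME        ; particle-mesh Ewald electrostatics
rcoulomb       = 1.0        ; short-range electrostatic cutoff (nm)
rvdw           = 1.0        ; van der Waals cutoff (nm)
pbc            = xyz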

6. Equilibration and Production MD

Prepare your production .tpr, then submit via SLURM (see below):

gmx grompp \
    -f md.mdp \
    -c npt.gro \
    -t npt.cpt \
    -p topol.top \
    -o md.tpr
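
The npt.gro and npt.cpt inputs above come from NVT and NPT equilibration runs, which follow the same grompp/mdrun pattern. A sketch, assuming you have prepared nvt.mdp and npt.mdp files (-r supplies the reference coordinates for the position restraints typically applied during equilibration):

gmx grompp -f nvt.mdp -c em.gro -r em.gro -p topol.top -o nvt.tpr
gmx mdrun -v -deffnm nvt

gmx grompp -f npt.mdp -c nvt.gro -r nvt.gro -t nvt.cpt -p topol.top -o npt.tpr
gmx mdrun -v -deffnm npt

Long equilibration runs should be submitted through SLURM like production runs.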

Example SLURM Job Scripts

Single Node

#!/bin/bash
#SBATCH --job-name=gromacs_md
#SBATCH --partition=public
#SBATCH --qos=public
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:a100:1
#SBATCH --mem=64G
#SBATCH --time=24:00:00
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err

module purge
module load gromacs/2025.4-gpu-mpi-charm

mpirun -np $SLURM_NTASKS gmx_mpi mdrun \
    -s md.tpr \
    -deffnm md \
    -ntomp $SLURM_CPUS_PER_TASK \
    -nb gpu \
    -pme gpu \
    -bonded gpu \
    -update gpu

Multi-Node

#!/bin/bash
#SBATCH --job-name=gromacs_md_multinode
#SBATCH --partition=public
#SBATCH --qos=public
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:a100:1
#SBATCH --mem=64G
#SBATCH --time=48:00:00
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err

module purge
module load gromacs/2025.4-gpu-mpi-charm

mpirun -np $SLURM_NTASKS gmx_mpi mdrun \
    -s md.tpr \
    -deffnm md \
    -ntomp $SLURM_CPUS_PER_TASK \
    -nb gpu \
    -pme gpu \
    -bonded gpu

Users with access to private GPU nodes should substitute their partition and QOS values.
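
If a job reaches its walltime before the simulation finishes, mdrun's periodic checkpoint file (md.cpt with the -deffnm setting above) allows a clean restart. Resubmit the same script with -cpi added to the mdrun line, for example:

mpirun -np $SLURM_NTASKS gmx_mpi mdrun \
    -s md.tpr \
    -deffnm md \
    -cpi md.cpt \
    -ntomp $SLURM_CPUS_PER_TASK \
    -nb gpu \
    -pme gpu \
    -bonded gpu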

Useful Post-Run Commands

# Check energy conservation
gmx energy -f md.edr -o energy.xvg

# Remove periodic boundary conditions and center the protein
gmx trjconv -s md.tpr -f md.xtc -o md_nojump.xtc -pbc nojump -center

# Compute RMSD
gmx rms -s md.tpr -f md_nojump.xtc -o rmsd.xvg

# Compute RMSF per residue
gmx rmsf -s md.tpr -f md_nojump.xtc -o rmsf.xvg -res
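
These tools prompt interactively for energy terms or index groups. In scripts, pipe the selections in; the group and term names below are common defaults and may differ for your system:

# Select the Potential energy term non-interactively
echo Potential | gmx energy -f md.edr -o energy.xvg

# trjconv asks for a centering group, then an output group
printf "Protein\nSystem\n" | gmx trjconv -s md.tpr -f md.xtc \
    -o md_nojump.xtc -pbc nojump -center

# rms asks for a least-squares fit group, then the RMSD group
printf "Backbone\nBackbone\n" | gmx rms -s md.tpr -f md_nojump.xtc -o rmsd.xvg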

Additional Resources