# AMBER (torchani-amber)
Status: supported upstream with caveats.
AIMNet2 (and Nutmeg) are supported in AMBER through the torchani-amber integration from the Roitberg group. Despite the `torchani-` prefix, the project explicitly handles AIMNet2 and Nutmeg models in addition to ANI-family potentials. Reference paper: *Bridging neural network potentials and classical biomolecular simulations*.
## Mechanism
torchani-amber is a compiled-in C++/Fortran integration, not a plugin. It hooks into AMBER's existing external-potential paths:
| Mode | AMBER mechanism | Use case |
|---|---|---|
| Full ML | `iextpot = 1` + `&extpot` (`extprog = "TORCHANI"`) | All-atom ML simulation |
| ML/MM | `ifqnt = 1` + `&qmmm` (`qm_theory = 'EXTERN'`) | QM/MM with the QM region treated by the NNP |
Both modes are configured entirely in the mdin file; no Python code runs at simulation time.
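Because everything is plain Fortran-namelist text, the input can be generated programmatically. The helper below is a hypothetical, stdlib-only sketch (not part of AMBER or torchani-amber) that emits the full-ML mdin sections used on this page:

```python
# Hypothetical helper: render Fortran-namelist blocks for a full-ML
# torchani-amber run. The keys and values mirror the examples on this
# page; the helper itself is illustrative only.
def namelist(name, **params):
    lines = [f"&{name}"]
    for key, value in params.items():
        if isinstance(value, bool):          # Fortran logicals
            value = ".true." if value else ".false."
        elif isinstance(value, str):         # quoted strings
            value = f'"{value}"'
        lines.append(f"  {key} = {value},")
    lines.append("/")
    return "\n".join(lines)

mdin = "\n".join([
    namelist("cntrl", iextpot=1),
    namelist("extpot", extprog="TORCHANI"),
    namelist("ani", model_type="aimnet2",
             use_double_precision=True, use_cuda_device=True),
])
print(mdin)
```

Writing `mdin` to a file then gives an input suitable for `sander -i`.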
## Install
Not pip-installable. The integration must be built into AMBER from source:
- Clone `torchani-amber` (`git clone --recurse-submodules ...`).
- Build its bundled `torchani_sandbox` plus the C++ extensions (`run-cmake`).
- Build AmberTools 25/26 from source with `CMAKE_PREFIX_PATH` pointing at the `torchani-amber` install; AMBER auto-links it.
- For ML/MM where the QM region has a non-zero net charge (i.e. `qm_charge != 0`) with AIMNet2 or Nutmeg, AmberTools 25 needs an additional patch (replace `amber_src/AmberTools/sander/qm2_extern_torchani_module.F90`). AmberTools 26 includes the patch.
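The steps above can be sketched as a shell sequence. Directory names, the repository URL, and the exact CMake invocation are assumptions; consult the upstream torchani-amber README for the authoritative commands.

```shell
# Sketch only: paths, URL placeholder, and flags are illustrative.
git clone --recurse-submodules <torchani-amber-url>
cd torchani-amber
./run-cmake                     # builds torchani_sandbox + C++ extensions

# Then build AmberTools from source, pointing CMake at the install above
cd /path/to/amber_src/build
cmake .. -DCMAKE_PREFIX_PATH=/path/to/torchani-amber/install
make install                    # AMBER auto-links the torchani libraries
```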
A conda environment recipe (PyTorch, CUDA, GFortran, OpenMPI) is provided in the upstream repo.
## User-facing input
Minimal AIMNet2 example (full ML):
```
&cntrl
  iextpot = 1,
/
&extpot
  extprog = "TORCHANI",
/
&ani
  model_type = "aimnet2",
  use_double_precision = .true.,
  use_cuda_device = .true.,
/
```
ML/MM example with electrostatic embedding via AIMNet2's predicted atomic charges:
```
&cntrl
  ifqnt = 1,
/
&qmmm
  qm_theory = "EXTERN",
/
&ani
  model_type = "aimnet2",
  mlmm_coupling = 1,
/
```
The `mlmm_coupling` keyword selects the QM/MM coupling scheme. The exact embedding details (which AIMNet2 charges are used, whether MM point charges polarise the QM region within a step, etc.) are documented in the upstream torchani-amber reference paper and source.
`model_type` may also be a full path to a TorchScript (jit-compiled) `.pt` file. The v2 `.pt` assets shipped in `aimnet/calculators/assets/` are `torch.save` state-dict archives, not TorchScript; they cannot currently be passed as `model_type` directly. The TorchScript-export work in this repo would unblock shipping a self-contained AIMNet2 jit asset for torchani-amber rather than relying on the upstream-bundled model.
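Both formats are zip archives, so the two kinds of `.pt` file can be told apart without importing torch: archives written by `torch.jit.save` carry a `constants.pkl` entry and a `code/` tree, while plain `torch.save` archives do not. This relies on PyTorch's current (undocumented) serialization layout, so treat the sketch below as a heuristic, not a contract. The test archives at the bottom are fabricated stand-ins with the same top-level entries real files would have:

```python
import zipfile

def looks_like_torchscript(path):
    """Heuristic: torch.jit.save archives contain constants.pkl + code/;
    torch.save state-dict archives only contain data.pkl.
    Relies on PyTorch's current, undocumented zip layout."""
    if not zipfile.is_zipfile(path):
        return False  # legacy pre-zip torch.save format, or not a .pt at all
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
    return any(n.endswith("constants.pkl") for n in names)

# Fabricated stand-in archives to exercise the heuristic:
with zipfile.ZipFile("jit_model.pt", "w") as zf:
    zf.writestr("model/constants.pkl", b"")
    zf.writestr("model/code/__torch__.py", b"")
with zipfile.ZipFile("state_dict.pt", "w") as zf:
    zf.writestr("archive/data.pkl", b"")

print(looks_like_torchscript("jit_model.pt"))   # True
print(looks_like_torchscript("state_dict.pt"))  # False
```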
## Caveats
- Hard rebuild of AmberTools required. Once linked, the AMBER binaries depend on the torchani libraries even for non-ML runs.
- Supported AMBER versions: AmberTools 25 (with patch) and 26.
- The integration is tied to the `torchani_sandbox` build; the AIMNet2 model selection is constrained to what `torchani-amber` exposes upstream until a custom jit `.pt` path is provided.
## Model coverage
Upstream, torchani-amber exposes AIMNet2 as `model_type = "aimnet2"`. The NSE (open-shell) and rxn (reactive) AIMNet2 families are not currently exposed by the upstream integration.
## Alternatives
If a full AmberTools rebuild is not acceptable, classical "AMBER force field plus ML region" workflows can sometimes be done via OpenMM + `openmm-ml` on AMBER-format topologies (loaded via ParmEd or OpenMM's `AmberPrmtopFile`). That route is pip-installable and needs no AMBER source build.
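A minimal sketch of that alternative route, assuming `system.prmtop`/`system.inpcrd` files exist and using the ANI-2x potential that `openmm-ml` ships (AIMNet2 availability through `MLPotential` is not guaranteed and would need checking against the openmm-ml model registry):

```python
# Sketch: AMBER-format topology + ML potential via OpenMM, no AMBER build.
# File names are placeholders; 'ani2x' stands in for whichever ML potential
# openmm-ml actually provides in your environment.
from openmm.app import AmberPrmtopFile, AmberInpcrdFile
from openmmml import MLPotential

prmtop = AmberPrmtopFile("system.prmtop")
inpcrd = AmberInpcrdFile("system.inpcrd")

# Full-ML system: every atom is handled by the neural network potential
potential = MLPotential("ani2x")
system = potential.createSystem(prmtop.topology)
```

For an ML/MM-style setup, `openmm-ml` also offers `MLPotential.createMixedSystem`, which takes a classical OpenMM `System` plus the indices of the atoms to promote to the ML region.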