Commits (81), all authored by stefdoerr:
7e69b65  cleaned up installation (Dec 11, 2025)
016a744  keep just brute tests (Dec 11, 2025)
79e6754  add triton dep (Dec 11, 2025)
a231857  first triton and pytorch implementations (Dec 11, 2025)
c60cd74  fix assertion error (Dec 12, 2025)
37fea40  fixed last issue (Dec 12, 2025)
160bc27  less computations (Dec 12, 2025)
5e2f6d9  reorganized code. added first cell implementation (Dec 12, 2025)
c16651d  upd (Dec 12, 2025)
8a05c87  fixed all tests except one (Dec 12, 2025)
ffa7b4a  added working cell implementation (Dec 12, 2025)
161ba94  working with larger block_atoms (Dec 12, 2025)
044edff  more efficient cell (Dec 12, 2025)
621fee1  update the benchmark suite (Dec 12, 2025)
98a45a9  shared triton implementation (Dec 12, 2025)
c1a5957  update issue in benchmark (Dec 12, 2025)
eca967d  cleanup (Dec 15, 2025)
eea2831  cleanup (Dec 15, 2025)
dd021d7  fix benchmark (Dec 15, 2025)
40e1fc2  cell implementation closer to CUDA (Dec 15, 2025)
3c692d4  use a while loop instead of breaking which doesn't work in triton (Dec 15, 2025)
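The commit above points at a real Triton restriction: kernel control flow cannot use `break`, so an early-exit loop has to fold its exit condition into a `while` predicate. The sketch below shows the pattern in plain Python (the PR's actual kernel code is not reproduced here); inside a Triton kernel the same shape applies with `tl.*` operations, and the function and variable names are illustrative only.

```python
# Rewrite of `for ...: if cond: break` into a single structured `while`
# exit, the transformation required because Triton kernels do not
# support `break`. Shown as ordinary Python for clarity.

def find_first_at_least(values, threshold):
    """Return the index of the first element >= threshold, or -1."""
    i = 0
    found = -1
    # The exit flag (`found`) is part of the loop condition instead of
    # a `break` statement inside the body.
    while i < len(values) and found < 0:
        if values[i] >= threshold:
            found = i
        else:
            i += 1
    return found
```

The same flag-in-predicate trick generalizes to any scan that would otherwise break out of a neighbor-cell loop early.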
2ecd6da  better printing (Dec 15, 2025)
f3dbc33  initial sorted cell list impl (Dec 15, 2025)
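For readers unfamiliar with the "sorted cell list" named in the commit above, here is a minimal NumPy sketch of the idea — not the PR's Triton implementation. Atoms are binned into cells of edge at least `cutoff`, sorted so each cell's atoms are contiguous, and each atom then checks only the 27 surrounding cells. The function name, the orthorhombic box, and the absence of periodic boundaries are all simplifying assumptions.

```python
import numpy as np

def cell_list_pairs(pos, box, cutoff):
    """Return sorted unique (i, j) pairs, i < j, with |pos[i]-pos[j]| < cutoff.

    Toy sorted cell list: bin atoms into cells, sort by linear cell index
    so each cell's atoms are contiguous, then search only the 3x3x3 block
    of cells around each atom. No periodic boundary conditions.
    """
    ncell = np.maximum((box / cutoff).astype(int), 1)     # cells per axis
    cell_of = np.clip((pos / (box / ncell)).astype(int), 0, ncell - 1)
    flat = np.ravel_multi_index(cell_of.T, ncell)         # linear cell index
    order = np.argsort(flat, kind="stable")               # atoms sorted by cell
    flat_sorted = flat[order]
    pairs = []
    for a in range(len(pos)):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    c = cell_of[a] + (dx, dy, dz)
                    if np.any(c < 0) or np.any(c >= ncell):
                        continue  # cell outside the box
                    f = np.ravel_multi_index(c, ncell)
                    # contiguous slice of atoms belonging to cell f
                    lo = np.searchsorted(flat_sorted, f, side="left")
                    hi = np.searchsorted(flat_sorted, f, side="right")
                    for b in order[lo:hi]:
                        if b > a and np.linalg.norm(pos[a] - pos[b]) < cutoff:
                            pairs.append((a, int(b)))
    return sorted(pairs)
```

The sort is what makes the structure GPU-friendly: atoms of one cell sit in a contiguous memory range, so a kernel can load them with coalesced accesses — which is what the later "memory coalesced cell neighbor impl" commit alludes to.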
7e544c8  fix benchmark printing (Dec 15, 2025)
a56524c  nearly working cell impl (Dec 15, 2025)
a4b4564  wip (Dec 15, 2025)
ade0732  wip (Dec 15, 2025)
96bdd98  different cell impl (Dec 16, 2025)
c14981a  one more cell implementation (Dec 16, 2025)
c07c6ab  cuda graph comp (Dec 16, 2025)
99d2775  tiled version (Dec 16, 2025)
5d3a0b2  another impl (Dec 16, 2025)
0fb377e  faster version (Dec 16, 2025)
0658dbc  memory coalesced cell neighbor impl (Dec 16, 2025)
bd93ed4  cleanup and keep just the last cell version (Dec 16, 2025)
bf2c761  removing shared memory implementation (Dec 16, 2025)
8fffeae  cleanup and file headers (Dec 16, 2025)
80b871d  remove CUDA implementations (Dec 16, 2025)
1345c2d  missing function (Dec 16, 2025)
874271e  fix for torch script (Dec 16, 2025)
ca1bc3f  updating installation isntructions (Dec 16, 2025)
7223b88  making triton optional (Dec 16, 2025)
eae391c  install different triton package on windows and none on OSX (Dec 16, 2025)
3ff58e2  simplify CI deployment and testing (Dec 16, 2025)
82df594  try without lock file (Dec 16, 2025)
697ba57  fix for flake? (Dec 16, 2025)
e876132  fix python version (Dec 16, 2025)
9fed54a  don't use cuda on ARM machines (Dec 16, 2025)
28eda0e  don't try except in compilable code (Dec 16, 2025)
a6e4e87  no triton on aarch64 (Dec 16, 2025)
7fbb4e2  cannot use delayed imports with torchscript (Dec 16, 2025)
1023c42  merged main (Dec 16, 2025)
807733c  add ase as a dep (Dec 16, 2025)
f12b4d8  unfreeze torch version (Dec 17, 2025)
53652e8  fix the OSX issue with MPS not supporting float64 (Dec 17, 2025)
51dd170  added test for scripting, then compiling (Dec 17, 2025)
d2ac996  fix cuda graphing of torchscripted models. update tests (Dec 17, 2025)
1a535e6  restore script+compile test (Dec 17, 2025)
1dd9fd1  get rid of setup_for_compile_cudagraphs (Dec 17, 2025)
56c5e49  fix test warnings (Dec 17, 2025)
2ae3370  undo some changes to benchmarks (Dec 17, 2025)
390011c  rename caffeine (Dec 17, 2025)
54e7ac1  calculators should warmup before recompiling (Dec 17, 2025)
d5c9aba  catch in output_modules also the case where we are compiling (Dec 17, 2025)
d1273bf  int32 dtype for neighbor list and num_pairs (Dec 17, 2025)
36c335b  added test for ASE calculator (Dec 17, 2025)
6d37719  no need to trigger compilation anymore (Dec 17, 2025)
6b8611a  no need to trigger compilation (Dec 17, 2025)
1237cba  skip cuda test if no cuda available (Dec 17, 2025)
4f9a600  skip on windows due to missing compiler (Dec 17, 2025)
9179ad7  prevent triton recompilation with changing number of atoms and cutoffs (Dec 18, 2025)
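Triton specializes each kernel on its compile-time constants, so naively passing the atom count or cutoff as a constant triggers a fresh compilation whenever either value changes. The commit above addresses exactly that; a common remedy — sketched here as an assumption about the approach, not a reproduction of the PR's code — is to bucket dynamic sizes (e.g., round up to the next power of two) so many inputs share one compiled kernel, and to pass scalars like the cutoff through memory rather than as kernel constants.

```python
def pad_to_bucket(n: int) -> int:
    """Round a dynamic size up to the next power of two.

    A kernel compiled for a bucketed size can be reused by any input that
    pads up to the same bucket, capping the number of distinct Triton
    compilations at O(log n_max) instead of one per observed size.
    """
    if n <= 1:
        return 1
    return 1 << (n - 1).bit_length()
```

Padded slots are then masked out inside the kernel, which costs a few wasted lanes but avoids compilation stalls in the middle of a simulation.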
f80d972  use triton_wrap for compatibility with more pytorch features (Dec 18, 2025)
6a56716  make scatter compilable, make box a registered buffer of OptimizedDis… (Dec 18, 2025)
5aa41a9  fix backwards compatibility (Dec 18, 2025)
7431f08  remove constraint inserted for exporting (Dec 18, 2025)
ac98725  undo (Dec 18, 2025)
04588ca  revert change to scatter (Dec 18, 2025)
4118e16  cleanup (Dec 19, 2025)
da53dcc  optimized the pytorch brute neighborlist implementation to not do O(n… (Dec 19, 2025)
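The commit message above is truncated, but a standard way to optimize a brute-force PyTorch neighbor list is to avoid materializing the full n × n candidate matrix at once. The sketch below shows the pattern in NumPy — chunking the rows so peak memory is O(block · n) — as an assumed illustration of the technique, not the PR's actual code; the function name and `block` parameter are invented for this example.

```python
import numpy as np

def brute_pairs_chunked(pos, cutoff, block=64):
    """Return all (i, j), i < j, with |pos[i]-pos[j]| < cutoff.

    Still brute force (every pair is examined), but distances are
    evaluated one block of rows at a time, so peak memory is
    O(block * n) rather than the full n x n distance matrix.
    """
    n = len(pos)
    out = []
    for start in range(0, n, block):
        stop = min(start + block, n)
        # (stop-start, n) distance slab for this block of rows
        d = np.linalg.norm(pos[start:stop, None, :] - pos[None, :, :], axis=-1)
        bi, bj = np.nonzero(d < cutoff)
        for i, j in zip(bi + start, bj):
            if i < j:  # keep each unordered pair once, drop self-pairs
                out.append((int(i), int(j)))
    return out
```

The time complexity is unchanged; only the memory high-water mark drops, which is usually what makes brute force viable for mid-sized systems on a GPU.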
c7eab36  simplify (Dec 19, 2025)
17a0c52  changing the neighbor arrays from torch.int32 to torch.long had a sig… (Dec 19, 2025)
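The last commit message is truncated, but the dtype choice it mentions has a measurable footprint: `torch.long` (int64) index arrays move twice the bytes of int32 for the same neighbor pairs, which matters for a bandwidth-bound kernel. A quick NumPy demonstration of the size difference (the pair count is an arbitrary example):

```python
import numpy as np

# Neighbor lists are index arrays, so their dtype directly scales the
# memory traffic: int64 ("torch.long") uses twice the bytes of int32.
n_pairs = 1_000_000
pairs_i32 = np.zeros((2, n_pairs), dtype=np.int32)
pairs_i64 = np.zeros((2, n_pairs), dtype=np.int64)
print(pairs_i32.nbytes, pairs_i64.nbytes)  # 8000000 16000000
```

Whether the sign of the effect here was a speedup or a slowdown is cut off in the message, so no claim is made beyond the 2x storage ratio.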
139 changes: 12 additions & 127 deletions .github/workflows/publish.yml
@@ -6,166 +6,51 @@ on:

jobs:
build:
name: Build wheels on ${{ matrix.os }}-${{ matrix.accelerator }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, ubuntu-24.04-arm, windows-2022, macos-latest]
accelerator: [cpu, cu118, cu126] #, cu128]
exclude:
- os: ubuntu-24.04-arm
accelerator: cu118
- os: ubuntu-24.04-arm
accelerator: cu126
# - os: ubuntu-24.04-arm
# accelerator: cu128
- os: macos-latest
accelerator: cu118
- os: macos-latest
accelerator: cu126
# - os: macos-latest
# accelerator: cu128
name: Create source distribution
runs-on: ubuntu-latest

steps:
- name: Free space of Github Runner (otherwise it will fail by running out of disk)
if: matrix.os == 'ubuntu-latest'
run: |
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf "/usr/local/share/boost"
sudo rm -rf "/usr/local/.ghcup"
sudo rm -rf "/usr/local/julia1.9.2"
sudo rm -rf "/usr/local/lib/android"
sudo rm -rf "$AGENT_TOOLSDIRECTORY"

- uses: actions/checkout@v4
- uses: actions/checkout@v5

- uses: actions/setup-python@v5
- uses: actions/setup-python@v6
with:
python-version: "3.13"

- name: Install cibuildwheel
run: python -m pip install cibuildwheel==3.1.3

- name: Activate MSVC
uses: ilammy/msvc-dev-cmd@v1
with:
toolset: 14.29
if: matrix.os == 'windows-2022'
run: pip install build

- name: Build wheels
if: matrix.os != 'windows-2022'
shell: bash
run: python -m cibuildwheel --output-dir wheelhouse
env:
ACCELERATOR: ${{ matrix.accelerator }}
CPU_TRAIN: ${{ runner.os == 'macOS' && 'true' || 'false' }}

- name: Build wheels
if: matrix.os == 'windows-2022'
shell: cmd # Use cmd on Windows to avoid bash environment taking priority over MSVC variables
run: python -m cibuildwheel --output-dir wheelhouse
env:
ACCELERATOR: ${{ matrix.accelerator }}
DISTUTILS_USE_SDK: "1" # Windows requires this to use vc for building
SKIP_TORCH_COMPILE: "true"
- name: Build pypi package
run: python -m build --sdist

- uses: actions/upload-artifact@v4
with:
name: ${{ matrix.accelerator }}-cibw-wheels-${{ matrix.os }}-${{ strategy.job-index }}
path: ./wheelhouse/*.whl
name: source_dist
path: dist/*.tar.gz

publish-to-public-pypi:
publish-to-pypi:
name: >-
Publish Python 🐍 distribution 📦 to PyPI
needs:
- build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/torchmd-net
permissions:
id-token: write # IMPORTANT: mandatory for trusted publishing
strategy:
fail-fast: false
matrix:
accelerator: [cpu, cu118, cu126] #, hip, cu124, cu126, cu128]

steps:
- name: Download all the dists
uses: actions/download-artifact@v4
with:
pattern: "${{ matrix.accelerator }}-cibw-wheels*"
path: dist/
merge-multiple: true

- name: Publish distribution 📦 to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.TMDNET_PYPI_API_TOKEN }}
skip-existing: true

# publish-to-accelera-pypi:
# name: >-
# Publish Python 🐍 distribution 📦 to Acellera PyPI
# needs:
# - build
# runs-on: ubuntu-latest
# permissions: # Needed for GCP authentication
# contents: "read"
# id-token: "write"
# strategy:
# fail-fast: false
# matrix:
# accelerator: [cpu, cu118, cu126, cu128]

# steps:
# - uses: actions/checkout@v4 # Needed for GCP authentication for some reason

# - name: Set up Cloud SDK
# uses: google-github-actions/auth@v2
# with:
# workload_identity_provider: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }}
# service_account: ${{ secrets.GCP_PYPI_SERVICE_ACCOUNT }}

# - name: Download all the dists
# uses: actions/download-artifact@v4
# with:
# pattern: "${{ matrix.accelerator }}-cibw-wheels*"
# path: dist/
# merge-multiple: true

# - name: Publish distribution 📦 to Acellera PyPI
# run: |
# pip install build twine keyring keyrings.google-artifactregistry-auth
# pip install -U packaging
# twine upload --repository-url https://us-central1-python.pkg.dev/pypi-packages-455608/${{ matrix.accelerator }} dist/* --verbose --skip-existing

# publish-to-official-pypi:
# name: >-
# Publish Python 🐍 distribution 📦 to PyPI
# needs:
# - build
# runs-on: ubuntu-latest
# environment:
# name: pypi
# url: https://pypi.org/p/torchmd-net
# permissions:
# id-token: write # IMPORTANT: mandatory for trusted publishing

# steps:
# - name: Download all the dists
# uses: actions/download-artifact@v4
# with:
# pattern: "cu118-cibw-wheels*"
# path: dist/
# merge-multiple: true

# - name: Publish distribution 📦 to PyPI
# uses: pypa/gh-action-pypi-publish@release/v1
# with:
# password: ${{ secrets.TMDNET_PYPI_API_TOKEN }}
# skip_existing: true
skip_existing: true

github-release:
name: >-
85 changes: 10 additions & 75 deletions .github/workflows/test.yml
@@ -17,93 +17,28 @@ jobs:
["ubuntu-latest", "ubuntu-22.04-arm", "macos-latest", "windows-2022"]
python-version: ["3.13"]

defaults: # Needed for conda
run:
shell: bash -l {0}

steps:
- name: Check out
uses: actions/checkout@v4

- uses: conda-incubator/setup-miniconda@v3
with:
python-version: ${{ matrix.python-version }}
channels: conda-forge
conda-remove-defaults: "true"
if: matrix.os != 'macos-13'
uses: actions/checkout@v5

- uses: conda-incubator/setup-miniconda@v3
- name: Install uv
uses: astral-sh/setup-uv@v7
with:
python-version: ${{ matrix.python-version }}
channels: conda-forge
mamba-version: "*"
conda-remove-defaults: "true"
if: matrix.os == 'macos-13'

- name: Install OS-specific compilers
run: |
if [[ "${{ matrix.os }}" == "ubuntu-22.04-arm" ]]; then
conda install gxx --channel conda-forge --override-channels
elif [[ "${{ runner.os }}" == "Linux" ]]; then
conda install gxx --channel conda-forge --override-channels
elif [[ "${{ runner.os }}" == "macOS" ]]; then
conda install llvm-openmp pybind11 --channel conda-forge --override-channels
echo "CC=clang" >> $GITHUB_ENV
echo "CXX=clang++" >> $GITHUB_ENV
elif [[ "${{ runner.os }}" == "Windows" ]]; then
conda install vc vc14_runtime vs2015_runtime --channel conda-forge --override-channels
fi

- name: List the conda environment
run: conda list

- name: Install testing packages
run: conda install -y -c conda-forge flake8 pytest psutil python-build
- name: Install the project
run: uv sync --all-extras --dev

- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
uv run flake8 ./torchmdnet --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

- name: Set pytorch index
run: |
if [[ "${{ runner.os }}" == "Windows" ]]; then
mkdir -p "C:\ProgramData\pip"
echo "[global]
extra-index-url = https://download.pytorch.org/whl/cpu" > "C:\ProgramData\pip\pip.ini"
else
mkdir -p $HOME/.config/pip
echo "[global]
extra-index-url = https://download.pytorch.org/whl/cpu" > $HOME/.config/pip/pip.conf
fi

- name: Build and install the package
run: |
if [[ "${{ runner.os }}" == "Windows" ]]; then
export LIB="C:/Miniconda/envs/test/Library/lib"
fi
python -m build
pip install dist/*.whl
env:
ACCELERATOR: "cpu"

# - name: Install nnpops
# if: matrix.os == 'ubuntu-latest' || matrix.os == 'macos-latest'
# run: conda install nnpops --channel conda-forge --override-channels

- name: List the conda environment
run: conda list
uv run flake8 ./torchmdnet --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

- name: Run tests
run: pytest -v -s --durations=10
env:
ACCELERATOR: "cpu"
SKIP_TORCH_COMPILE: ${{ runner.os == 'Windows' && 'true' || 'false' }}
OMP_PREFIX: ${{ runner.os == 'macOS' && '/Users/runner/miniconda3/envs/test' || '' }}
CPU_TRAIN: ${{ runner.os == 'macOS' && 'true' || 'false' }}
LONG_TRAIN: "true"
# For example, using `pytest`
run: uv run pytest tests

- name: Test torchmd-train utility
run: torchmd-train --help
run: uv run torchmd-train --help
20 changes: 9 additions & 11 deletions README.md
@@ -21,19 +21,17 @@ Documentation is available at https://torchmd-net.readthedocs.io


## Installation
TorchMD-Net is available as a pip installable wheel as well as in [conda-forge](https://conda-forge.org/)
TorchMD-Net is available as a pip package as well as in [conda-forge](https://conda-forge.org/)

TorchMD-Net provides builds for CPU-only, CUDA 11 and CUDA 12. CPU versions are only provided as reference,
as the performance will be extremely limited.
Depending on which variant you wish to install, you can install it with one of the following commands:
As TorchMD-Net depends on PyTorch we need to add additional index URLs to the installation command as per [pytorch](https://pytorch.org/get-started/locally/)

```sh
# The following will install the CUDA 11.8 version
pip install torchmd-net-cu11 --extra-index-url https://download.pytorch.org/whl/cu118
# The following will install the CUDA 12.4 version
pip install torchmd-net-cu12 --extra-index-url https://download.pytorch.org/whl/cu124
# The following will install the CPU only version (not recommended)
pip install torchmd-net-cpu --extra-index-url https://download.pytorch.org/whl/cpu
# The following will install TorchMD-Net with PyTorch CUDA 11.8 version
pip install torchmd-net --extra-index-url https://download.pytorch.org/whl/cu118
# The following will install TorchMD-Net with PyTorch CUDA 12.4 version
pip install torchmd-net --extra-index-url https://download.pytorch.org/whl/cu124
# The following will install TorchMD-Net with PyTorch CPU only version (not recommended)
pip install torchmd-net --extra-index-url https://download.pytorch.org/whl/cpu
```

Alternatively it can be installed with conda or mamba with one of the following commands.
@@ -46,7 +44,7 @@ mamba install torchmd-net cuda-version=12.4

### Install from source

TorchMD-Net is installed using pip, but you will need to install some dependencies before. Check [this documentation page](https://torchmd-net.readthedocs.io/en/latest/installation.html#install-from-source).
TorchMD-Net is installed using pip with `pip install -e .` to create an editable install.

## Usage
Specifying training arguments can either be done via a configuration yaml file or through command line arguments directly. Several examples of architectural and training specifications for some models and datasets can be found in [examples/](https://github.com/torchmd/torchmd-net/tree/main/examples). Note that if a parameter is present both in the yaml file and the command line, the command line version takes precedence.
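For completeness, the editable install the updated README describes can be spelled out as shell commands. This is a sketch assembled from the README's own repository URL and pip-index instructions, not text from the PR; the CUDA index URL is optional and should match your local CUDA version (or be dropped for CPU-only).

```sh
# Clone the repository and install it in editable mode.
git clone https://github.com/torchmd/torchmd-net.git
cd torchmd-net
# Add the PyTorch wheel index for your CUDA version, per the README.
pip install -e . --extra-index-url https://download.pytorch.org/whl/cu124
```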