
Conversation

@ryan-steed-usa
Contributor

This change might restore support for Maxwell and Pascal architectures: the newer cu129 PyTorch wheels reportedly no longer target those pre-Volta compute capabilities, while the cu126 wheels still include them.

- Updated GPU dependency from torch==2.8.0+cu129 to torch==2.8.0+cu126 in pyproject.toml
- Changed PyTorch CUDA index URL from https://download.pytorch.org/whl/cu129 to https://download.pytorch.org/whl/cu126
- This keeps the same PyTorch version (2.8.0) while ensuring compatibility with the CUDA 12.6 runtime
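In pyproject.toml terms, the change amounts to something like the following sketch. The table names, the `gpu` extra, and the `tool.uv.index` block are illustrative assumptions — the project's actual layout may declare the dependency and index differently:

```toml
# Hypothetical pyproject.toml fragment -- section and extra names are illustrative.
[project.optional-dependencies]
# GPU extra pinned to the CUDA 12.6 wheel instead of the cu129 one
gpu = ["torch==2.8.0+cu126"]

[[tool.uv.index]]
# Extra package index serving the CUDA 12.6 PyTorch wheels
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true
```

The key point is that both the wheel's local version tag (`+cu126`) and the index URL (`/whl/cu126`) must agree, otherwise the resolver will not find a matching wheel.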
@ryan-steed-usa
Contributor Author

Closes #406

@ryan-steed-usa ryan-steed-usa changed the title fix: update PyTorch CUDA version from cu129 to cu126 fix: downgrade PyTorch CUDA version from cu129 to cu126 Nov 1, 2025
@remsky
Owner

remsky commented Nov 5, 2025

Hey @ryan-steed-usa, is this still a draft?

@ryan-steed-usa
Contributor Author

Hi @remsky, I was hoping for confirmation from a Maxwell or Pascal CUDA user but everything seems to work containerized with my Ada Lovelace GPUs. Otherwise I think it's ready to go.

@ryan-steed-usa ryan-steed-usa marked this pull request as ready for review November 5, 2025 03:40