# variant-abi-dependency-example

## Scenarios

```bash
# Make your life easy and just force the environment to resolve to this ;)
export NV_VARIANT_PROVIDER_FORCE_CUDA_DRIVER_VERSION=12.8  # You might need to change this if you want to pick up CUDA 13 wheels
export NV_VARIANT_PROVIDER_FORCE_SM_ARCH=9.0  # This is the safest value - always works!
```

```bash
uv pip install torch==2.8.0 flash-attn
uv pip install torch==2.9.0 flash-attn
uv pip install torch flash-attn
```

```bash
uv pip install torch==2.8.0 && uv pip install flash-attn
uv pip install torch==2.9.0 && uv pip install flash-attn
uv pip install torch && uv pip install flash-attn
```
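After any of the installs above, it can be worth confirming which builds the resolver actually picked. A quick check, assuming the installed distributions are inspectable as usual:

```bash
# Show the resolved versions of both packages
uv pip show torch flash-attn

# Or list everything relevant that landed in the environment
uv pip list | grep -Ei 'torch|flash'
```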

```bash
uv sync
```
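`uv sync` resolves from the project's `pyproject.toml`. If you are reproducing this outside the repo, a minimal sketch of such a file (hypothetical - the repo's actual metadata may differ, e.g. in pins or the Python requirement):

```bash
# Hypothetical pyproject.toml - the repo's real file may differ
cat > pyproject.toml <<'EOF'
[project]
name = "variant-abi-dependency-example"
version = "0.1.0"
requires-python = ">=3.9"
dependencies = [
    "torch",
    "flash-attn",
]
EOF
```

Then re-run `uv sync`.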

## Final Boss

Once the scenarios above work, try a "really complicated scenario":

```bash
export NV_VARIANT_PROVIDER_FORCE_CUDA_DRIVER_VERSION=12.8  # If you set this to CUDA 13, resolution should fail
export NV_VARIANT_PROVIDER_FORCE_SM_ARCH=9.0
uv pip install flash-attn vllm
```
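To see the failure mode called out in the comment above, force a CUDA 13 driver instead (the exact version string below is an assumption; adjust it to whatever the provider expects):

```bash
# Expected to FAIL: vllm ships no CUDA 13 variant, so no valid torch/vllm pair exists
export NV_VARIANT_PROVIDER_FORCE_CUDA_DRIVER_VERSION=13.0
uv pip install flash-attn vllm
```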

This scenario is particularly complicated because:

- Both depend on `torch`
- `flash-attn` allows both `torch==2.8.0` and `torch==2.9.0`, and supports both CUDA 12 and CUDA 13
- `vllm` allows only `torch==2.8.0` and only CUDA 12

So the resolver must identify that the only valid match is:

- vllm: `vllm-0.11.0-py3-none-any-cu128_pyt2.8.0.whl` (the only wheel available here)
- flash-attn: `flash_attn-2.8.0-py3-none-any-cu12_pyt2.8.0.whl`
- torch (one of):
  - `torch-2.8.0-py3-none-any-cuda12.6.whl`
  - `torch-2.8.0-py3-none-any-cuda12.8.whl`
  - the CUDA 13 build is EXCLUDED
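A quick way to confirm the resolver landed on exactly this combination:

```bash
# Expect vllm 0.11.0, flash-attn 2.8.0, and a torch 2.8.0 CUDA 12.x variant
uv pip list | grep -Ei 'torch|flash|vllm'
```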