
PyTorch 2.6.0 Compatibility Fix for Kaolin #865

Open
ansorre opened this issue Mar 8, 2025 · 0 comments

Issue Summary

Kaolin currently fails to compile against PyTorch 2.6.0 because of an API change in PyTorch. With minimal modifications to just three files, however, Kaolin can be successfully built and used with PyTorch 2.6.0.

Error Details

When attempting to build Kaolin with PyTorch 2.6.0, several CUDA compilation errors occur with the message:

error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists

This happens because Tensor::type() returns an at::DeprecatedTypeProperties, which the AT_DISPATCH_FLOATING_TYPES_AND_HALF macro accepted in earlier releases but no longer does in PyTorch 2.6.0; the macro now requires a c10::ScalarType.
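
To make the mismatch concrete, here is a minimal illustration of the two accessors (a toy tensor, not Kaolin code):

at::Tensor t = at::zeros({16}, at::kFloat);

// t.type() returns a const at::DeprecatedTypeProperties&, which the
// AT_DISPATCH_* macros no longer accept as of PyTorch 2.6.0.
// t.scalar_type() returns a c10::ScalarType, which is what they expect.
c10::ScalarType st = t.scalar_type();  // c10::ScalarType::Float here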

Solution

The fix is straightforward and involves modifying just three files by replacing all occurrences of .type() with .scalar_type():

  1. kaolin/csrc/ops/spc/query_cuda.cu
  2. kaolin/csrc/ops/spc/point_utils_cuda.cu
  3. kaolin/csrc/render/spc/raytrace_cuda.cu

For example, in query_cuda.cu, lines like:

AT_DISPATCH_FLOATING_TYPES_AND_HALF(query_coords.type(), "query_cuda", ([&] {
    // Implementation
}));

Should be changed to:

AT_DISPATCH_FLOATING_TYPES_AND_HALF(query_coords.scalar_type(), "query_cuda", ([&] {
    // Implementation
}));
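
For context, here is a minimal self-contained sketch of the fixed pattern as it might appear in a .cu file. The kernel and launcher below are hypothetical stand-ins to show the shape of the code, not Kaolin's actual implementation:

#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

namespace {

// Hypothetical elementwise kernel, used only to illustrate the dispatch.
template <typename scalar_t>
__global__ void scale_kernel(scalar_t* data, scalar_t factor, int64_t n) {
    int64_t idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        data[idx] *= factor;
    }
}

}  // namespace

void scale_cuda(at::Tensor input, double factor) {
    const int64_t n = input.numel();
    const int threads = 256;
    const int blocks = static_cast<int>((n + threads - 1) / threads);
    // .scalar_type() yields the c10::ScalarType the macro expects;
    // scalar_t is defined by the macro for each dispatched dtype.
    AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "scale_cuda", ([&] {
        scale_kernel<scalar_t><<<blocks, threads>>>(
            input.data_ptr<scalar_t>(), static_cast<scalar_t>(factor), n);
    }));
}

Note that .scalar_type() has been available for many PyTorch releases, so this form also builds against versions older than 2.6.0; no version guard is needed.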

Testing

After applying these changes, Kaolin successfully compiles and installs with PyTorch 2.6.0 on Windows, and all Kaolin nodes in ComfyUI load without errors.

Recommendation

It would be beneficial to update the Kaolin codebase to use .scalar_type() instead of .type(), since .scalar_type() is the accessor PyTorch recommends and this keeps Kaolin compatible with future releases.
