

Where to host extension(s) #2735

Closed

RainerHeintzmann opened this issue Apr 10, 2025 · 7 comments
Comments

@RainerHeintzmann
Contributor

To make existing packages CUDA-capable, one can use the Adapt mechanism and write an extension.
Such an extension can reside either in the supporting package (e.g. ShiftedArrays.jl) or possibly in the ext folder of the CUDA.jl package.
For ShiftedArrays.jl there were PRs more than a year ago trying to add such CUDA support, but they never led to a merge and a new release:
JuliaArrays/ShiftedArrays.jl#67
One could also construct a package containing only the extension, but that seems to be a clear example of type piracy, i.e. not the right way to go.
The question is therefore whether I should move the file in ext from this PR
JuliaArrays/ShiftedArrays.jl#70
to the ext folder of CUDA.jl and make a PR here.
Similarly, one could do so for the FourierTools.jl package:
bionanoimaging/FourierTools.jl#56

Is this an option? It would be great to finally get this done.
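
For context, the device-agnostic part of such an extension is tiny. A minimal sketch (not the PRs' exact code; it assumes ShiftedArrays' shifts and default accessors and the keyword constructor ShiftedArray(parent, shifts; default)):

using Adapt
using ShiftedArrays: ShiftedArray, shifts, default

# Forward adaptation to the parent array and rebuild the wrapper, so that
# adapt(CuArray, x) converts the underlying storage without any CUDA dependency here.
Adapt.adapt_structure(to, x::ShiftedArray) =
    ShiftedArray(Adapt.adapt(to, parent(x)), shifts(x); default = default(x))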

@vchuravy
Member

My initial reaction was: why is your extension depending on CUDA at all? It should only need Adapt.jl.

Looking at your code, the issue seems to be that you want to define a BroadcastStyle that promotes to CUDA... I wonder if we could have an "Adapt broadcast style".

The goal of Adapt is to not require a GPU dependency.

@RainerHeintzmann
Contributor Author

Yes, you are right. The adapt_structure just forwards to the parent type and does not need any CUDA dependency. However, the BroadcastStyle has to be CUDA-specific, and a few extra tweaks are needed. It would be great if there were an easy, general mechanism for wrapper types such that broadcasting causes getindex to automatically compile into appropriate GPU kernels.
I guess I do not really understand what an "Adapt broadcast style" would be or how to realize it.
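
For illustration, the CUDA-specific piece amounts to something like the following sketch (not the PR's exact code; it relies only on the parent array being the fourth type parameter of ShiftedArray, as in the methods suggested later in this thread):

using CUDA
using ShiftedArrays: ShiftedArray

# Delegate the broadcast style to the CuArray parent, so fused broadcasts over
# the wrapper compile to GPU kernels instead of falling back to the CPU path.
Base.Broadcast.BroadcastStyle(::Type{<:ShiftedArray{<:Any,<:Any,<:Any,S}}) where {S<:CuArray} =
    Base.Broadcast.BroadcastStyle(S)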

@maleadt
Member

maleadt commented Apr 11, 2025

The question is therefore whether I should move the file in ext from this PR
JuliaArrays/ShiftedArrays.jl#70
to the ext folder of CUDA.jl and make a PR here.

I'm not a fan of that, no. That code would then either go untested or require us to maintain it (while being unfamiliar with it), neither of which is ideal. The Enzyme.jl extension has demonstrated how problematic that can be in the past.

I do agree that it would be nice if these extensions were back-end agnostic (e.g., depending instead on GPUArrays for the AbstractGPUArray type), but that would require somebody to delve into the broadcast code again...
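
As a rough sketch of that back-end-agnostic direction (an assumption, not code from any of the PRs), the CUDA-specific method above could dispatch on AbstractGPUArray instead, so a single extension would cover CUDA, AMDGPU, Metal, etc.:

using GPUArrays: AbstractGPUArray
using ShiftedArrays: ShiftedArray

# Same delegation as the CUDA-specific version, but depending only on GPUArrays.jl.
Base.Broadcast.BroadcastStyle(::Type{<:ShiftedArray{<:Any,<:Any,<:Any,S}}) where {S<:AbstractGPUArray} =
    Base.Broadcast.BroadcastStyle(S)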

@RainerHeintzmann
Contributor Author

Thanks, I can understand your concerns. The current runtests.jl does perform extensive testing, including on CUDA if the system supports it, but I can see that maintenance is an issue.
The actual code is fairly small, but there are probably also a number of functions that are not automatically supported correctly: wherever standard Julia iterates through indices one by one, there is a potential issue.
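
To make that concrete: generic fallbacks that read elements one at a time trigger scalar indexing, which CUDA.jl disallows by default outside interactive sessions. A minimal example of the failure mode:

using CUDA

a = CUDA.rand(4)
s = 0f0
for i in eachindex(a)
    s += a[i]  # scalar getindex on a CuArray: errors in scripts, warns once in the REPL
end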

@vchuravy
Member

Regarding BroadcastStyle, could you define:

# Dispatch on the wrapper type (not an instance), matching unwrap_type below:
Adapt.parent_type(::Type{<:ShiftedArray{<:Any,<:Any,<:Any,Parent}}) where Parent = Parent
Adapt.unwrap_type(W::Type{<:ShiftedArray}) = Adapt.unwrap_type(Adapt.parent_type(W))

# Qualify the name so this extends Base's BroadcastStyle instead of shadowing it:
Base.Broadcast.BroadcastStyle(W::Type{<:ShiftedArray}) = Base.Broadcast.BroadcastStyle(Adapt.unwrap_type(W))
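
Assuming those definitions are in place (a hypothetical check, not tested here), the wrapper should pick up CUDA's broadcast style without the extension depending on CUDA at all:

using CUDA
using ShiftedArrays: ShiftedArray

x = ShiftedArray(CUDA.rand(16), (1,))
Base.Broadcast.BroadcastStyle(typeof(x))  # should now report CUDA's array style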

@vchuravy
Member

I tried out my own suggestions over here: JuliaArrays/OffsetArrays.jl#378

@RainerHeintzmann
Contributor Author

Thanks for the good suggestion. I did not know about the unwrap_type definition; it should simplify the code, and I will give it a try. However, the main problem remains: where to host these modifications, in particular when a well-tested package looks like it is no longer actively maintained.

@JuliaGPU JuliaGPU locked and limited conversation to collaborators May 20, 2025
@maleadt maleadt converted this issue into discussion #2781 May 20, 2025

This issue was moved to discussion #2781. You can continue the conversation there.
