
Commit 38f4302

vchuravy and leios committed
Transition GPUArrays to KernelAbstractions
Co-authored-by: James Schloss <[email protected]>
1 parent 8c5d550 commit 38f4302

27 files changed: +383 -699 lines

.buildkite/pipeline.yml (+7 -4)

```diff
@@ -10,7 +10,7 @@ steps:
     println("--- :julia: Instantiating project")
     Pkg.develop(; path=pwd())
-    Pkg.develop(; name="CUDA")
+    Pkg.add(; url="https://github.com/leios/CUDA.jl/", rev="GtK_trans")

     println("+++ :julia: Running tests")
     Pkg.test("CUDA"; coverage=true)'
@@ -31,10 +31,13 @@ steps:
     println("--- :julia: Instantiating project")
     Pkg.develop(; path=pwd())
-    Pkg.develop(; name="oneAPI")
+    Pkg.add(; url="https://github.com/leios/oneAPI.jl/", rev="GtK_transition")

     println("+++ :julia: Building support library")
-    include(joinpath(Pkg.devdir(), "oneAPI", "deps", "build_ci.jl"))
+    filename = Base.find_package("oneAPI")
+    filename = filename[1:findfirst("oneAPI.jl", filename)[1]-1]
+    filename *= "../deps/build_ci.jl"
+    include(filename)
     Pkg.activate()

     println("+++ :julia: Running tests")
@@ -56,7 +59,7 @@ steps:
     println("--- :julia: Instantiating project")
     Pkg.develop(; path=pwd())
-    Pkg.develop(; name="Metal")
+    Pkg.add(; url="https://github.com/leios/Metal.jl/", rev="GtK_transition")

     println("+++ :julia: Running tests")
     Pkg.test("Metal"; coverage=true)'
```
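The CI changes above replace registered releases with branches from a fork. As a hedged sketch (not part of this commit), this is the general Pkg pattern the pipeline relies on: `url` plus `rev` pins an unregistered branch, and `Pkg.free` would later return to the registered release.

```julia
using Pkg

# Install a package directly from a fork branch instead of the registry.
# URL and branch name here mirror the pipeline above.
Pkg.add(url="https://github.com/leios/CUDA.jl/", rev="GtK_trans")

# To later go back to the registered release:
# Pkg.free("CUDA")
```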

Project.toml (+1)

```diff
@@ -5,6 +5,7 @@ version = "10.2.0"
 [deps]
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
+KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
 LLVM = "929cbde3-209d-540e-8aea-75f648917ca0"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
```

docs/src/index.md (+2 -3)

```diff
@@ -9,10 +9,9 @@ will get a lot of functionality for free. This will allow to have multiple GPUArray
 implementation for different purposes, while maximizing the ability to share code.

 **This package is not intended for end users!** Instead, you should use one of the packages
-that builds on GPUArrays.jl. There is currently only a single package that actively builds
-on these interfaces, namely [CuArrays.jl](https://github.com/JuliaGPU/CuArrays.jl).
+that builds on GPUArrays.jl such as [CUDA](https://github.com/JuliaGPU/CUDA.jl), [AMDGPU](https://github.com/JuliaGPU/AMDGPU.jl), [oneAPI](https://github.com/JuliaGPU/oneAPI.jl), or [Metal](https://github.com/JuliaGPU/Metal.jl).

-In this documentation, you will find more information on the interface that you are expected
+This documentation is meant for users who might wish to implement a version of GPUArrays for another GPU backend and will cover the features you will need
 to implement, the functionality you gain by doing so, and the test suite that is available
 to verify your implementation. GPUArrays.jl also provides a reference implementation of
 these interfaces on the CPU: The `JLArray` array type uses Julia's parallel programming
```

docs/src/interface.md (+18 -39)

````diff
@@ -1,53 +1,32 @@
 # Interface

 To extend the above functionality to a new array type, you should use the types and
-implement the interfaces listed on this page. GPUArrays is design around having two
-different array types to represent a GPU array: one that only ever lives on the host, and
+implement the interfaces listed on this page. GPUArrays is designed around having two
+different array types to represent a GPU array: one that exists only on the host, and
 one that actually can be instantiated on the device (i.e. in kernels).
+Device functionality is then handled by [KernelAbstractions.jl](https://github.com/JuliaGPU/KernelAbstractions.jl).

+## Host abstractions

-## Device functionality
-
-Several types and interfaces are related to the device and execution of code on it. First of
-all, you need to provide a type that represents your execution back-end and a way to call
-kernels:
+You should provide an array type that builds on the `AbstractGPUArray` supertype, such as:

-```@docs
-GPUArrays.AbstractGPUBackend
-GPUArrays.AbstractKernelContext
-GPUArrays.gpu_call
-GPUArrays.thread_block_heuristic
 ```
+mutable struct CustomArray{T, N} <: AbstractGPUArray{T, N}
+    data::DataRef{Vector{UInt8}}
+    offset::Int
+    dims::Dims{N}
+    ...
+end

-You then need to provide implementations of certain methods that will be executed on the
-device itself:
-
-```@docs
-GPUArrays.AbstractDeviceArray
-GPUArrays.LocalMemory
-GPUArrays.synchronize_threads
-GPUArrays.blockidx
-GPUArrays.blockdim
-GPUArrays.threadidx
-GPUArrays.griddim
 ```

+This will allow your defined type (in this case `CustomArray`) to use the GPUArrays interface where available.
+To be able to actually use the functionality that is defined for `AbstractGPUArray`s, you need to define the backend, like so:

-## Host abstractions
-
-You should provide an array type that builds on the `AbstractGPUArray` supertype:
-
-```@docs
-AbstractGPUArray
 ```
-
-First of all, you should implement operations that are expected to be defined for any
-`AbstractArray` type. Refer to the Julia manual for more details, or look at the `JLArray`
-reference implementation.
-
-To be able to actually use the functionality that is defined for `AbstractGPUArray`s, you
-should provide implementations of the following interfaces:
-
-```@docs
-GPUArrays.backend
+import KernelAbstractions: Backend
+struct CustomBackend <: KernelAbstractions.GPU end
+KernelAbstractions.get_backend(a::CA) where CA <: CustomArray = CustomBackend()
 ```
+
+There are numerous examples of potential interfaces for GPUArrays, such as with [JLArrays](https://github.com/JuliaGPU/GPUArrays.jl/blob/master/lib/JLArrays/src/JLArrays.jl), [CuArrays](https://github.com/JuliaGPU/CUDA.jl/blob/master/src/gpuarrays.jl), and [ROCArrays](https://github.com/JuliaGPU/AMDGPU.jl/blob/master/src/gpuarrays.jl).
````

lib/GPUArraysCore/src/GPUArraysCore.jl (+3 -3)

```diff
@@ -222,10 +222,10 @@ end

 Gets the GPUArrays back-end responsible for managing arrays of type `T`.
 """
-backend(::Type) = error("This object is not a GPU array") # COV_EXCL_LINE
-backend(x) = backend(typeof(x))
+get_backend(::Type) = error("This object is not a GPU array") # COV_EXCL_LINE
+get_backend(x) = get_backend(typeof(x))

 # WrappedArray from Adapt for Base wrappers.
-backend(::Type{WA}) where WA<:WrappedArray = backend(unwrap_type(WA))
+get_backend(::Type{WA}) where WA<:WrappedArray = get_backend(unwrap_type(WA))

 end # module GPUArraysCore
```
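The rename above keeps the same three-method dispatch chain: values forward to their type, wrapped types unwrap, and anything else errors. A self-contained sketch of that pattern (hypothetical stand-in types; the real definitions live in GPUArraysCore.jl and KernelAbstractions.jl):

```julia
# Stand-ins for the real GPUArrays/KernelAbstractions types.
abstract type AbstractGPUArray{T, N} <: AbstractArray{T, N} end
struct CustomBackend end                               # placeholder backend
struct CustomArray{T, N} <: AbstractGPUArray{T, N} end # placeholder array

# Fallback for non-GPU types, value-to-type forwarding, and the
# per-array-type hook -- mirroring the methods in the diff above.
get_backend(::Type) = error("This object is not a GPU array")
get_backend(x) = get_backend(typeof(x))
get_backend(::Type{<:CustomArray}) = CustomBackend()

get_backend(CustomArray{Float32, 2}())  # dispatches to CustomBackend()
```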

lib/JLArrays/Project.toml (+2 -1)

```diff
@@ -6,10 +6,11 @@ version = "0.1.4"
 [deps]
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 GPUArrays = "0c68f7d7-f131-5f86-a1c3-88cf8149b2d7"
+KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"

 [compat]
 Adapt = "2.0, 3.0, 4.0"
 GPUArrays = "10"
-julia = "1.8"
 Random = "1"
+julia = "1.8"
```

0 commit comments