
Commit 23b74c5

Merge pull request #1365 from NiklasGustafsson/main
Moving to libtorch 2.4.0
2 parents a926e17 + 4b81386 commit 23b74c5

26 files changed (+207, -198)

DEVGUIDE.md (+15, -15)

@@ -52,7 +52,7 @@ Build with

 ## Packages

-An ephemeral feed of packages from Azure DevOps CI is available for those
+An ephemeral feed of packages from Azure DevOps CI is available for those

 * View link: https://dotnet.visualstudio.com/TorchSharp/_packaging?_a=feed&feed=SignedPackages
 * Nuget feed: https://dotnet.pkgs.visualstudio.com/TorchSharp/_packaging/SignedPackages/nuget/v3/index.json
@@ -77,7 +77,7 @@ To change the TorchSharp package version update this [file](https://github.com/d
 The TorchSharp package is pushed to nuget.org via Azure DevOps CI release pipeline. Assuming you're not building or updating the LibTorch packages
 (`BuildLibTorchPackages` is `false` in [azure-pipelines.yml](azure-pipelines.yml)) this is pretty simple once you have the permissions:

-1. Update the version number in [./build/BranchInfo.props](./build/BranchInfo.props) and in the [Release Notes](./RELEASENOTES.md) file and then submit a PR.
+1. Update the version number in [./build/BranchInfo.props](./build/BranchInfo.props) and in the [Release Notes](./RELEASENOTES.md) file and then submit a PR.

 Updating the major or minor version number should only be done after a discussion with repo admins. The patch number should be incremented by one each release and set to zero after a change to the major or minor version.
 2. Integrate code to main and wait for CI to process
@@ -149,7 +149,7 @@ For this reason, we do the following
 This project grabs LibTorch and makes a C API wrapper for it, then calls these from C#. When updating to a newer
 version of PyTorch then quite a lot of careful work needs to be done.

-0. Make sure you have plenty of disk space, e.g. 15GB.
+0. Make sure you have plenty of disk space, e.g. 15GB.

 1. Clean and reset to main

@@ -163,7 +163,7 @@ version of PyTorch then quite a lot of careful work needs to be done.
 https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.2.0%2Bcpu.zip

 Don't download anything yet, or manually. The downloads are acquired automatically in step 2.
-
+
 To update the version, update this in [Dependencies.props](build/Dependencies.props):

 <LibTorchVersion>2.2.0</LibTorchVersion>
@@ -179,8 +179,8 @@ version of PyTorch then quite a lot of careful work needs to be done.
 On Windows:

 dotnet build src\Redist\libtorch-cpu\libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=linux /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
-dotnet build src\Redist\libtorch-cpu\libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=mac /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
-dotnet build src\Redist\libtorch-cpu\libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=windows /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
+dotnet build src\Redist\libtorch-cpu\libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=mac /p:TargetArchitecture=arm64 /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
+dotnet build src\Redist\libtorch-cpu\libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=windows /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
 dotnet build src\Redist\libtorch-cpu\libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=windows /p:Configuration=Debug /t:Build /p:IncludeLibTorchCpuPackages=true

 dotnet build src\Redist\libtorch-cuda-12.1\libtorch-cuda-12.1.proj /p:UpdateSHA=true /p:TargetOS=linux /p:Configuration=Release /t:Build /p:IncludeLibTorchCudaPackages=true
@@ -190,8 +190,8 @@ On Windows:
 On Linux / Mac:

 dotnet build src/Redist/libtorch-cpu/libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=linux /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
-dotnet build src/Redist/libtorch-cpu/libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=mac /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
-dotnet build src/Redist/libtorch-cpu/libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=windows /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
+dotnet build src/Redist/libtorch-cpu/libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=mac /p:TargetArchitecture=arm64 /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
+dotnet build src/Redist/libtorch-cpu/libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=windows /p:Configuration=Release /t:Build /p:IncludeLibTorchCpuPackages=true
 dotnet build src/Redist/libtorch-cpu/libtorch-cpu.proj /p:UpdateSHA=true /p:TargetOS=windows /p:Configuration=Debug /t:Build /p:IncludeLibTorchCpuPackages=true

 dotnet build src/Redist/libtorch-cuda-12.1/libtorch-cuda-12.1.proj /p:UpdateSHA=true /p:TargetOS=linux /p:Configuration=Release /t:Build /p:IncludeLibTorchCudaPackages=true
@@ -213,7 +213,7 @@ On Linux / Mac:
 dir bin\obj\x64.Release\libtorch-cpu\libtorch-cxx11-abi-shared-with-deps-2.2.0cpu\libtorch\lib\*.so*

 You may also need to precisely refactor the binaries into multiple parts so each package ends up under ~300MB. Before release 2.2.0 of libtorch, this really only affected the CUDA packages, but it is now also affecting the CPU packages on Linux and OSX. Windows CPU is still small enough to be contained in just one package. The NuGet gallery does not allow packages larger than 250MB, so if files are 300MB, after compression, they are likely to be smaller than 250MB. However, you have to look out: if the compression is poor, then packages may end up larger. Note that it is 250 million
-bytes that is the limit, **not** 250*1024*1024. In other words, it is 250 MB, not 250 MiB. Note that Windows Explorer will show file sizes in KiB, not thousands of bytes. Use 'dir' from a CMD window to get the exact size in bytes for each file. For example -- the file `libtorch_cpu.so` shows up as 511,872 KB in Windows Explorer, but 524,156,144 bytes in CMD. The 2.4% difference can be significant. Getting the partitioning right requires precision.
+bytes that is the limit, **not** 250*1024*1024. In other words, it is 250 MB, not 250 MiB. Note that Windows Explorer will show file sizes in KiB, not thousands of bytes. Use 'dir' from a CMD window to get the exact size in bytes for each file. For example -- the file `libtorch_cpu.so` shows up as 511,872 KB in Windows Explorer, but 524,156,144 bytes in CMD. The 2.4% difference can be significant. Getting the partitioning right requires precision.

 If the combined size of the files going into a part is smaller than 250MB, then everything is fine, and there is no need to split the part. It can be singular. If that is not the case, then the part should be fragmented into two or more parts that are linked together by their names.

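The byte-exact check this hunk describes can be scripted rather than read off `dir` output. A minimal C# sketch (not code from this repository; point it at whatever directory you want to audit, e.g. the libtorch `lib` folder or `bin\packages\Release`):

```csharp
// Minimal sketch, not repo code: list exact byte sizes, since Explorer's KiB rounding
// hides whether a file clears the 250,000,000-byte (decimal) NuGet limit.
using System;
using System.IO;

const long NuGetLimit = 250_000_000;   // 250 MB in decimal bytes, *not* 250 MiB

foreach (var file in Directory.EnumerateFiles(args[0], "*", SearchOption.AllDirectories))
{
    long size = new FileInfo(file).Length;
    string flag = size >= NuGetLimit ? "  <-- at or over the 250,000,000-byte limit" : "";
    Console.WriteLine($"{size,15:N0}  {Path.GetFileName(file)}{flag}");
}
```
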
@@ -229,12 +229,12 @@ On Linux / Mac:
 They must all be called either 'primary,' which should be the first fragment, or 'fragmentN' where 'N' is the ordinal number of the fragment, starting with '1'. The current logic allows for as many as 10 non-primary fragments. If more are needed, the code in [FileRestitcher.cs](pkg/FileRestitcher/FileRestitcher/FileRestitcher.cs) and [RestitchPackage.targets](pkg/common/RestitchPackage.targets) needs to be updated. Note that the size of each fragment is expressed in bytes, and that fragment start must be
 the sum of the size of all previous fragments. A '-1' should be used for the last fragment (and only for the last fragment): it means that the fragment size will be based on how much there is still left of the file.

-Each part, whether singular or fragmented, should have its own .nupkgproj file in its own folder under pkg. The folder and file should have the same name as the part. If you need to add new fragments, it is straightforward to just copy an existing fragment folder and rename it as well as the project file to the new fragment.
-
+Each part, whether singular or fragmented, should have its own .nupkgproj file in its own folder under pkg. The folder and file should have the same name as the part. If you need to add new fragments, it is straightforward to just copy an existing fragment folder and rename it as well as the project file to the new fragment.
+
 __Important:__

-If you must fragment a previously singular part, it is best to rename the existing folder and file to '-fragment1' and then copy a '-primary' folder and rename with the right part name. This is because the primary .nupkgproj files look different from others.
-
+If you must fragment a previously singular part, it is best to rename the existing folder and file to '-fragment1' and then copy a '-primary' folder and rename with the right part name. This is because the primary .nupkgproj files look different from others.
+
 Specifically, they include different build targets:

 ```xml
@@ -295,8 +295,8 @@ On Linux / Mac:

 <LibTorchPackageVersion>2.0.1.1</LibTorchPackageVersion>

-dotnet pack -c Release -v:n /p:SkipNative=true /p:SkipTests=true /p:IncludeTorchSharpPackage=true /p:IncludeLibTorchCpuPackages=true /p:IncludeLibTorchCudaPackages=true
-dotnet pack -c Release -v:n /p:SkipNative=true /p:SkipTests=true /p:TargetOS=linux /p:IncludeTorchSharpPackage=true /p:IncludeLibTorchCpuPackages=true /p:IncludeLibTorchCudaPackages=true
+dotnet pack -c Release -v:n /p:SkipNative=true /p:SkipTests=true /p:IncludeTorchSharpPackage=true /p:IncludeLibTorchCpuPackages=true /p:IncludeLibTorchCudaPackages=true
+dotnet pack -c Release -v:n /p:SkipNative=true /p:SkipTests=true /p:TargetOS=linux /p:IncludeTorchSharpPackage=true /p:IncludeLibTorchCpuPackages=true /p:IncludeLibTorchCudaPackages=true

 Once these finish, the output can be found in `bin\packages\Release`. Look at the file sizes -- if anything is larger than 250,000,000 bytes, you need to go back to #3 above and redefine the package contents and fragmentation scheme. It may be necessary to introduce new fragments.

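The fragment bookkeeping described in the hunks above (a 'primary' piece plus 'fragmentN' pieces, each fragment starting at the sum of the previous fragment sizes, and -1 meaning "whatever is left of the file") can be pictured with a short sketch. This is not the code in [FileRestitcher.cs](pkg/FileRestitcher/FileRestitcher/FileRestitcher.cs), only an illustration of the rules; the fragment size is whatever you choose so each piece stays under the limit:

```csharp
// Hypothetical sketch of the fragmentation rules -- NOT the code in
// pkg/FileRestitcher/FileRestitcher/FileRestitcher.cs. It only illustrates that fragments
// are named 'primary', 'fragment1', ..., that each fragment's start offset is the sum of
// all previous fragment sizes, and that -1 means "whatever is left of the file".
using System.Collections.Generic;
using System.IO;

static class FragmentSketch
{
    // Plan how to split a file into pieces no larger than 'fragmentSize' bytes.
    public static List<(string Name, long Start, long Size)> Plan(string sourcePath, long fragmentSize)
    {
        long total = new FileInfo(sourcePath).Length;
        var plan = new List<(string Name, long Start, long Size)>();
        long start = 0;
        int index = 0;
        while (start < total)
        {
            bool isLast = start + fragmentSize >= total;
            string name = index == 0 ? "primary" : $"fragment{index}";
            plan.Add((name, start, isLast ? -1 : fragmentSize));   // -1: the rest of the file
            start += fragmentSize;
            index++;
        }
        return plan;
    }

    // Restitching is concatenation in fragment order: primary, fragment1, fragment2, ...
    public static void Restitch(IEnumerable<string> fragmentPathsInOrder, string destinationPath)
    {
        using var output = File.Create(destinationPath);
        foreach (var path in fragmentPathsInOrder)
        {
            using var input = File.OpenRead(path);
            input.CopyTo(output);
        }
    }
}
```
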

Directory.Build.props (+5, -7)

@@ -20,7 +20,8 @@
 <SourceDir>$(RepoRoot)src/</SourceDir>
 <PkgDir>$(RepoRoot)pkg/</PkgDir>

-<LibTorchPackageVersion>2.2.1.1</LibTorchPackageVersion>
+<LibTorchPackageVersion>2.4.0.0</LibTorchPackageVersion>
+<LibTorchPackageVersion Condition="'$(TargetOS)' == 'mac' and '$(TargetArchitecture)' == 'x64'">2.2.2.0</LibTorchPackageVersion>

 <!-- when building on local machines the massive downloads get placed up one directory -->
 <!-- so we can clean without triggering a re-download-->
@@ -55,7 +56,6 @@

 <TargetRuntimeID Condition="'$(TargetOS)' == 'windows'">win-x64</TargetRuntimeID>
 <TargetRuntimeID Condition="'$(TargetOS)' == 'linux'">linux-x64</TargetRuntimeID>
-<TargetRuntimeID Condition="'$(TargetOS)' == 'mac'">osx-x64</TargetRuntimeID>
 <TargetRuntimeID Condition="'$(TargetPlatform)' == 'mac-arm64'">osx-arm64</TargetRuntimeID>

 <TargetRuntimeID Condition="'$(TargetOS)' == 'windows'">win-x64</TargetRuntimeID>
@@ -86,7 +86,8 @@
 <!-- use stable versions for libtorch packages based on LibTorch version number scheme-->
 <!-- we manually update these -->
 <PropertyGroup Condition="'$(MSBuildProjectName.IndexOf(`libtorch-`))' != '-1'">
-<LibTorchPackageVersion>2.2.1.1</LibTorchPackageVersion>
+<LibTorchPackageVersion>2.4.0.0</LibTorchPackageVersion>
+<LibTorchPackageVersion Condition="'$(TargetOS)' == 'mac' and '$(TargetArchitecture)' == 'x64'">2.2.2.0</LibTorchPackageVersion>
 <EnablePackageValidation>false</EnablePackageValidation>
 <VersionPrefix>$(LibTorchPackageVersion)</VersionPrefix>
 <VersionSuffix></VersionSuffix>
@@ -135,17 +136,14 @@

 <PropertyGroup>
 <LibTorchArchiveSource>pytorch</LibTorchArchiveSource>
-<LibTorchArchiveSource Condition="'$(TargetPlatform)' == 'mac-arm64'">conda</LibTorchArchiveSource>
-<CondaArchivePlatformName Condition="'$(TargetPlatform)' == 'mac-arm64'">osx-arm64</CondaArchivePlatformName>
 <LibTorchCpuArchiveNameSuffix Condition="'$(TargetOS)' != 'mac'">%252Bcpu</LibTorchCpuArchiveNameSuffix>
-<LibTorchCpuArchiveNameSuffix Condition="'$(LibTorchArchiveSource)' == 'conda'">-py3.10_0</LibTorchCpuArchiveNameSuffix>
 <LibTorchCudaArchiveNameSuffix>%252Bcu$(CudaVersionNoDot)</LibTorchCudaArchiveNameSuffix>
 <LibTorchCpuLocalNameSuffix>cpu</LibTorchCpuLocalNameSuffix>
 <LibTorchCudaLocalNameSuffix>cu$(CudaVersionNoDot)</LibTorchCudaLocalNameSuffix>
 <LibTorchArchiveCoreName Condition="'$(TargetOS)' == 'windows'">libtorch-win-shared-with-deps$(LibTorchDebug)</LibTorchArchiveCoreName>
 <LibTorchArchiveCoreName Condition="'$(TargetOS)' == 'linux'">libtorch-cxx11-abi-shared-with-deps</LibTorchArchiveCoreName>
 <LibTorchArchiveCoreName Condition="'$(TargetOS)' == 'mac'">libtorch-macos-x86_64</LibTorchArchiveCoreName>
-<LibTorchArchiveCoreName Condition="'$(LibTorchArchiveSource)' == 'conda'">pytorch</LibTorchArchiveCoreName>
+<LibTorchArchiveCoreName Condition="'$(TargetPlatform)' == 'mac-arm64'">libtorch-macos-arm64</LibTorchArchiveCoreName>
 <LibTorchCpuArchiveBase>$(LibTorchArchiveCoreName)-$(LibTorchVersion)$(LibTorchCpuArchiveNameSuffix)</LibTorchCpuArchiveBase>
 <LibTorchCudaArchiveBase>$(LibTorchArchiveCoreName)-$(LibTorchVersion)$(LibTorchCudaArchiveNameSuffix)</LibTorchCudaArchiveBase>
 <LibTorchCpuLocalBase>$(LibTorchArchiveCoreName)-$(LibTorchVersion)$(LibTorchCpuLocalNameSuffix)</LibTorchCpuLocalBase>
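To see what this last hunk changes in practice, here is an illustrative expansion of `LibTorchCpuArchiveBase` for the new mac-arm64 case. It assumes `LibTorchVersion` is 2.4.0, the version this PR moves to; that property lives in build/Dependencies.props, which is not among the files shown here:

```csharp
// Illustrative only: how LibTorchCpuArchiveBase expands for TargetPlatform == mac-arm64.
// Assumption: LibTorchVersion is 2.4.0 (set in build/Dependencies.props, not shown here).
using System;

var coreName  = "libtorch-macos-arm64"; // LibTorchArchiveCoreName for mac-arm64 (added above)
var version   = "2.4.0";                // LibTorchVersion
var cpuSuffix = "";                     // LibTorchCpuArchiveNameSuffix is only set when TargetOS != 'mac'

Console.WriteLine($"{coreName}-{version}{cpuSuffix}");  // libtorch-macos-arm64-2.4.0
```
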

Directory.Build.targets (+17, -12)

@@ -24,35 +24,36 @@

 <!-- Windows CUDA 11.3 libtorch binary list used for examples and testing -->
 <ItemGroup Condition="'$(NativeTargetArchitecture)' == 'x64' and '$(OS)' == 'Windows_NT' and '$(TestUsesLibTorch)' == 'true' and ('$(TestCuda)' == 'true' and '$(SkipCuda)' != 'true') and '$(SkipNative)' != 'true' ">
-<NativeAssemblyReference Include="c10" Variant="cuda\" />
 <NativeAssemblyReference Include="asmjit" Variant="cuda\" />
+<NativeAssemblyReference Include="c10" Variant="cuda\" />
 <NativeAssemblyReference Include="c10_cuda" Variant="cuda\" />
 <NativeAssemblyReference Include="caffe2_nvrtc" Variant="cuda\" />
 <NativeAssemblyReference Include="cublas64_12" Variant="cuda\" />
 <NativeAssemblyReference Include="cublasLt64_12" Variant="cuda\" />
 <NativeAssemblyReference Include="cudart64_12" Variant="cuda\" />
-<NativeAssemblyReference Include="cudnn_adv_infer64_8" Variant="cuda\" />
-<NativeAssemblyReference Include="cudnn_adv_train64_8" Variant="cuda\" />
-<NativeAssemblyReference Include="cudnn_cnn_infer64_8" Variant="cuda\" />
-<NativeAssemblyReference Include="cudnn_cnn_train64_8" Variant="cuda\" />
-<NativeAssemblyReference Include="cudnn_ops_infer64_8" Variant="cuda\" />
-<NativeAssemblyReference Include="cudnn_ops_train64_8" Variant="cuda\" />
-<NativeAssemblyReference Include="cudnn64_8" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn64_9" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn_adv64_9" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn_cnn64_9" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn_engines_precompiled64_9" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn_engines_runtime_compiled64_9" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn_graph64_9" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn_heuristic64_9" Variant="cuda\" />
+<NativeAssemblyReference Include="cudnn_ops64_9" Variant="cuda\" />
 <NativeAssemblyReference Include="cufft64_11" Variant="cuda\" />
 <NativeAssemblyReference Include="cufftw64_11" Variant="cuda\" />
-<NativeAssemblyReference Include="curand64_10" Variant="cuda\" />
 <NativeAssemblyReference Include="cupti64_2023.1.1" Variant="cuda\" />
+<NativeAssemblyReference Include="curand64_10" Variant="cuda\" />
 <NativeAssemblyReference Include="cusolver64_11" Variant="cuda\" />
 <NativeAssemblyReference Include="cusolverMg64_11" Variant="cuda\" />
 <NativeAssemblyReference Include="cusparse64_12" Variant="cuda\" />
 <NativeAssemblyReference Include="fbgemm" Variant="cuda\" />
 <NativeAssemblyReference Include="fbjni" Variant="cuda\" />
 <NativeAssemblyReference Include="libiomp5md" Variant="cuda\" />
 <NativeAssemblyReference Include="libiompstubs5md" Variant="cuda\" />
-<NativeAssemblyReference Include="nvrtc64_120_0" Variant="cuda\" />
-<NativeAssemblyReference Include="nvrtc-builtins64_121" Variant="cuda\" />
-<NativeAssemblyReference Include="nvToolsExt64_1" Variant="cuda\" />
 <NativeAssemblyReference Include="nvJitLink_120_0" Variant="cuda\" />
+<NativeAssemblyReference Include="nvToolsExt64_1" Variant="cuda\" />
+<NativeAssemblyReference Include="nvrtc-builtins64_121" Variant="cuda\" />
+<NativeAssemblyReference Include="nvrtc64_120_0" Variant="cuda\" />
 <NativeAssemblyReference Include="pytorch_jni" Variant="cuda\" />
 <NativeAssemblyReference Include="torch" Variant="cuda\" />
 <NativeAssemblyReference Include="torch_cpu" Variant="cuda\" />
@@ -81,10 +82,14 @@
 <!-- Mac arm64 libtorch binary list used for examples and testing -->
 <ItemGroup Condition="'$(NativeTargetArchitecture)' == 'arm64'and $([MSBuild]::IsOSPlatform('osx')) and '$(TestUsesLibTorch)' == 'true' and '$(SkipNative)' != 'true' ">
 <NativeAssemblyReference Include="c10" />
+<NativeAssemblyReference Include="fbjni" />
+<NativeAssemblyReference Include="omp" />
+<NativeAssemblyReference Include="pytorch_jni" />
 <NativeAssemblyReference Include="shm" />
 <NativeAssemblyReference Include="torch" />
 <NativeAssemblyReference Include="torch_cpu" />
 <NativeAssemblyReference Include="torch_global_deps" />
+<NativeAssemblyReference Include="torch_python" />
 </ItemGroup>

 <!-- Linux CPU libtorch binary list used for examples and testing -->

README.md (+4, -3)

@@ -11,9 +11,10 @@
 <br/>
 Please check the [Release Notes](RELEASENOTES.md) file for news on what's been updated in each new release.

-__TorchSharp is now in the .NET Foundation!__

-If you are using TorchSharp from NuGet, you should be using a version >= 0.98.3 of TorchSharp, and >= 1.12.0 of the libtorch-xxx redistributable packages. We recommend using one of the 'bundled' packages: TorchSharp-cpu, TorchSharp-cuda-windows, or TorchSharp-cuda-linux. They will pull in the right LibTorch backends.
+__TorchSharp no longer supports MacOS on Intel hardware.__
+
+With libtorch release 2.4.0, Intel HW support was deprecated for libtorch. This means that the last version of TorchSharp to work on Intel Macintosh hardware is 0.102.8. Starting with 0.103.0, only Macs based on Apple Silicon are supported.

 __TorchSharp examples have their own home!__

@@ -105,7 +106,7 @@ Otherwise, you also need one of the LibTorch backend packages: https://www.nuget

 * `libtorch-cpu-win-x64` (CPU, Windows)

-* `libtorch-cpu-osx-x64` (CPU, OSX)
+* `libtorch-cpu-osx-arm64` (CPU, OSX)

 * `libtorch-cpu` (CPU, references all three, larger download but simpler)
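After switching from `libtorch-cpu-osx-x64` to `libtorch-cpu-osx-arm64` (or using the bundled `TorchSharp-cpu` package), any small tensor operation serves as a smoke test that the native backend loads. A minimal, illustrative sketch:

```csharp
// Minimal smoke test: if TorchSharp plus a matching libtorch backend package is restored,
// creating a tensor should succeed without native-load errors.
using System;
using TorchSharp;

var t = torch.rand(2, 3);          // random 2x3 tensor on the CPU backend
Console.WriteLine(t.ToString());   // prints tensor metadata (type and shape)
```
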

RELEASENOTES.md (+4)

@@ -2,6 +2,10 @@

 Releases, starting with 9/2/2021, are listed with the most recent release at the top.

+# NuGet Version 0.103.0
+
+Move to libtorch 2.4.0.
+
 # NuGet Version 0.102.8

 __Bug Fixes__:
