
Commit 7398319

Author: Niklas Gustafsson

Removing the '{Float|Int|Complex|Bool}Tensor' static tensor factory classes.
All factory methods are in 'torch' now.

1 parent: 8f8c52f

23 files changed: 1059 additions and 9234 deletions

CONTRIBUTING.md

Lines changed: 1 addition & 0 deletions
@@ -25,6 +25,7 @@ If you send us a PR, whether for documentation, examples, or library code, we re
 * **DO** refer to any relevant issues, and include [keywords](https://help.github.com/articles/closing-issues-via-commit-messages/) that automatically close issues when the PR is merged.
 * **DO** tag any users that should know about and/or review the change.
 * **DO** ensure each commit successfully builds. The entire PR must pass all tests in the Continuous Integration (CI) system before it'll be merged.
+* **DO** add a brief description to the RELEASENOTES.md file at the top under the heading of the upcoming release.
 * **DO** address PR feedback in an additional commit(s) rather than amending the existing commits, and only rebase/squash them when necessary. This makes it easier for reviewers to track changes.
 * **DO** assume that ["Squash and Merge"](https://github.com/blog/2141-squash-your-commits) will be used to merge your commit unless you request otherwise in the PR.
 * **DO NOT** fix merge conflicts using a merge commit. Prefer `git rebase`.

DEVGUIDE.md

Lines changed: 10 additions & 7 deletions
@@ -70,23 +70,26 @@ To change the TorchSharp package version update this [file](https://github.com/d
 The TorchSharp package is pushed to nuget.org via Azure DevOps CI release pipeline. Assuming you're not building or updating the LibTorch packages
 (`BuildLibTorchPackages` is `false` in [azure-pipelines.yml](azure-pipelines.yml)) this is pretty simple once you have the permissions:
 
-1. Integrate code to main and wait for CI to process
-2. Go to [releases](https://donsyme.visualstudio.com/TorchSharp/_release) and choose "Create Release" (top right)
-3. Under "Artifacts-->Version" choose the pipeline build corresponding to the thing you want to release. It should be a successful build on main
-4. Press "Create"
+1. Update the version number in [./build/BranchInfo.props](./build/BranchInfo.props) and in the [Release Notes](./RELEASENOTES.md) file and then submit a PR.
 
-The package version is currently taken from the CI build version.
+   Updating the major or minor version number should only be done after a discussion with repo admins. The patch number should be incremented by one each release and set to zero after a change to the major or minor version.
+
+2. Integrate code to main and wait for CI to process
+3. Go to [releases](https://donsyme.visualstudio.com/TorchSharp/_release) and choose "Create Release" (top right)
+4. Under "Artifacts-->Version" choose the pipeline build corresponding to the thing you want to release. It should be a successful build on main
+5. Press "Create"
+6. Once the package has been successfully pushed and is available in the NuGet gallery, create a GitHub tag in the 'main' branch with the version as the name of the tag.
 
 # The libtorch packages
 
 The libtorch packages are huge (~3GB compressed combined for CUDA Windows) and cause a
-lot of problems to make and deliver due to nuget package size restrictions.
+lot of problems to make and deliver due to NuGet package size restrictions.
 
 These problems include:
 
 1. A massive 2GB binary in the linux CUDA package and multiple 1.0GB binaries in Windows CUDA package
 
-2. Size limitations of about ~500MB on nuget packages on the Azure DevOps CI system and about ~250MB on `nuget.org`
+2. Size limitations of about ~500MB on NuGet packages on the Azure DevOps CI system and about ~250MB on `nuget.org`
 
 4. Regular download/upload failures on these systems due to network interruptions for packages of this size

RELEASENOTES.md

Lines changed: 4 additions & 0 deletions

@@ -11,6 +11,10 @@ __Fixed Bugs:__
 Fixed incorrectly implemented Module APIs related to parameter / module registration.
 #353 Missing torch.minimum (with an alternative raising exception)
 
+__API Changes:__
+
+Removed the type-named tensor factories, such as 'Int32Tensor.rand(),' etc.
+
 ### NuGet Version 0.92.52220
 
 This was the first release since moving TorchSharp to the .NET Foundation organization. Most of the new functionality is related to continuing the API changes that were started in the previous release, and fixing some bugs.
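The API change in the release notes above can be illustrated with a before/after sketch. This is hedged: it assumes the post-commit `torch.*` factories with a nullable `dtype` parameter, as in the `arange` signature elsewhere in this commit; exact overload shapes may differ.

```csharp
// Before this commit (type-named factories, now removed):
//     var a = Int32Tensor.zeros(new long[] { 2, 3 });
//     var s = Float32Tensor.from(3.14f);
//
// After: all tensor creation goes through the 'torch' static class,
// with the element type selected via the 'dtype' argument.
var a = torch.zeros(new long[] { 2, 3 }, dtype: torch.ScalarType.Int32);
var s = torch.tensor(3.14f);
```

The dtype moves from the class name into an argument, so the factory surface is a single namespace, mirroring Python's `torch.*` API.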

src/Examples/Program.cs

Lines changed: 4 additions & 4 deletions
@@ -11,11 +11,11 @@ public static class Program
 {
     public static void Main(string[] args)
     {
-        //MNIST.Main(args);
-        //AdversarialExampleGeneration.Main(args);
+        MNIST.Main(args);
+        AdversarialExampleGeneration.Main(args);
         CIFAR10.Main(args);
-        //SequenceToSequence.Main(args);
-        //TextClassification.Main(args);
+        SequenceToSequence.Main(args);
+        TextClassification.Main(args);
         //ImageTransforms.Main(args);
     }
 }

src/FSharp.Examples/SequenceToSequence.fs

Lines changed: 7 additions & 7 deletions
@@ -52,11 +52,11 @@ type PositionalEncoding(dmodel, maxLen) as this =
     inherit Module("PositionalEncoding")
 
     let dropout = Dropout(dropout)
-    let mutable pe = torch.Float32Tensor.zeros([| maxLen; dmodel|])
+    let mutable pe = torch.zeros([| maxLen; dmodel|])
 
     do
-        let position = torch.Float32Tensor.arange(0L.ToScalar(), maxLen.ToScalar(), 1L.ToScalar()).unsqueeze(1L)
-        let divTerm = (torch.Float32Tensor.arange(0L.ToScalar(), dmodel.ToScalar(), 2L.ToScalar()) * (-Math.Log(10000.0) / (float dmodel)).ToScalar()).exp()
+        let position = torch.arange(0L.ToScalar(), maxLen.ToScalar(), 1L.ToScalar()).unsqueeze(1L)
+        let divTerm = (torch.arange(0L.ToScalar(), dmodel.ToScalar(), 2L.ToScalar()) * (-Math.Log(10000.0) / (float dmodel)).ToScalar()).exp()
 
         let NULL = System.Nullable<int64>()

@@ -105,9 +105,9 @@ type TransformerModel(ntokens, device:torch.Device) as this =
         decoder.forward(enc)
 
     member _.GenerateSquareSubsequentMask(size:int64) =
-        use mask = torch.Float32Tensor.ones([|size;size|]).eq(torch.Float32Tensor.from(1.0f)).triu().transpose(0L,1L)
-        use maskIsZero = mask.eq(torch.Float32Tensor.from(0.0f))
-        use maskIsOne = mask.eq(torch.Float32Tensor.from(1.0f))
+        use mask = torch.ones([|size;size|]).eq(torch.tensor(1.0f)).triu().transpose(0L,1L)
+        use maskIsZero = mask.eq(torch.tensor(0.0f))
+        use maskIsOne = mask.eq(torch.tensor(1.0f))
         mask.to_type(torch.float32)
             .masked_fill(maskIsZero, Single.NegativeInfinity.ToScalar())
             .masked_fill(maskIsOne, 0.0f.ToScalar()).``to``(device)

@@ -117,7 +117,7 @@ let process_input (iter:string seq) (tokenizer:string->string seq) (vocab:TorchT
     [|
         for item in iter do
             let itemData = [| for token in tokenizer(item) do (int64 vocab.[token]) |]
-            let t = torch.Int64Tensor.from(itemData)
+            let t = torch.tensor(itemData)
             if t.NumberOfElements > 0L then
                 t
     |], 0L)

src/TorchSharp/NN/Losses.cs

Lines changed: 1 addition & 1 deletion
@@ -421,7 +421,7 @@ public static GaussianNLLLoss gaussian_nll_loss(bool full = false, float eps = 1
 
     if ((variance < 0).any().DataItem<bool>()) throw new ArgumentException("variance has negative entry/entries");
 
-    variance = variance.clone().maximum(Float32Tensor.from(eps));
+    variance = variance.clone().maximum(torch.tensor(eps));
 
     var loss = 0.5 * (variance.log() + (input - target).square() / variance).view(input.shape[0], -1).sum(dimensions: new long[] { 1 });
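For context, the loss expression in the hunk above is the standard per-sample Gaussian negative log-likelihood with constants dropped, where `input` is the predicted mean, `target` the observation, and `variance` clamped from below by `eps`:

\mathcal{L}(\mu, \sigma^2; x) = \frac{1}{2}\left(\log \sigma^2 + \frac{(x - \mu)^2}{\sigma^2}\right)

The `maximum(torch.tensor(eps))` clamp keeps the `log` and the division numerically stable when the predicted variance approaches zero.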

src/TorchSharp/Tensor/Tensor.Factories.cs

Lines changed: 82 additions & 19 deletions
@@ -35,8 +35,18 @@ public static partial class torch
 
     /// <summary>
     /// Creates 1-D tensor of size [(stop - start) / step] with values from interval [start, stop) and
-    /// common difference step, starting from start
-    /// </summary>
+    /// common difference step, starting from start
+    /// </summary>
+    /// <param name="start">The starting value for the set of points.</param>
+    /// <param name="stop">The ending value for the set of points.</param>
+    /// <param name="step">The gap between each pair of adjacent points.</param>
+    /// <param name="dtype">The desired data type of the returned tensor.
+    /// Default: if null, uses a global default (see torch.set_default_tensor_type()).
+    /// If dtype is not given, infer the data type from the other input arguments.
+    /// If any of start, stop, or step are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype().
+    /// Otherwise, the dtype is inferred to be torch.int64.</param>
+    /// <param name="device"></param>
+    /// <param name="requiresGrad">If autograd should record operations on the returned tensor. Default: false.</param>
     static public Tensor arange(Scalar start, Scalar stop, Scalar step, torch.ScalarType? dtype = null, torch.Device device = null, bool requiresGrad = false)
     {
         device = torch.InitializeDevice(device);

@@ -51,6 +61,13 @@ static public Tensor arange(Scalar start, Scalar stop, Scalar step, torch.Scalar
             }
         }
 
+        if (dtype == ScalarType.ComplexFloat32) {
+            return ComplexFloat32Tensor.arange(start, stop, step, device, requiresGrad);
+        }
+        else if (dtype == ScalarType.ComplexFloat64) {
+            return ComplexFloat64Tensor.arange(start, stop, step, device, requiresGrad);
+        }
+
         var handle = THSTensor_arange(start.Handle, stop.Handle, step.Handle, (sbyte)dtype, (int)device.type, device.index, requiresGrad);
         if (handle == IntPtr.Zero) {
             GC.Collect();

@@ -512,7 +529,6 @@ static public Tensor randint(long max, long[] size, torch.ScalarType? dtype = nu
        }
 
        if (dtype == ScalarType.ComplexFloat32) {
-
            return randint_c32(max, size, device, requiresGrad);
        }
        else if (dtype == ScalarType.ComplexFloat64) {

@@ -848,6 +864,23 @@ public static Tensor tensor(double scalar, torch.ScalarType? dtype = null, torch
        return tensor;
    }
 
+   /// <summary>
+   /// Create a scalar tensor from a single value
+   /// </summary>
+   public static Tensor tensor(float real, float imaginary, torch.ScalarType? dtype = null, torch.Device device = null, bool requiresGrad = false)
+   {
+       device = torch.InitializeDevice(device);
+       var handle = THSTensor_newComplexFloat32Scalar(real, imaginary, (int)device.type, device.index, requiresGrad);
+       if (handle == IntPtr.Zero) { torch.CheckForErrors(); }
+       var tensor = new Tensor(handle);
+       if (device is not null) {
+           tensor = dtype.HasValue ? tensor.to(dtype.Value, device) : tensor.to(device);
+       } else if (dtype.HasValue) {
+           tensor = tensor.to_type(dtype.Value);
+       }
+       return tensor;
+   }
+
    /// <summary>
    /// Create a scalar tensor from a single value
    /// </summary>

@@ -865,6 +898,40 @@ public static Tensor tensor((float Real, float Imaginary) scalar, torch.ScalarTy
        return tensor;
    }
 
+   /// <summary>
+   /// Create a scalar tensor from a single value
+   /// </summary>
+   public static Tensor tensor((double Real, double Imaginary) scalar, torch.ScalarType? dtype = null, torch.Device device = null, bool requiresGrad = false)
+   {
+       device = torch.InitializeDevice(device);
+       var handle = THSTensor_newComplexFloat64Scalar(scalar.Real, scalar.Imaginary, (int)device.type, device.index, requiresGrad);
+       if (handle == IntPtr.Zero) { torch.CheckForErrors(); }
+       var tensor = new Tensor(handle);
+       if (device is not null) {
+           tensor = dtype.HasValue ? tensor.to(dtype.Value, device) : tensor.to(device);
+       } else if (dtype.HasValue) {
+           tensor = tensor.to_type(dtype.Value);
+       }
+       return tensor;
+   }
+
+   /// <summary>
+   /// Create a scalar tensor from a single value
+   /// </summary>
+   public static Tensor tensor(double real, double imaginary, torch.ScalarType? dtype = null, torch.Device device = null, bool requiresGrad = false)
+   {
+       device = torch.InitializeDevice(device);
+       var handle = THSTensor_newComplexFloat64Scalar(real, imaginary, (int)device.type, device.index, requiresGrad);
+       if (handle == IntPtr.Zero) { torch.CheckForErrors(); }
+       var tensor = new Tensor(handle);
+       if (device is not null) {
+           tensor = dtype.HasValue ? tensor.to(dtype.Value, device) : tensor.to(device);
+       } else if (dtype.HasValue) {
+           tensor = tensor.to_type(dtype.Value);
+       }
+       return tensor;
+   }
+
    /// <summary>
    /// Create a scalar tensor from a single value
    /// </summary>

@@ -1520,7 +1587,11 @@ public static Tensor tensor(IList<double> rawArray, long dim0, long dim1, long d
    public static Tensor tensor(IList<(float Real, float Imaginary)> rawArray, long[] dimensions, torch.ScalarType? dtype = null, torch.Device device = null, bool requiresGrad = false)
    {
        torch.InitializeDeviceType(DeviceType.CPU);
-       var dataArray = rawArray.ToArray();
+       var dataArray = new float[rawArray.Count * 2];
+       for (var i = 0; i < rawArray.Count; i++) {
+           dataArray[i * 2] = rawArray[i].Real;
+           dataArray[i * 2 + 1] = rawArray[i].Imaginary;
+       }
        unsafe {
            var dataHandle = GCHandle.Alloc(dataArray, GCHandleType.Pinned);
            var dataArrayAddr = dataHandle.AddrOfPinnedObject();

@@ -1539,13 +1610,7 @@ public static Tensor tensor(IList<(float Real, float Imaginary)> rawArray, long[
            handle = THSTensor_new(dataArrayAddr, deleter, dimensions, dimensions.Length, (sbyte)ScalarType.ComplexFloat32, requiresGrad);
        }
        if (handle == IntPtr.Zero) { torch.CheckForErrors(); }
-       var tensor = new Tensor(handle);
-       if (device is not null) {
-           tensor = dtype.HasValue ? tensor.to(dtype.Value, device) : tensor.to(device);
-       } else if (dtype.HasValue) {
-           tensor = tensor.to_type(dtype.Value);
-       }
-       return tensor;
+       return new Tensor(handle);
    }

@@ -1599,7 +1664,11 @@ public static Tensor tensor(IList<(float Real, float Imaginary)> rawArray, long
    public static Tensor tensor(IList<System.Numerics.Complex> rawArray, long[] dimensions, torch.ScalarType? dtype = null, torch.Device device = null, bool requiresGrad = false)
    {
        torch.InitializeDeviceType(DeviceType.CPU);
-       var dataArray = rawArray.ToArray();
+       var dataArray = new double[rawArray.Count * 2];
+       for (var i = 0; i < rawArray.Count; i++) {
+           dataArray[i * 2] = rawArray[i].Real;
+           dataArray[i * 2 + 1] = rawArray[i].Imaginary;
+       }
        unsafe {
            var dataHandle = GCHandle.Alloc(dataArray, GCHandleType.Pinned);
            var dataArrayAddr = dataHandle.AddrOfPinnedObject();

@@ -1618,13 +1687,7 @@ public static Tensor tensor(IList<System.Numerics.Complex> rawArray, long[] dime
            handle = THSTensor_new(dataArrayAddr, deleter, dimensions, dimensions.Length, (sbyte)ScalarType.ComplexFloat64, requiresGrad);
        }
        if (handle == IntPtr.Zero) { torch.CheckForErrors(); }
-       var tensor = new Tensor(handle);
-       if (device is not null) {
-           tensor = dtype.HasValue ? tensor.to(dtype.Value, device) : tensor.to(device);
-       } else if (dtype.HasValue) {
-           tensor = tensor.to_type(dtype.Value);
-       }
-       return tensor;
+       return new Tensor(handle);
    }

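The complex-tensor hunks above replace `rawArray.ToArray()` with a manual copy because native LibTorch expects complex data as interleaved (real, imaginary) pairs, not as an array of tuples. The layout can be shown in isolation; `ComplexInterop.Interleave` is a hypothetical helper name with no TorchSharp dependency:

```csharp
using System.Collections.Generic;

static class ComplexInterop
{
    // Flattens (Real, Imaginary) tuples into the interleaved float layout
    // that a native complex-tensor constructor consumes: real parts at even
    // indices, imaginary parts at odd indices.
    public static float[] Interleave(IList<(float Real, float Imaginary)> rawArray)
    {
        var dataArray = new float[rawArray.Count * 2];
        for (var i = 0; i < rawArray.Count; i++) {
            dataArray[i * 2] = rawArray[i].Real;
            dataArray[i * 2 + 1] = rawArray[i].Imaginary;
        }
        return dataArray;
    }
}
```

For example, interleaving `{ (1, 2), (3, 4) }` yields the flat array `[1, 2, 3, 4]`, which can then be pinned and handed to the native constructor as a single contiguous buffer.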
src/TorchSharp/Tensor/Tensor.cs

Lines changed: 2 additions & 2 deletions
@@ -1243,7 +1243,7 @@ public Tensor take_along_dim(Tensor indices)
 /// <param name="indices">The indices into input. Must have long dtype.</param>
 /// <returns></returns>
 /// <remarks>Functions that return indices along a dimension, like torch.argmax() and torch.argsort(), are designed to work with this function.</remarks>
-public Tensor take_along_dim(IEnumerable<long> indices) => take_along_dim(Int64Tensor.from(indices.ToArray()));
+public Tensor take_along_dim(IEnumerable<long> indices) => take_along_dim(torch.tensor(indices.ToArray()));
 
 /// <summary>
 /// Selects values from input at the 1-dimensional indices from indices along the given dim.

@@ -1267,7 +1267,7 @@ public Tensor take_along_dim(Tensor indices, long dimension)
 /// <param name="dim">Dimension to select along.</param>
 /// <returns></returns>
 /// <remarks>Functions that return indices along a dimension, like torch.argmax() and torch.argsort(), are designed to work with this function.</remarks>
-public Tensor take_along_dim(IEnumerable<long> indices, long dim) => take_along_dim(Int64Tensor.from(indices.ToArray()), dim);
+public Tensor take_along_dim(IEnumerable<long> indices, long dim) => take_along_dim(torch.tensor(indices.ToArray()), dim);
 
 [DllImport("LibTorchSharp")]
 static extern IntPtr THSTensor_reshape(IntPtr tensor, IntPtr shape, int length);
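The `take_along_dim` convenience overloads keep the same behavior; only the internal index-tensor construction moves to `torch.tensor`. A usage sketch (hedged: assumes a `torch.tensor(long[])` overload matching the pattern used in the factory file of this commit):

```csharp
// take_along_dim consumes int64 index tensors, such as those returned by
// argmax/argsort; the IEnumerable<long> overload wraps raw indices for you.
var t = torch.tensor(new long[] { 10, 30, 20 });
var picked = t.take_along_dim(new long[] { 1 }, 0);  // select index 1 along dim 0
```

Callers see no API change here; this is purely an internal migration away from the removed `Int64Tensor` class.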
