5 changes: 5 additions & 0 deletions lib/Conversion/TorchToTosa/TorchToTosa.cpp
@@ -6454,6 +6454,11 @@ class ConvertAtenConstPatternOp : public OpConversionPattern<AtenOpT> {
for (auto s : shape)
size *= s;

if (size == 0) {

Comment:

Thanks for fixing this, Hari. Do you think it makes sense to file an issue upstream, if it is legal for a tensor to have zero dimensions?

Comment (Member):

This change should be made directly in the upstream torch-mlir repo instead of in our forked repo. Generally, the goal is to keep the divergence from the upstream repo as minimal as possible.

Given that the current tosa pipeline produces an invalid tosa.const op, we can fix the issue and lock it down using the same torch-to-tosa-backend test you've added here.

Reply (Member Author):

Do we have time to have it accepted upstream and merged with torch-mlir? Also, how do we lock down the torch-to-tosa-linalg test upstream? I feel like these test cases will just be lost.

Reply (Member):

> Do we have time to have it accepted upstream and merged with torch-mlir?

Maybe; it's a simple enough change. In any case, we will probably update torch-mlir during LCM.

> how do we lock down the torch-to-tosa-linalg test upstream

We don't, for this particular case. There is already a test in this repo verifying that when the tosa lowering bails out on an unsupported op, the linalg lowering is triggered in our pipeline. That is enough to lock down our pipeline; we don't need to add tests for all such unsupported ops.

return rewriter.notifyMatchFailure(op,
"Shape must not have zero dimensions");
Comment (Member):

nit: The error message can say "Shape must not have a dimension of size zero"

}

SmallVector<int32_t> values(size, fillVal);
auto constOp =
tosa::getConstTensor<int32_t>(rewriter, op, values, shape).value();
(next changed file — name not shown in this view)
@@ -146,3 +146,17 @@ func.func @torch.aten.size.int(%arg0: !torch.vtensor<[4,2],f32>) -> !torch.int {
%0 = torch.aten.size.int %arg0, %c2 : !torch.vtensor<[4,2],f32>, !torch.int -> !torch.int
return %0 : !torch.int
}

// -----
func.func @torch.aten.empty.memory_format() -> !torch.vtensor<[1,0,256],f32> {
%c1 = torch.constant.int 1
%c0 = torch.constant.int 0
%c256 = torch.constant.int 256
%2452 = torch.prim.ListConstruct %c1, %c0, %c256 : (!torch.int, !torch.int, !torch.int) -> !torch.list<int>
%none = torch.constant.none
%cpu = torch.constant.device "cpu"
%false = torch.constant.bool false
// expected-error @below {{failed to legalize operation 'torch.aten.empty.memory_format' that was explicitly marked illegal}}
%out = torch.aten.empty.memory_format %2452, %none, %none, %cpu, %false, %none : !torch.list<int>, !torch.none, !torch.none, !torch.Device, !torch.bool, !torch.none -> !torch.vtensor<[1,0,256],f32>
return %out : !torch.vtensor<[1,0,256],f32>
}
(next changed file — name not shown in this view)
@@ -53,3 +53,20 @@ func.func @tm_scan(%arg0: tensor<1x512xi64>) -> (tensor<1x512xi64>, tensor<1xi64>)
} -> tensor<1x512xi64>, tensor<1xi64>
return %2#0, %2#1 : tensor<1x512xi64>, tensor<1xi64>
}

// -----
// CHECK-LABEL: func.func @torch.aten.empty.memory_format() -> tensor<1x0x256xf32> {
// CHECK: %[[EMPTY_TENSOR:.*]] = tensor.empty() : tensor<1x0x256xf32>
// CHECK: return %[[EMPTY_TENSOR]] : tensor<1x0x256xf32>
// CHECK: }
func.func @torch.aten.empty.memory_format() -> !torch.vtensor<[1,0,256],f32> {
Comment (Member):

If the empty tensor is expected to be lowered to linalg later, an e2e test can help lock down the expected behavior.

%c1 = torch.constant.int 1
%c0 = torch.constant.int 0
%c256 = torch.constant.int 256
%2452 = torch.prim.ListConstruct %c1, %c0, %c256 : (!torch.int, !torch.int, !torch.int) -> !torch.list<int>
%none = torch.constant.none
%cpu = torch.constant.device "cpu"
%false = torch.constant.bool false
%out = torch.aten.empty.memory_format %2452, %none, %none, %cpu, %false, %none : !torch.list<int>, !torch.none, !torch.none, !torch.Device, !torch.bool, !torch.none -> !torch.vtensor<[1,0,256],f32>
return %out : !torch.vtensor<[1,0,256],f32>
}