Lower AtenEmptyMemoryFormat with zero dimensions through tosa-linalg pipeline #6
base: mw_main
```
@@ -6454,6 +6454,11 @@ class ConvertAtenConstPatternOp : public OpConversionPattern<AtenOpT> {
     for (auto s : shape)
       size *= s;

+    if (size == 0) {
```
Member:
This change should be made in the upstream. Given that the current tosa pipeline produces an invalid tosa.const op, we can fix the issue and lock it down using the same

Member (Author):
Do we have time to have it accepted upstream and merged with

Member:
Maybe, it's a simple enough change. In any case, we will probably update torch-mlir during LCM.
We don't for this particular case. There is already a test in this repo that locks down that bailing out of the tosa lowering for a particular unsupported op triggers the linalg lowering in our pipeline. That is enough to lock down our pipeline; we don't need to add a test for every such unsupported op.
```
+      return rewriter.notifyMatchFailure(op,
+                                         "Shape must not have zero dimensions");
```
Member:
nit: The error message could say "Shape must not have a dimension of size zero".
```
+    }
+
     SmallVector<int32_t> values(size, fillVal);
     auto constOp =
         tosa::getConstTensor<int32_t>(rewriter, op, values, shape).value();
```
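As a side note for readers outside the conversion code, here is a minimal standalone sketch of the reasoning behind the guard (plain C++, not the actual torch-mlir pattern; the function name below is made up for illustration). Because the element count of a static shape is the product of its dimension sizes, any dimension of size zero drives the product to zero, which is exactly the condition the new check uses to decline the TOSA const lowering so the linalg path can handle the op instead.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative stand-in for the pattern's decision: returns true when the
// shape contains a dimension of size zero, i.e. when the total element
// count is zero and a splat tosa.const of that shape cannot be built.
static bool shouldBailOutOfTosaConstLowering(const std::vector<int64_t> &shape) {
  int64_t size = 1;
  for (auto s : shape)
    size *= s; // any zero-sized dim forces the product to zero
  return size == 0; // mirrors the `if (size == 0)` guard in the patch
}

int main() {
  // Shape from the lit test below: 1x0x256 has zero elements, so the TOSA
  // pattern should report a match failure and the linalg lowering takes over.
  std::vector<int64_t> shape{1, 0, 256};
  std::cout << (shouldBailOutOfTosaConstLowering(shape) ? "bail out to linalg"
                                                        : "lower to tosa.const")
            << "\n";
  return 0;
}
```

The lit test in the next hunk exercises exactly this fall-through for a 1x0x256 shape.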
```
@@ -53,3 +53,20 @@ func.func @tm_scan(%arg0: tensor<1x512xi64>) -> (tensor<1x512xi64>, tensor<1xi64>)
   } -> tensor<1x512xi64>, tensor<1xi64>
   return %2#0, %2#1 : tensor<1x512xi64>, tensor<1xi64>
 }
+
+// -----
+// CHECK-LABEL: func.func @torch.aten.empty.memory_format() -> tensor<1x0x256xf32> {
+// CHECK: %[[EMPTY_TENSOR:.*]] = tensor.empty() : tensor<1x0x256xf32>
+// CHECK: return %[[EMPTY_TENSOR]] : tensor<1x0x256xf32>
+// CHECK: }
+func.func @torch.aten.empty.memory_format() -> !torch.vtensor<[1,0,256],f32> {
```
Member:
If the empty tensor is expected to be lowered to linalg later, an e2e test could help lock down the expected behavior.
```
+  %c1 = torch.constant.int 1
+  %c0 = torch.constant.int 0
+  %c256 = torch.constant.int 256
+  %2452 = torch.prim.ListConstruct %c1, %c0, %c256 : (!torch.int, !torch.int, !torch.int) -> !torch.list<int>
+  %none = torch.constant.none
+  %cpu = torch.constant.device "cpu"
+  %false = torch.constant.bool false
+  %out = torch.aten.empty.memory_format %2452, %none, %none, %cpu, %false, %none : !torch.list<int>, !torch.none, !torch.none, !torch.Device, !torch.bool, !torch.none -> !torch.vtensor<[1,0,256],f32>
+  return %out : !torch.vtensor<[1,0,256],f32>
+}
```
Thanks for fixing this, Hari. Do you think it makes sense to create an issue for the upstream, if it is legal for a tensor to have zero dimensions?