
Fix Conv1d w8a32 operator (#16607)

Open
mgiordy wants to merge 1 commit into pytorch:main from mgiordy:export-D89863750

Conversation


@mgiordy mgiordy commented Jan 14, 2026

Summary:

This diff fixes the Conv1d w8a32 operator by adding a transformation to the `val` attribute of the `other_inputs[0].meta` dictionary. Specifically, the `permute` operation is applied to the `original_val` tensor within the `fake_mode` context, and the resulting `transposed_val` is assigned to `transposed_inputs.meta["val"]`.

Reviewed By: mcremon-meta

Differential Revision: D89863750


pytorch-bot bot commented Jan 14, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16607

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 81fc23b with merge base e638059:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 14, 2026

meta-codesync bot commented Jan 14, 2026

@mgiordy has exported this pull request. If you are a Meta employee, you can view the originating Diff in D89863750.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot AI review requested due to automatic review settings January 29, 2026 14:49
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Jan 29, 2026

Copilot AI left a comment


Pull request overview

This PR fixes the Conv1d w8a32 operator by adding metadata propagation for transposed tensors and adding input validation to prevent unsupported configurations.

Changes:

  • Added metadata propagation for the val attribute when creating transposed inputs and weights in the Conv1d w8a32 operator
  • Added validation in patterns.py to bail early when input length doesn't equal kernel size (marked as "not yet supported")

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
backends/cadence/aot/quantizer/fusion_pass.py
    Adds proper fake_mode-aware metadata propagation for transposed_inputs and transposed_weights when transforming Conv1d tensors from NCL to NLC format
backends/cadence/aot/quantizer/patterns.py
    Adds validation to reject Conv1d operations where the input length doesn't equal the kernel size (3), marking this configuration as not yet supported


Comment on lines +657 to +664
# Bail if length != kernel size - Not yet supported
if inputs_shape[-1] != cnn_weights_shape[2]:
    return (
        PartitionAnchors(
            empty=True,
        ),
        conv_layer,
    )

Copilot AI Jan 29, 2026


This check restricts the w8a32_conv pattern to only match when the input length equals the kernel size (3). While the comment indicates this is intentionally not yet supported, it is quite restrictive. Standard convolution operations support input lengths greater than or equal to the kernel size. The reference implementation in ref_implementations.py (lines 926-970) and the test in test_ref_implementations.py (lines 1156-1166, which use length=5 with kernel=3) both support arbitrary input lengths. Consider whether this restriction is necessary, or if it should be relaxed to allow input_length >= kernel_size to enable the optimization in more cases.

Comment on lines +435 to +468
# Propagate val metadata for transposed_inputs
if "val" in other_inputs[0].meta:
    original_val = other_inputs[0].meta["val"]
    fake_mode = original_val.fake_mode
    if fake_mode is not None:
        with fake_mode:
            transposed_val = torch.ops.aten.permute.default(
                original_val, [0, 2, 1]
            )
        transposed_inputs.meta["val"] = transposed_val
    else:
        transposed_inputs.meta["val"] = torch.ops.aten.permute.default(
            original_val, [0, 2, 1]
        )
copy_node_metadata(transposed_inputs, other_inputs[0])

transposed_weights = graph_module.graph.call_function(
    torch.ops.aten.permute.default,
    (weights_inputs[0], [2, 0, 1]),  # NCL -> LNC
)
# Propagate val metadata for transposed_weights
if "val" in weights_inputs[0].meta:
    original_val = weights_inputs[0].meta["val"]
    fake_mode = original_val.fake_mode
    if fake_mode is not None:
        with fake_mode:
            transposed_val = torch.ops.aten.permute.default(
                original_val, [2, 0, 1]
            )
        transposed_weights.meta["val"] = transposed_val
    else:
        transposed_weights.meta["val"] = torch.ops.aten.permute.default(
            original_val, [2, 0, 1]
        )

Copilot AI Jan 29, 2026


The metadata propagation logic for transposed_inputs (lines 435-448) and transposed_weights (lines 455-468) is duplicated with only minor variations. This pattern also appears elsewhere in the codebase (e.g., lines 164-176, 189-200, 376-385, 653-671). Consider extracting this into a helper function to reduce code duplication and improve maintainability. The helper function could take parameters like the node, transformation operation, and transformation arguments.
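A helper along the lines Copilot describes could look like the sketch below. This is not code from this PR: the name `propagate_val_meta` and the dict-based node stand-ins are illustrative. In the real fusion pass the meta dicts would belong to torch.fx.Node objects and the transform would run torch.ops.aten.permute.default under the source value's fake_mode.

```python
def propagate_val_meta(src_meta, dst_meta, transform):
    """Copy a transformed 'val' entry from src_meta to dst_meta, if present.

    `transform` receives the original 'val' and returns the transformed one;
    in the fusion pass it would be a permute executed under fake_mode.
    """
    if "val" not in src_meta:
        return
    original_val = src_meta["val"]
    fake_mode = getattr(original_val, "fake_mode", None)
    if fake_mode is not None:
        # FakeTensor case: run the transform inside the fake mode context.
        with fake_mode:
            dst_meta["val"] = transform(original_val)
    else:
        # Fallback when the stored value carries no fake_mode.
        dst_meta["val"] = transform(original_val)


# Toy usage with nested lists standing in for tensors; the "transform" here
# is a plain 2D transpose rather than an aten permute.
src = {"val": [[1, 2, 3], [4, 5, 6]]}
dst = {}
propagate_val_meta(src, dst, transform=lambda v: [list(row) for row in zip(*v)])
print(dst["val"])  # [[1, 4], [2, 5], [3, 6]]
```

Each call site would then reduce to one line per transposed node, passing the source node's meta, the new node's meta, and the appropriate permutation.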

mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Feb 4, 2026
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Feb 4, 2026
@meta-codesync meta-codesync bot changed the title from "Fix Conv1d w8a32 operator" to "Fix Conv1d w8a32 operator (#16607)" on Apr 5, 2026
@mgiordy mgiordy force-pushed the export-D89863750 branch from 7850292 to 24f179a Compare April 5, 2026 10:34
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 5, 2026
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 5, 2026
@mgiordy mgiordy force-pushed the export-D89863750 branch from 24f179a to d870e83 Compare April 5, 2026 10:38
Copilot AI review requested due to automatic review settings April 8, 2026 08:14
@mgiordy mgiordy force-pushed the export-D89863750 branch from d870e83 to 54e7f94 Compare April 8, 2026 08:14
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 8, 2026
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 8, 2026
@mgiordy mgiordy force-pushed the export-D89863750 branch from 54e7f94 to f01d1a1 Compare April 8, 2026 08:17

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.



Comment on lines +750 to +751
# Bail if length != kernel size - Not yet supported
if inputs_shape[-1] != cnn_weights_shape[2]:

Copilot AI Apr 8, 2026


The new input-length guard inputs_shape[-1] != cnn_weights_shape[2] looks incorrect for Conv1d: inputs_shape[-1] is the sequence length (L), while cnn_weights_shape[2] is the kernel size (K=3). cadence::quantized_w8a32_conv (and its meta/ref implementations) support L > K (output length is L - K + 1), and existing tests cover L=5, K=3. This condition would incorrectly bail out for normal Conv1d inputs and prevent the pattern from ever matching.

Consider removing this guard, or (if needed) only bailing when L < K (or other truly unsupported cases).

Suggested change:
- # Bail if length != kernel size - Not yet supported
- if inputs_shape[-1] != cnn_weights_shape[2]:
+ # Bail only when the input length is smaller than the kernel size.
+ # Conv1d supports input lengths greater than the kernel size.
+ if inputs_shape[-1] < cnn_weights_shape[2]:
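The output-length arithmetic behind this review point (stride 1, no padding, no dilation: out_len = in_len - kernel_size + 1) can be checked directly. The helper name below is illustrative, not part of the PR:

```python
def conv1d_out_len(in_len: int, kernel_size: int) -> int:
    """Output length of a stride-1, no-padding, no-dilation Conv1d."""
    if in_len < kernel_size:
        # No valid window positions: this is the only truly unsupported case.
        raise ValueError("input shorter than kernel")
    return in_len - kernel_size + 1


print(conv1d_out_len(5, 3))  # 3: the L=5, K=3 case from the referenced tests
print(conv1d_out_len(3, 3))  # 1: the only shape the `!=` guard would accept
```

This is why the reviewer argues the guard should at most reject in_len < kernel_size: every in_len >= kernel_size yields at least one valid output position.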

Comment on lines +747 to +750
inputs = conv_layer.args[0]
if "tensor_meta" in inputs.meta:
    inputs_shape = inputs.meta["tensor_meta"].shape
    # Bail if length != kernel size - Not yet supported

Copilot AI Apr 8, 2026


This new shape-validation block is gated by if hasattr(cnn_weights.meta, "tensor_meta") above. Since fx.Node.meta is a dict, hasattr(..., "tensor_meta") will always be False, so none of the weight/input shape checks (including the new input-length check) will ever run.

Use a dict-key check (e.g., "tensor_meta" in cnn_weights.meta) so the validations actually execute when metadata is available.
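The underlying Python behavior is easy to verify: hasattr looks up object attributes, not dictionary keys, so it never matches entries stored in a dict such as a node's meta. A minimal check (the `meta` dict here is a stand-in, not the fx node itself):

```python
# Stand-in for an fx.Node.meta dict that does contain the key in question.
meta = {"tensor_meta": object()}

print(hasattr(meta, "tensor_meta"))  # False: dicts have no such attribute
print("tensor_meta" in meta)         # True: the dict-key check that works
```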

Comment on lines +441 to +450
# Propagate val metadata for transposed_inputs
if "val" in other_inputs[0].meta:
    original_val = other_inputs[0].meta["val"]
    fake_mode = original_val.fake_mode
    if fake_mode is not None:
        with fake_mode:
            transposed_val = torch.ops.aten.permute.default(
                original_val, [0, 2, 1]
            )
        transposed_inputs.meta["val"] = transposed_val

Copilot AI Apr 8, 2026


get_args_and_kwargs_mixed_w8a32_conv now conditionally propagates meta["val"] for the inserted permute nodes (and adds a fake_mode fallback). There doesn't appear to be any unit/integration test coverage exercising QuantFusion on a Conv1d->quantized_w8a32_conv rewrite, so regressions here (e.g., missing/incorrect meta causing later passes to fail) may go unnoticed.

Add a small test that runs QuantFusion on a minimal Conv1d graph and asserts the resulting graph contains the expected permutes + cadence::quantized_w8a32_conv, and that the graph can run FakeTensor/meta propagation without errors.

mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 8, 2026
@mgiordy mgiordy force-pushed the export-D89863750 branch from f01d1a1 to 065e5a9 Compare April 8, 2026 10:59
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 8, 2026
@mgiordy mgiordy force-pushed the export-D89863750 branch from 065e5a9 to 51c81d0 Compare April 8, 2026 11:03
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 8, 2026
@mgiordy mgiordy force-pushed the export-D89863750 branch from 51c81d0 to 5132ba0 Compare April 8, 2026 13:17
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 8, 2026
Copilot AI review requested due to automatic review settings April 8, 2026 15:36
@mgiordy mgiordy force-pushed the export-D89863750 branch from 5132ba0 to b96a064 Compare April 8, 2026 15:36
@mgiordy mgiordy force-pushed the export-D89863750 branch from b96a064 to 81fc23b Compare April 8, 2026 15:40

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.



Comment on lines +747 to +758
inputs = conv_layer.args[0]
if "tensor_meta" in inputs.meta:
    inputs_shape = inputs.meta["tensor_meta"].shape
    # Bail if length != kernel size - Not yet supported
    if inputs_shape[-1] != cnn_weights_shape[2]:
        return (
            PartitionAnchors(
                empty=True,
            ),
            conv_layer,
        )

Copilot AI Apr 8, 2026


The new anchor-bail condition inputs_shape[-1] != cnn_weights_shape[2] incorrectly restricts quantized_w8a32 Conv1d fusion to cases where input length equals kernel size (3). The operator’s fake/meta kernel and reference implementation support general input lengths (output length in_length - kernel + 1), and existing tests exercise in_length=5 with kernel=3 (see backends/cadence/aot/tests/test_ref_implementations.py:1170+). This check will prevent valid fusions and likely regress model coverage; it should be removed or replaced with the actual supported constraint (if any).

Suggested change (delete the block):
- inputs = conv_layer.args[0]
- if "tensor_meta" in inputs.meta:
-     inputs_shape = inputs.meta["tensor_meta"].shape
-     # Bail if length != kernel size - Not yet supported
-     if inputs_shape[-1] != cnn_weights_shape[2]:
-         return (
-             PartitionAnchors(
-                 empty=True,
-             ),
-             conv_layer,
-         )


Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported meta-exported


2 participants