
@RahulC7 (Contributor) commented Dec 21, 2025

Copilot AI review requested due to automatic review settings December 21, 2025 22:13
@pytorch-bot bot commented Dec 21, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16355

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 1 Unrelated Failure

As of commit afcf441 with merge base 0ee2f49:

NEW FAILURE - The following job has failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Dec 21, 2025
@meta-codesync bot commented Dec 21, 2025

@RahulC7 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D88898823.

@github-actions bot commented

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot AI (Contributor) left a comment

Pull request overview

This PR adds comprehensive test coverage for three previously untested Cadence quantizers: CadenceWith16BitConvActivationsQuantizer, CadenceWithSoftmaxQuantizer, and CadenceWithLayerNormQuantizer. The tests verify that these quantizers correctly annotate graph nodes with the expected quantization specifications.

  • Removes TODO comments for the three quantizers from the exclusion list
  • Adds four new test cases covering conv1d, conv2d, softmax, and layer_norm operations
  • Implements corresponding graph builder helper methods following the established pattern (a sketch of this pattern appears below)
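
To make the described pattern concrete, here is a minimal sketch of one such annotation test. It is illustrative only: the helper name _build_softmax_graph, the quantizer import path, and the assertion on the quantization_annotation meta key are assumptions modeled on the _build_layer_norm_graph helper quoted later on this page, not code taken from the PR.

    import torch

    # Assumed import path for the Cadence quantizers under test
    from executorch.backends.cadence.aot.quantizer.quantizer import (
        CadenceWithSoftmaxQuantizer,
    )

    def test_softmax_annotation(self) -> None:
        # Hypothetical helper, analogous to _build_layer_norm_graph below
        gm, softmax_node = self._build_softmax_graph()
        # Run the quantizer's annotation pass over the graph module
        CadenceWithSoftmaxQuantizer().annotate(gm)
        # Verify the node was annotated with quantization specifications
        self.assertIn("quantization_annotation", softmax_node.meta)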


Summary: Add annotation tests for CadenceWith16BitConvActivationsQuantizer covering both conv1d and conv2d operations.

Differential Revision: D88895865
Differential Revision: D88896712
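
For context, a conv1d variant of the graph-builder helper could look like the following sketch. The helper name, tensor shapes, and use of single_op_builder are assumptions mirroring the layer_norm helper quoted later on this page, not the PR's actual code.

    def _build_conv1d_graph(self) -> tuple[torch.fx.GraphModule, torch.fx.Node]:
        """Hypothetical sketch; assumes single_op_builder as in the layer_norm helper."""
        # Input shape: (batch, in_channels, length) -- illustrative values
        x = torch.randn(1, 3, 16)
        # Weight shape: (out_channels, in_channels, kernel_size)
        w = torch.randn(8, 3, 3)
        gm = single_op_builder(
            placeholders=(x, w),
            op=torch.ops.aten.conv1d.default,
            args=(x, w),
        )
        conv_nodes = gm.graph.find_nodes(
            op="call_function",
            target=torch.ops.aten.conv1d.default,
        )
        self.assertEqual(len(conv_nodes), 1, "Should find exactly one conv1d node")
        # source_fn_stack metadata is required by quantizer pattern matching
        conv_nodes[0].meta["source_fn_stack"] = [
            ("conv1d", torch.ops.aten.conv1d.default)
        ]
        return gm, conv_nodes[0]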
RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 22, 2025
RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 22, 2025
Copilot AI review requested due to automatic review settings December 22, 2025 20:06
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.



Comment on lines +254 to +277
def _build_layer_norm_graph(self) -> tuple[torch.fx.GraphModule, torch.fx.Node]:
    """Build a simple graph with a layer_norm operation."""
    # Input shape: (batch, features)
    x = torch.randn(1, 10)
    # normalized_shape must match the last dimension(s) of input
    normalized_shape = [10]
    gm = single_op_builder(
        placeholders=(x,),
        op=torch.ops.aten.layer_norm.default,
        args=(x, normalized_shape),
    )

    layer_norm_nodes = gm.graph.find_nodes(
        op="call_function",
        target=torch.ops.aten.layer_norm.default,
    )
    self.assertEqual(
        len(layer_norm_nodes), 1, "Should find exactly one layer_norm node"
    )
    # Add source_fn_stack metadata required by quantizer pattern matching
    layer_norm_nodes[0].meta["source_fn_stack"] = [
        ("layer_norm", torch.ops.aten.layer_norm.default)
    ]
    return gm, layer_norm_nodes[0]
Copilot AI commented Dec 22, 2025

The _build_layer_norm_graph method takes a different approach from the other graph builders (single_op_builder instead of GraphBuilder) and manually adds source_fn_stack metadata after graph construction. While this works, it introduces an inconsistency in the codebase.

Consider refactoring to use GraphBuilder directly, like the other methods, for consistency. The normalized_shape parameter can be passed directly in args, since GraphBuilder's call_operator handles both tensor and non-tensor arguments; this would also allow attaching the source_fn_stack metadata during construction rather than after the fact.
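
A minimal sketch of the suggested refactor, assuming GraphBuilder exposes placeholder, call_operator, output, and get_graph_module as used elsewhere in the Cadence tests, and an assumed import path; the real API and its metadata handling may differ:

    # Assumed import path
    from executorch.backends.cadence.aot.graph_builder import GraphBuilder

    def _build_layer_norm_graph(self) -> tuple[torch.fx.GraphModule, torch.fx.Node]:
        """Hypothetical GraphBuilder-based variant of the helper above."""
        builder = GraphBuilder()
        x = builder.placeholder("x", torch.randn(1, 10))
        # normalized_shape is passed directly as a non-tensor argument
        layer_norm = builder.call_operator(
            op=torch.ops.aten.layer_norm.default,
            args=(x, [10]),
        )
        builder.output([layer_norm])
        gm = builder.get_graph_module()
        node = gm.graph.find_nodes(
            op="call_function",
            target=torch.ops.aten.layer_norm.default,
        )[0]
        # If call_operator accepts node metadata, source_fn_stack could
        # instead be attached during construction, as suggested above.
        node.meta["source_fn_stack"] = [
            ("layer_norm", torch.ops.aten.layer_norm.default)
        ]
        return gm, node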


Labels

CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed), fb-exported, meta-exported
