Adds QAT ConvBN fuse pass to utils #17599
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17599
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 2 Unrelated Failures as of commit 05b4b89 with merge base 19e8b68:
NEW FAILURES - The following jobs have failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
BROKEN TRUNK - The following job failed but was present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@JakeStevens has exported this pull request. If you are a Meta employee, you can view the originating Diff in D93904683.
Summary: An earlier PR added support for a pass that quantizes the bias resulting from QAT ConvBN fusion when the convolution had no initial bias. This PR adds it to the NXP calibrate_and_quantize method. Differential Revision: D93904683
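For context on why the pass is needed: folding BatchNorm into a convolution that had no bias necessarily creates one, because the BN shift term becomes the fused bias. A minimal sketch of the standard per-channel conv-bn folding math (illustrative only, not the ExecuTorch implementation):

```python
import math

def fuse_conv_bn_bias(conv_bias, bn_mean, bn_var, bn_gamma, bn_beta, eps=1e-5):
    """Fold BatchNorm statistics into a convolution bias, per output channel.
    With no initial conv bias, pass conv_bias=None: a new float bias still
    comes out, which is why it must later be quantized separately."""
    fused = []
    for c in range(len(bn_mean)):
        b = 0.0 if conv_bias is None else conv_bias[c]
        scale = bn_gamma[c] / math.sqrt(bn_var[c] + eps)
        fused.append((b - bn_mean[c]) * scale + bn_beta[c])
    return fused

# A conv without bias still yields a nonzero fused bias after folding:
bias = fuse_conv_bn_bias(None, bn_mean=[0.5], bn_var=[1.0], bn_gamma=[2.0], bn_beta=[0.1])
```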
larryliu0820 left a comment:
Review automatically exported from Phabricator review in Meta.
@StrycekSimon please review.
@StrycekSimon, the internal CI passed on this PR. Which failure are you referring to?
```python
    model, input_shape, use_qat=True, use_neutron_for_format_conversion=False
).exported_program()

assert any("lowered_module" in node.name for node in edge_program.graph.nodes)
```
Please change to checking targets. We check for delegate calls and have a util for it.
Suggested change:

```diff
-assert any("lowered_module" in node.name for node in edge_program.graph.nodes)
+assert graph_contains_any_of_ops(edge_program.graph, [torch.ops.higher_order.executorch_call_delegate])
```
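Checking node targets is more robust than substring-matching node names, since names are autogenerated and can change between exports while the op target is stable. A torch-free sketch of the idea (the `Node` class and `CALL_DELEGATE` sentinel here are hypothetical stand-ins, not ExecuTorch or torch.fx APIs):

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical stand-in for torch.ops.higher_order.executorch_call_delegate.
CALL_DELEGATE = object()

@dataclass
class Node:
    name: str    # autogenerated, unstable across exports
    target: Any  # the actual op being called, stable

def graph_contains_any_of_ops(nodes, ops):
    """Return True if any node's target is one of the given ops."""
    return any(node.target in ops for node in nodes)

# The delegate call is found by target even though its name could be anything:
nodes = [Node("getitem_3", None), Node("delegate_1", CALL_DELEGATE)]
assert graph_contains_any_of_ops(nodes, [CALL_DELEGATE])
```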
Yes, this is exactly the intention. We need to "manually" quantize the bias after we fuse conv and BN in QAT.
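The "manual" bias quantization can be sketched as follows. This shows the common int8 convention where the bias is stored as int32 with scale equal to input_scale × weight_scale; it illustrates the usual scheme, not necessarily the exact NXP pass:

```python
def quantize_bias(bias, input_scale, weight_scale):
    """Quantize a float bias to int32 values using the common convention
    bias_scale = input_scale * weight_scale, zero point 0.
    (Illustrative sketch; the real pass operates on graph nodes.)"""
    bias_scale = input_scale * weight_scale
    int32_min, int32_max = -2**31, 2**31 - 1
    return [max(int32_min, min(int32_max, round(b / bias_scale))) for b in bias]

# The fused float bias becomes int32 values; dequantizing with the same
# scale recovers the original values up to rounding error.
q_bias = quantize_bias([0.05, -0.02], input_scale=0.1, weight_scale=0.01)
```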
```diff
@@ -23,6 +23,8 @@
     to_quantized_edge_program,
 )
 from executorch.backends.nxp.tests.executors import OverrideTargetSupportCheck
```
Suggested change:

```diff
-from executorch.backends.nxp.tests.executors import OverrideTargetSupportCheck
+from executorch.backends.nxp.tests.executors import (
+    graph_contains_any_of_ops,
+    OverrideTargetSupportCheck,
+)
```
... to fix the nxp-unittest and linting errors.
OK, I think this is finally ready for NXP.
This is currently a disabled test on our CI, as this feature had not been implemented yet. Let's not block this PR any further; I will raise a bugfix PR later if needed.
Summary:
An earlier PR added support for a pass that quantizes the bias resulting from QAT ConvBN fusion when the convolution had no initial bias.
This PR adds it to the NXP calibrate_and_quantize method.
Differential Revision: D93904683
cc @robert-kalmar @digantdesai