Conversation

SS-JIA

Stack from ghstack (oldest at bottom):

Changes

  • Handle cases where an operator needs to specify a separate storage type / memory layout for each individual output.
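
A minimal sketch of what per-output specification could look like. All names below (`VkStorageType`, `VkMemoryLayout`, `OutputSpec`) are hypothetical, not the actual ExecuTorch registry API; the point is simply one (storage, layout) pair per output instead of a single pair for the whole op. The ATen-level group norm op returns three outputs (the normalized tensor plus per-group mean and rstd), and the statistics tensors need not share the main output's representation:

```python
# Hypothetical sketch: names are illustrative, not the real ExecuTorch
# Vulkan API. Each output of an op gets its own (storage, layout) pair.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List


class VkStorageType(Enum):
    BUFFER = auto()
    TEXTURE_3D = auto()


class VkMemoryLayout(Enum):
    WIDTH_PACKED = auto()
    CHANNELS_PACKED = auto()


@dataclass
class OutputSpec:
    storage: VkStorageType
    layout: VkMemoryLayout


# Group norm returns (out, mean, rstd); the normalized output can stay a
# channels-packed texture while the per-group statistics live in buffers.
GROUP_NORM_OUTPUT_SPECS: List[OutputSpec] = [
    OutputSpec(VkStorageType.TEXTURE_3D, VkMemoryLayout.CHANNELS_PACKED),
    OutputSpec(VkStorageType.BUFFER, VkMemoryLayout.WIDTH_PACKED),
    OutputSpec(VkStorageType.BUFFER, VkMemoryLayout.WIDTH_PACKED),
]
```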

Motivation

Required for the group norm operator.

Future Work

Currently, the `tag_memory_meta_pass` graph pass assumes that all tensors participating in a computation (aside from weights) share the same storage type and memory layout. As more operators are added, exceptions to this rule are becoming more common.

The pass will likely need an update in the near future to make it possible to specify required storage types and memory layouts at a more granular level, as sketched below.
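
A minimal sketch of the granularity idea, using hypothetical helpers rather than the real pass internals (`tag_memory_meta`, the node dicts, and the override table are all illustrative): today the pass effectively applies one (storage type, memory layout) pair to every non-weight tensor, and a per-tensor override table is one way a finer-grained specification could look:

```python
# Hypothetical sketch of a per-tensor override scheme; tag_memory_meta_pass
# itself is implemented differently, and these helpers are illustrative only.
def tag_memory_meta(nodes, default_storage, default_layout, overrides=None):
    """Attach (storage, layout) metadata to each non-weight node.

    `overrides` maps a node name to a (storage, layout) pair that takes
    precedence over the uniform default; this is the per-tensor
    granularity suggested above.
    """
    overrides = overrides or {}
    for node in nodes:
        if node.get("is_weight"):
            continue  # weights are already exempt from the uniform rule
        node["meta"] = overrides.get(
            node["name"], (default_storage, default_layout)
        )
    return nodes


# Example: group norm's statistics outputs opt out of the texture default.
nodes = [
    {"name": "x"},
    {"name": "weight", "is_weight": True},
    {"name": "out"},
    {"name": "mean"},
    {"name": "rstd"},
]
tag_memory_meta(
    nodes,
    default_storage="texture_3d",
    default_layout="channels_packed",
    overrides={
        "mean": ("buffer", "width_packed"),
        "rstd": ("buffer", "width_packed"),
    },
)
```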

Differential Revision: D77038781

@pytorch-bot

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11828

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit f58665b with merge base 89bdd1d:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA added a commit that referenced this pull request Jun 20, 2025

… operator + register group norm operator

ghstack-source-id: 291701330
Pull Request resolved: #11828
@facebook-github-bot added the CLA Signed label Jun 20, 2025
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D77038781

@github-actions

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track of and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
`@pytorchbot label "release notes: none"`

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

SS-JIA added a commit that referenced this pull request Jun 23, 2025

… operator + register group norm operator

Pull Request resolved: #11828
ghstack-source-id: 292082369

SS-JIA added a commit that referenced this pull request Jun 23, 2025

… operator + register group norm operator

Pull Request resolved: #11828
ghstack-source-id: 292141398

SS-JIA added a commit that referenced this pull request Jun 24, 2025

…r an operator + register group norm operator

Pull Request resolved: #11828
ghstack-source-id: 292374227

SS-JIA added a commit that referenced this pull request Jun 24, 2025

…r an operator + register group norm operator

Pull Request resolved: #11828
ghstack-source-id: 292435339

SS-JIA added a commit that referenced this pull request Jun 25, 2025

…r an operator + register group norm operator

Pull Request resolved: #11828
ghstack-source-id: 292530159

@facebook-github-bot merged commit 8c09745 into gh/SS-JIA/248/base Jun 25, 2025
96 of 98 checks passed
@facebook-github-bot deleted the gh/SS-JIA/248/head branch June 25, 2025 16:37
Labels: CLA Signed, fb-exported