[CPU] Add ops for float8 linear #3052
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3052
✅ No failures as of commit fd3d6b5 with merge base 8e2ca35.
(This comment was automatically generated by Dr. CI and updates every 15 minutes.)
CC @mingfeima This PR copies the op implementation from the previous PR #2505, which was too big to land as a single change. Thanks.
LGTM
Sorry @Xia-Weiwen @jerryzh168 but this is breaking some internal tests, going to revert: https://www.internalfb.com/tasks/?t=239795912 It looks to be complaining about a warning with the switch statement:
Summary
This PR is split out from the previous PR #2505, which was too big to review as a single change.
It adds two ops for float8 linear on CPU: one for weight packing and one for the computation itself.
They will be used by the float8 tensor subclass in the future.
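For context, here is a minimal sketch of how a prepack-plus-compute pair of ops is typically used. The op names, signatures, and per-tensor scale handling below are illustrative assumptions, not the exact API registered by this PR.

```python
# Illustrative sketch only: op names and signatures are assumptions,
# not the exact API added by this PR.
import torch

M, K, N = 4, 64, 128
x = torch.randn(M, K)
w = torch.randn(N, K)

# Quantize activation and weight to float8 (e4m3) with per-tensor scales.
f8_max = torch.finfo(torch.float8_e4m3fn).max
x_scale = x.abs().max() / f8_max
w_scale = w.abs().max() / f8_max
x_fp8 = (x / x_scale).to(torch.float8_e4m3fn)
w_fp8 = (w / w_scale).to(torch.float8_e4m3fn)

# Hypothetical weight-packing op: reorders the fp8 weight into the
# blocked layout expected by the CPU compute kernel.
packed_w = torch.ops.torchao.float8_linear_prepack_cpu(w_fp8, w_scale)

# Hypothetical compute op: fp8 activation x packed fp8 weight -> fp32 output.
bias = None
y = torch.ops.torchao.float8_linear_cpu(x_fp8, x_scale, packed_w, w_scale, bias)
print(y.shape)  # expected: torch.Size([4, 128])
```

Packing the weight once up front lets the compute op assume a kernel-friendly layout on every call, which is the usual reason for splitting the functionality into two ops.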
Test plan