
Conversation

andrewor14 (Contributor) commented Oct 12, 2025

Summary: Support a few extra ops that are called during the GRPO loop in unsloth/vLLM for Float8Tensor.

Test Plan:

```
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py -k test_fp8_matmul_lora_variants
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py -k test_to_dtype_layout
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py -k test_has_compatible_shallow_copy_type
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py -k test_transpose
```
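For context on what supporting an extra op means here: Float8Tensor is a tensor subclass, so each op it encounters needs an override routed through torch dispatch. Below is a toy sketch of that mechanism for one of the ops in the test plan (`aten.t`, i.e. transpose); the class and field names are hypothetical and far simpler than torchao's actual Float8Tensor.

```
import torch

aten = torch.ops.aten


class ToyFloat8Tensor(torch.Tensor):
    """Toy stand-in for a float8 tensor subclass: an fp8 payload plus a
    per-tensor scale. Illustrative only, not torchao's Float8Tensor."""

    @staticmethod
    def __new__(cls, qdata, scale, orig_dtype):
        # wrapper subclass that advertises the original high-precision dtype
        return torch.Tensor._make_wrapper_subclass(
            cls, qdata.shape, dtype=orig_dtype, device=qdata.device
        )

    def __init__(self, qdata, scale, orig_dtype):
        self.qdata = qdata  # torch.float8_e4m3fn payload
        self.scale = scale  # per-tensor scale used to dequantize

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is aten.t.default:
            (self,) = args
            # transpose is a view op on the payload; the scale is unchanged
            return cls(self.qdata.t(), self.scale, self.dtype)
        raise NotImplementedError(f"ToyFloat8Tensor: {func} not handled")


# usage: quantize a high-precision tensor, then transpose via the subclass
hp = torch.randn(4, 8)
scale = hp.abs().amax() / torch.finfo(torch.float8_e4m3fn).max
q = ToyFloat8Tensor((hp / scale).to(torch.float8_e4m3fn), scale, hp.dtype)
assert q.t().shape == (8, 4)
```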

pytorch-bot bot commented Oct 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3158

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Unrelated Failure

As of commit 82012af with merge base f856d36:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

andrewor14 marked this pull request as draft October 12, 2025 22:21
meta-cla bot added the "CLA Signed" label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Oct 12, 2025
andrewor14 added the "topic: improvement" label (use this tag if this PR is an improvement that doesn't fit into any of the other categories) Oct 12, 2025
andrewor14 changed the title from "[HACK] Update Float8Tensor for GRPO training in unsloth" to "[draft] Update Float8Tensor for GRPO training in unsloth" Oct 13, 2025
andrewor14 force-pushed the unsloth-fp8-rl-test branch 3 times, most recently from 345bb63 to 9d27057 on October 29, 2025 16:32
andrewor14 changed the title from "[draft] Update Float8Tensor for GRPO training in unsloth" to "Update Float8Tensor for GRPO training in unsloth" Oct 29, 2025
andrewor14 requested a review from jerryzh168 October 29, 2025 20:15
andrewor14 marked this pull request as ready for review October 29, 2025 20:15
vkuzo (Contributor) left a comment:


plz clean up _float8_mm_impl

andrewor14 requested a review from vkuzo October 30, 2025 23:46
Review thread on:

```
input_tensor: Float8Tensor,
weight_tensor: Float8Tensor,
bias: Optional[torch.Tensor] = None,
weight_is_already_transposed: bool = False,
```
Contributor:

instead of this flag, just transpose at the callsite to match the meaning of matmul

andrewor14 (Author):

The reason for this flag is to prevent an unnecessary double transpose when we call linear, since the fbgemm op expects the weight to be in the linear format (already transposed). So if we don't have this flag:

1. linear calls `_float8_mm_impl(input, weight.t())`
2. `_float8_mm_impl` calls `weight.t()` again before calling `torch.ops.fbgemm.f8f8bf16`

If we just transpose the weight for linear, we may end up slowing linear down. Is that OK?
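To make the concern concrete, here is a minimal sketch of the two call patterns; `_fp8_gemm` is a hypothetical stand-in for `torch.ops.fbgemm.f8f8bf16`, and plain float tensors stand in for the quantized ones.

```
import torch


def _fp8_gemm(x, w_nk):
    # stand-in for torch.ops.fbgemm.f8f8bf16: expects the weight in
    # linear layout (N, K), i.e. "already transposed" relative to mm
    return torch.mm(x, w_nk.t())


def _float8_mm_impl(x, w, weight_is_already_transposed=False):
    if not weight_is_already_transposed:
        w = w.t()  # bring the weight into the (N, K) layout the kernel wants
    return _fp8_gemm(x, w)


x = torch.randn(2, 8)  # activations, (M, K)
w = torch.randn(4, 8)  # linear weight, stored as (N, K)

# with the flag: linear passes the stored (N, K) weight straight through
out_flag = _float8_mm_impl(x, w, weight_is_already_transposed=True)

# without the flag: linear transposes at the callsite to match mm
# semantics, then the impl transposes right back -- two cancelling ops
out_noflag = _float8_mm_impl(x, w.t())
assert torch.allclose(out_flag, out_noflag)
```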

Review thread on:

```
)


def _float8_matmul_impl(
```
Contributor:

how about:

1. define `_float8_mm_impl` as the float8 version of `torch.mm`, i.e. the lowest-level shared code that maybe quantizes the input and then chooses a gemm
2. all other functions (`matmul`, `linear`, etc.) call `_float8_mm_impl`

it's a bit confusing to have two different paths for linear and matmul

andrewor14 (Author):

This was done to avoid a double transpose in the linear path (which doesn't happen today, see this comment). I agree that ideally everything should go through `_float8_mm_impl`, but doing so may add overhead for the linear path. Should I go ahead and merge the implementations anyway?

andrewor14 (Author):

Ok, I just ran some benchmarks on the double-transpose change, and it didn't introduce much overhead. I refactored the code the way you suggested, please have another look.
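For reference, the shape of the refactor being discussed might look like the sketch below; names and bodies are illustrative (the real code quantizes and dispatches to an fp8 gemm kernel rather than calling `torch.mm` directly).

```
import torch
from typing import Optional


def _float8_mm_impl(x, w_km):
    # lowest-level shared path, the float8 analogue of torch.mm:
    # computes x @ w_km with w_km in mm layout (K, N). The real
    # implementation would maybe-quantize here and choose a gemm.
    return torch.mm(x, w_km)


def _float8_matmul_impl(x, w_km):
    # matmul already matches mm semantics for 2D operands
    return _float8_mm_impl(x, w_km)


def _float8_linear_impl(x, w_nk, bias: Optional[torch.Tensor] = None):
    # linear(x, W) == mm(x, W.t()): transpose at the callsite so mm
    # semantics stay clean; per the benchmark above, the extra
    # (view-only) transpose costs little
    out = _float8_mm_impl(x, w_nk.t())
    return out if bias is None else out + bias
```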

andrewor14 force-pushed the unsloth-fp8-rl-test branch 4 times, most recently from 1619676 to a323bbe on October 31, 2025 21:28
andrewor14 requested a review from vkuzo October 31, 2025 22:06