This repository was archived by the owner on Aug 7, 2024. It is now read-only.

Commit e6bb1eb

vkuzo authored and facebook-github-bot committed
fix README.md description of swap_linear_with_float8_linear (#319)
Summary: Pull Request resolved: #319

Brings the README up to date with the PR that deleted `Float8DynamicLinear`.

Reviewed By: wanchaol

Differential Revision: D59874121

fbshipit-source-id: e2af494e2b34889b580bedc341caaead345028f1
1 parent: 38c02fe


README.md

Lines changed: 2 additions & 4 deletions
```diff
@@ -39,13 +39,12 @@ from float8_experimental.float8_linear_utils import (
     swap_linear_with_float8_linear,
 )
 from float8_experimental.fsdp_utils import precompute_float8_dynamic_scale_for_fsdp
-from float8_experimental.float8_linear import Float8Linear
 
 # create model
 m = Model(...)
 
 # convert all `torch.nn.Linear` modules to `Float8Linear`
-swap_linear_with_float8_linear(m, Float8Linear)
+swap_linear_with_float8_linear(m)
 
 # optional: use FSDP
 model = FSDP(model, use_orig_params=True)
@@ -76,7 +75,7 @@ from float8_experimental.float8_linear_utils import (
     swap_linear_with_float8_linear,
     sync_float8_amax_and_scale_history,
 )
-from float8_experimental.float8_linear import Float8Linear, TensorScalingType
+from float8_experimental.float8_linear import TensorScalingType
 
 # create model
 m = Model(...)
@@ -85,7 +84,6 @@ m = Model(...)
 # type
 swap_linear_with_float8_linear(
     m,
-    Float8Linear,
     scaling_type_x=TensorScalingType.DELAYED,
     scaling_type_w=TensorScalingType.DELAYED,
     scaling_type_dL_dY=TensorScalingType.DELAYED,
```
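
For quick reference, here is a minimal sketch of the resulting API, assembled from the diff above. The `swap_linear_with_float8_linear` and `TensorScalingType` usages are exactly those shown in the updated README; the toy `nn.Sequential` model and the placement of `sync_float8_amax_and_scale_history` are illustrative assumptions, not part of this commit.

```python
import torch.nn as nn

from float8_experimental.float8_linear import TensorScalingType
from float8_experimental.float8_linear_utils import (
    swap_linear_with_float8_linear,
    sync_float8_amax_and_scale_history,
)

# stand-in model (assumption): any nn.Module containing nn.Linear layers
m = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

# dynamic scaling: the module class argument is gone, so the call
# now takes only the model (module swapping happens in place)
swap_linear_with_float8_linear(m)

# delayed scaling: select per-tensor scaling via keyword arguments
m_delayed = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
swap_linear_with_float8_linear(
    m_delayed,
    scaling_type_x=TensorScalingType.DELAYED,
    scaling_type_w=TensorScalingType.DELAYED,
    scaling_type_dL_dY=TensorScalingType.DELAYED,
)

# with delayed scaling, the amax/scale history must be synced during
# training; once per step is an assumed placement in this sketch
sync_float8_amax_and_scale_history(m_delayed)
```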
