ORTDiffusionPipelines with IO Binding #2056
base: main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
if self.use_io_binding is False and provider == "CUDAExecutionProvider":
    self.use_io_binding = True
This overrides the user's use_io_binding choice. What if the user wants to run a performance test with IO binding disabled?
I suggest:
- if use_io_binding is None, change it to True;
- if use_io_binding is False and the provider is CUDA, log a warning (sketch below).
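A minimal sketch of that suggestion, assuming a standalone helper rather than the actual attribute handling in the PR (names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)


def resolve_use_io_binding(use_io_binding, provider):
    """Sketch: only default to IO binding, never override an explicit user choice."""
    if use_io_binding is None:
        # No explicit preference: enable IO binding when running on CUDA.
        return provider == "CUDAExecutionProvider"
    if use_io_binding is False and provider == "CUDAExecutionProvider":
        # Respect the user's choice, but flag the likely performance cost.
        logger.warning(
            "use_io_binding=False with CUDAExecutionProvider may reduce performance."
        )
    return use_io_binding
```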
This is already the default behavior in ORTModels. I kept it for consistency (I'm not a fan of it, tbh) so as not to break things for existing users.
def providers(self) -> Tuple[str]:
    return self._validate_same_attribute_value_across_components("providers")

@property
def provider(self) -> str:
    return self._validate_same_attribute_value_across_components("provider")

@property
def providers_options(self) -> Dict[str, Dict[str, Any]]:
    return self._validate_same_attribute_value_across_components("providers_options")

@property
def provider_options(self) -> Dict[str, Any]:
    return self._validate_same_attribute_value_across_components("provider_options")
It is not necessary to validate the same value across components.
I think it is feasible to use a different provider and different provider options per component. For example, we could run the text_encoder on the CPU provider and the unet on the CUDA provider, or enable CUDA graph in one component but not the other through provider options (see the sketch below).
Maybe add some comments and loosen the constraint later.
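For illustration only, per-component providers are already expressible with plain onnxruntime sessions; the model paths and options below are made up, not the pipeline's actual layout:

```python
import onnxruntime as ort

# Hypothetical split: text encoder on CPU, UNet on CUDA with CUDA graph
# capture enabled only for the UNet.
text_encoder = ort.InferenceSession(
    "text_encoder/model.onnx",
    providers=["CPUExecutionProvider"],
)
unet = ort.InferenceSession(
    "unet/model.onnx",
    providers=[("CUDAExecutionProvider", {"enable_cuda_graph": "1"})],
)
```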
There's a comment in the _validate_same_attribute_value_across_components definition explaining the reasoning behind these checks, which is exactly what you said. Pipeline-level attributes can be accessed, but they only make sense when they're consistent across components. For now this is my proposition for multi-model-part pipelines; an alternative would be to return the value of the main component (unet/transformer), or to not support these attributes at all on the main pipeline and replace them with a provider_map, for example (like device vs device_map).
    return resolved_output_shapes

def _prepare_io_binding(self, model_inputs: torch.Tensor) -> Tuple[ort.IOBinding, Dict[str, torch.Tensor]]:
The data type of model_inputs is Dict[str, torch.Tensor], not torch.Tensor.
        shape=tuple(self._output_buffers[output_name].size()),
    )

return io_binding, model_inputs, self._output_buffers
model_inputs is not used by the caller, so there is no need to return it here.
io_binding.bind_input(
    name=input_name,
    device_type=self.device.type,
    device_id=self.device.index if self.device.index is not None else -1,
I suggest asserting that self.device.index is not None; ORT does not handle device id -1. A sketch of the guard is below.
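A minimal sketch of that guard as a standalone helper (the helper name and the element_type parameter are illustrative; only onnxruntime's bind_input signature is assumed):

```python
import numpy as np
import torch


def bind_input_tensor(io_binding, name: str, tensor: torch.Tensor, element_type=np.float32) -> None:
    """Sketch: require an explicit device index on CUDA instead of falling
    back to -1, which ORT does not handle."""
    device = tensor.device
    if device.type == "cuda":
        assert device.index is not None, "CUDA tensors must carry an explicit device index"
    io_binding.bind_input(
        name=name,
        device_type=device.type,
        device_id=device.index if device.index is not None else 0,
        element_type=element_type,
        shape=tuple(tensor.size()),
        buffer_ptr=tensor.data_ptr(),
    )
```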
    return self

def _get_output_shapes(self, **model_inputs: torch.Tensor) -> Dict[str, int]:
This function is very slow.
An example improvement (might be a little hacky) can be found in tianleiwu@dde8a73; a generic caching sketch is shown below.
The performance impact for image size 512x512 and 50 steps on H100_80GB_HBM3:
- 588 ms without IO Binding.
- 649 ms with IO Binding and the current implementation of _get_output_shapes.
- 572 ms with IO Binding and the updated output shape logic.
BTW, the return data type for a shape is Tuple[int, ...], not int.
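The referenced commit is the real fix; purely as an illustration of the idea, one could memoize the resolved shapes per input-shape signature so the slow resolution only runs when input shapes change (helper names here are hypothetical):

```python
from typing import Callable, Dict, Tuple

import torch

# Illustrative only: cache resolved output shapes keyed by the input shapes,
# so the symbolic resolution runs once per shape configuration instead of
# once per denoising step.
_shape_cache: Dict[Tuple, Dict[str, Tuple[int, ...]]] = {}


def cached_output_shapes(
    resolve: Callable[..., Dict[str, Tuple[int, ...]]],  # the existing (slow) resolver
    **model_inputs: torch.Tensor,
) -> Dict[str, Tuple[int, ...]]:
    key = tuple(sorted((name, tuple(t.shape)) for name, t in model_inputs.items()))
    if key not in _shape_cache:
        _shape_cache[key] = resolve(**model_inputs)
    return _shape_cache[key]
```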
    name=input_name,
    device_type=self.device.type,
    device_id=self.device.index if self.device.index is not None else -1,
    element_type=TypeHelper.ort_type_to_numpy_type(self.input_dtypes[input_name]),
For onnxruntime 1.20 or later, I recommend using the ONNX element type instead of the numpy type here, because numpy does not support bfloat16 or float8, while ONNX types do.
The mapping from ORT type to ONNX type looks like:
{
    "tensor(float)": onnx.TensorProto.FLOAT,
    "tensor(float16)": onnx.TensorProto.FLOAT16,
    ...
}
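A sketch of such a mapping helper; the claim that newer onnxruntime accepts ONNX element types for binding is the reviewer's, while the constants themselves are standard onnx ones:

```python
import onnx

# Map ORT type strings (as returned by session.get_inputs()/get_outputs())
# to ONNX TensorProto element types, which also cover bfloat16 and float8.
ORT_TO_ONNX_ELEMENT_TYPE = {
    "tensor(float)": onnx.TensorProto.FLOAT,
    "tensor(float16)": onnx.TensorProto.FLOAT16,
    "tensor(bfloat16)": onnx.TensorProto.BFLOAT16,
    "tensor(double)": onnx.TensorProto.DOUBLE,
    "tensor(int32)": onnx.TensorProto.INT32,
    "tensor(int64)": onnx.TensorProto.INT64,
    "tensor(bool)": onnx.TensorProto.BOOL,
    "tensor(float8e4m3fn)": onnx.TensorProto.FLOAT8E4M3FN,
}


def ort_type_to_onnx_type(ort_type: str) -> int:
    return ORT_TO_ONNX_ELEMENT_TYPE[ort_type]
```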
### Description
Update stable diffusion benchmark: (1) allow IO binding for optimum; (2) do not use num_images_per_prompt across all engines, for fair comparison.

Example to run the benchmark of optimum on stable diffusion 1.5:
```
git clone https://github.com/tianleiwu/optimum
cd optimum
git checkout tlwu/diffusers-io-binding
pip install -e .
pip install -U onnxruntime-gpu
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion
git checkout tlwu/benchmark_sd_optimum_io_binding
pip install -r requirements/cuda12/requirements.txt
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 --task text-to-image ./sd_onnx_fp32
python optimize_pipeline.py -i ./sd_onnx_fp32 -o ./sd_onnx_fp16 --float16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16 --use_io_binding
```
Example output on H100_80GB_HBM3: 572 ms with IO Binding; 588 ms without IO Binding. IO binding gains 16 ms, or 2.7%.

### Motivation and Context
Optimum is working on enabling I/O binding: huggingface/optimum#2056. This could help test the impact of I/O binding on the performance of stable diffusion.
What does this PR do?
This is also my attempt to create a generalizable IO binding framework. The idea is to always have
output_shapes = fn(input_shapes, known_shapes)
where known_shapes is mostly stuff we find in the config. We then use this information at runtime with a simple symbolic resolver, keeping the shape inference time minimal, to create the output tensors in torch and thus accelerate inference without needing to go through ORT values / CuPy / NumPy. A minimal sketch of the idea is below.

Before submitting
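A minimal sketch of that idea (the function and dimension names are illustrative, not the PR's actual helpers): symbolic dimensions in the exported output shapes are substituted with concrete values gathered from the inputs and the config.

```python
from typing import Dict, Tuple, Union


def resolve_output_shapes(
    symbolic_output_shapes: Dict[str, Tuple[Union[int, str], ...]],
    known_shapes: Dict[str, int],
) -> Dict[str, Tuple[int, ...]]:
    """Replace symbolic dimensions (e.g. "batch_size", "height") with concrete
    values so output buffers can be preallocated in torch for IO binding."""
    resolved = {}
    for name, shape in symbolic_output_shapes.items():
        resolved[name] = tuple(known_shapes[dim] if isinstance(dim, str) else dim for dim in shape)
    return resolved


# Example: a UNet "out_sample" output for a 512x512 SD 1.5 run (latent 64x64).
print(
    resolve_output_shapes(
        {"out_sample": ("batch_size", 4, "height", "width")},
        {"batch_size": 2, "height": 64, "width": 64},
    )
)  # {'out_sample': (2, 4, 64, 64)}
```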
Who can review?