| Stable Diffusion XL IPEX Pipeline | Accelerate Stable Diffusion XL inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch)|[Stable Diffusion XL on IPEX](#stable-diffusion-xl-on-ipex)| - |[Dan Li](https://github.com/ustcuna/)|
| Stable Diffusion BoxDiff Pipeline | Training-free controlled generation with bounding boxes using [BoxDiff](https://github.com/showlab/BoxDiff)|[Stable Diffusion BoxDiff Pipeline](#stable-diffusion-boxdiff)| - |[Jingyang Zhang](https://github.com/zjysteven/)|
| FRESCO V2V Pipeline | Implementation of [[CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation](https://arxiv.org/abs/2403.12962)|[FRESCO V2V Pipeline](#fresco)| - |[Yifan Zhou](https://github.com/SingleZombie)|
| AnimateDiff IPEX Pipeline | Accelerate AnimateDiff inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch)|[AnimateDiff on IPEX](#animatediff-on-ipex)| - |[Dan Li](https://github.com/ustcuna/)|
To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline`, naming one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
### AnimateDiff on IPEX

This diffusion pipeline aims to accelerate AnimateDiff inference on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
**Note:** Each PyTorch release has a corresponding IPEX release, and the two versions must match. It is recommended to install PyTorch/IPEX 2.3 to get the best performance.
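Because IPEX releases track PyTorch's `major.minor` version, a small sanity check can catch a mismatch before it surfaces as an import error at runtime. This is only an illustrative sketch; `versions_match` and the version strings below are hypothetical, not part of IPEX:

```python
def versions_match(torch_version: str, ipex_version: str) -> bool:
    """IPEX releases pair with the PyTorch release sharing the same
    major.minor version (e.g. torch 2.3.x with IPEX 2.3.y)."""
    major_minor = lambda v: v.split("+")[0].split(".")[:2]
    return major_minor(torch_version) == major_minor(ipex_version)

print(versions_match("2.3.0", "2.3.100+cpu"))  # → True
print(versions_match("2.2.1", "2.3.0"))        # → False
```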
2. After pipeline initialization, call `prepare_for_ipex()` to enable IPEX acceleration. The supported inference datatypes are Float32 and BFloat16.
The following code compares the performance of the original AnimateDiff pipeline with the IPEX-optimized pipeline.
By using this optimized pipeline, we can get about a 1.5-2.2x performance boost with BFloat16 on fifth-generation Intel Xeon CPUs, code-named Emerald Rapids.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, EulerDiscreteScheduler
from safetensors.torch import load_file
from pipeline_animatediff_ipex import AnimateDiffPipelineIpex
```
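As a generic illustration of how such a timing comparison can be made, here is a minimal pure-Python sketch. The `measure` and `speedup` helpers and the stand-in workloads are hypothetical scaffolding, not part of the pipeline code:

```python
import time

def measure(fn, warmup=1, runs=3):
    """Average wall-clock seconds per call of fn, after warmup calls."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

def speedup(baseline_fn, optimized_fn):
    """Baseline time divided by optimized time; > 1 means optimized is faster."""
    return measure(baseline_fn) / measure(optimized_fn)

# Stand-in workloads in place of the original and IPEX-optimized pipelines.
baseline = lambda: sum(i * i for i in range(300_000))
optimized = lambda: sum(i * i for i in range(100_000))
print(f"speedup: {speedup(baseline, optimized):.2f}x")
```

In a real benchmark, the two callables would be full pipeline invocations with identical prompts, seeds, and step counts, so that only the backend differs.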