Stable Diffusion WebUI Forge is a platform built on top of the original Stable Diffusion WebUI by AUTOMATIC1111, designed to make development easier, optimize resource management, speed up inference, and study experimental features.
> The name "Forge" is inspired by "Minecraft Forge". This project aims to become the Forge of Stable Diffusion WebUI.
>
> - lllyasviel (paraphrased)

"Classic" mainly serves as an archive of the "previous" version of Forge, which was built on Gradio 3.41.2 before the major changes (see the original announcement) were introduced. Additionally, this fork focuses exclusively on SD1 and SDXL checkpoints and implements various optimizations, with the main goal of being the lightest WebUI without any bloatware.
(Unscientific) Comparisons
| | Forge Classic | Forge previous | Forge main | reForge main |
|---|---|---|---|---|
| Size <sup>1</sup> | 4.3 MB | 6.8 MB | 18.5 MB <sup>2</sup> | 7.8 MB |
| Startup <sup>3</sup> | 4.5s | 9.5s <sup>4</sup> | 5.2s | 5.7s |

1: using the Download ZIP button on GitHub
2: the large size is from `backend/huggingface`
3: using only the `--xformers` flag, with all extra extensions disabled; does not include `import torch` time
4: the long time is from requirement conflicts
Most base features of the original Automatic1111 Webui should still function
- Support `uv` package manager
  - requires `uv`
  - see Commandline
- Support SageAttention
  - requires RTX 30+
  - ~5% speed up; only supports SDXL
  - see Commandline
- Support fast `cublas` operation (`CublasLinear`)
  - requires manually installing the `cublas_ops` package
  - ~25% speed up
- Support fast `fp8` operation (`torch._scaled_mm`)
  - requires RTX 40+
  - ~10% speed up; reduces quality
Note
`cublas_ops` requires `fp16` precision, and is therefore not compatible with the `fp8` settings
- Support `v-pred` SDXL checkpoints (e.g. NoobAI)
- Implement `RescaleCFG`
  - reduce burnt colors; mainly for `v-pred` checkpoints
- Implement `diskcache`
  - (backported from Automatic1111 WebUI upstream)
- Implement `skip_early_cond`
  - (backported from Automatic1111 WebUI upstream)
- Update `spandrel`
  - support most modern Upscaler architecture
- Add `pillow-heif` package
  - support `.avif` and `.heif` formats
- Add an option to disable Refiner
- Add an option to disable ExtraNetworks Tree View
- Support Union / ProMax ControlNet
  - I just made them always show up in the dropdown
- Removed features:
  - SD2
  - Alt-Diffusion
  - Instruct-Pix2Pix
  - Hypernetworks
  - SVD
  - Z123
  - CLIP Interrogator
  - Deepbooru Interrogator
  - Textual Inversion Training
  - Checkpoint Merging
  - Most built-in Extensions
  - Some built-in Scripts
  - The `test` scripts
  - `Photopea` and `openpose_editor` (ControlNet)
- Fix Memory Leak when switching Checkpoints
- Fix `pydantic` Errors
- Check for Extension Updates in Parallel
- Clean up the `ldm_patched` (i.e. `comfy`) folder
- Remove unused `cmd_args`
- Remove unused `shared_options`
- Remove unused `args_parser`
- Remove a large amount of legacy code
- Remove duplicated upscaler code
  - put every upscaler inside the `ESRGAN` folder
- Improve code logic
- Improve hash caching
- Improve error logs
  - no longer prints `TypeError: 'NoneType' object is not iterable`
- Moved the `embeddings` folder into the `models` folder
- ControlNet Rewrite
  - change Units to `gr.Tab`
  - remove multi-inputs, as they are "misleading"
  - change `visible` toggle to `interactive` toggle; now the UI will no longer jump around
  - improved `Presets` application
- Lint & Format most of the Python and JavaScript code
- Update to latest PyTorch
  - currently `2.6.0+cu126`
- Run `Clip` on CPU by default
- Update recommended Python to `3.11.9`
- `use_checkpoint: False`
- many more... ™️
Commandline
These flags can be added after the `set COMMANDLINE_ARGS=` line in `webui-user.bat` (separate each flag with a space):
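For reference, a minimal `webui-user.bat` might look something like this (the variable layout mirrors the stock template shipped with the WebUI; the chosen flags are only an example):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=

rem example flags only; pick the ones that match your setup
set COMMANDLINE_ARGS=--xformers --port 7860

call webui.bat
```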
- `--no-download-sd-model`: Do not download a default checkpoint
  - can be removed after you download some checkpoints of your choice
- `--xformers`: Install the `xformers` package to speed up generation
- `--port`: Specify a server port to use
  - defaults to `7860`
- `--api`: Enable API access
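As a rough sketch of what `--api` enables: once the server is running, the upstream A1111-style REST endpoints (such as `/sdapi/v1/txt2img`) should become reachable. The payload fields below are illustrative only; adjust the port if you changed `--port`:

```bat
rem assumes the WebUI was started with --api on the default port 7860
curl -X POST "http://127.0.0.1:7860/sdapi/v1/txt2img" ^
  -H "Content-Type: application/json" ^
  -d "{\"prompt\": \"a photo of a cat\", \"steps\": 20}"
```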
- Once you have successfully launched the WebUI, you can add the following flags to bypass some validation steps in order to improve the startup time:
  - `--skip-prepare-environment`
  - `--skip-install`
  - `--skip-python-version-check`
  - `--skip-torch-cuda-test`
  - `--skip-version-check`
Important
Remove them if you are installing an Extension, as those also block Extension from installing requirements
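For example, after a successful first launch the line could look like this (drop the skip flags again before installing an Extension, as noted above):

```bat
rem only use the skip flags after the WebUI has launched successfully at least once
set COMMANDLINE_ARGS=--xformers --skip-prepare-environment --skip-install --skip-python-version-check --skip-torch-cuda-test --skip-version-check
```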
- For RTX 30 and above, you can add the following flags to slightly increase performance; on rare occasions they may cause `OutOfMemory` errors or even crash the WebUI, and in certain configurations they may even lower the speed instead:
  - `--cuda-malloc`
  - `--cuda-stream`
  - `--pin-shared-memory`
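For illustration, combined with `--xformers` this could look like the line below; remove the three extra flags if you run into errors or slowdowns:

```bat
rem RTX 30+ only; revert if you see OutOfMemory errors, crashes, or slower generation
set COMMANDLINE_ARGS=--xformers --cuda-malloc --cuda-stream --pin-shared-memory
```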
- `--uv`: Replace the `python -m pip` calls with `uv pip` to massively speed up package installation
  - requires uv to be installed first (see Installation)
- `--sage`: Install the `sageattention` package to speed up generation
  - requires RTX 30+
  - requires manually installing triton
  - only affects SDXL
Tip
--xformers
is still recommended even if you already have --sage
, as sageattention
does not speed up VAE while xformers
does
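For example, a setup that uses all three acceleration options (assuming uv and triton are already installed) might use:

```bat
rem --sage only affects SDXL; --xformers still covers the VAE
set COMMANDLINE_ARGS=--uv --xformers --sage
```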
- `--model-ref`: Points to a central `models` folder that contains all your models
  - said folder should contain subfolders like `Stable-diffusion`, `Lora`, `VAE`, `ESRGAN`, etc.
Important
This simply replaces the models
folder, rather than adding on top of it
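A sketch of how this might look, assuming your models live under a hypothetical `D:\AI\models` folder (the flag follows the usual `--flag value` convention of the launcher):

```bat
rem D:\AI\models should contain subfolders such as Stable-diffusion, Lora, VAE, ESRGAN, ...
set COMMANDLINE_ARGS=--xformers --model-ref "D:\AI\models"
```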
Installation
- Install git
- Install Python
  - (Recommended) Install uv
  - (Manual) Install Python 3.11.9
- Clone the Repo: `git clone https://github.com/Haoming02/sd-webui-forge-classic`
- Prepare uv (if you installed it)
  - Set up the venv: run `cd sd-webui-forge-classic` followed by `uv venv venv --python 3.11`
  - Add the `--uv` flag (see Commandline)
- Launch the WebUI via `webui-user.bat`
  - On first launch, it will automatically install all the requirements
  - Once installation is finished, the WebUI will start in a browser automatically
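Putting the steps together, a first install on Windows might look roughly like this (assuming git and uv are already installed and on your PATH):

```bat
rem clone the repository and enter it
git clone https://github.com/Haoming02/sd-webui-forge-classic
cd sd-webui-forge-classic

rem create the venv with uv (it can fetch Python 3.11 if it is not installed)
uv venv venv --python 3.11

rem add --uv (and any other flags) to COMMANDLINE_ARGS in webui-user.bat, then launch
webui-user.bat
```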
GitHub Related
- Issues about removed features will simply be ignored; Issues regarding installation will also be ignored if they are obviously user error
- Feature Requests not related to performance or optimization will simply be ignored
- For cutting edge features, use reForge instead
Special thanks to AUTOMATIC1111, lllyasviel, comfyanonymous, and kijai, along with the rest of the contributors, for their invaluable efforts in the open-source image generation community