
Stable Diffusion WebUI Forge - Classic

[Screenshot of the WebUI]

Stable Diffusion WebUI Forge is a platform on top of the original Stable Diffusion WebUI by AUTOMATIC1111, to make development easier, optimize resource management, speed up inference, and study experimental features.
The name "Forge" is inspired by "Minecraft Forge". This project aims to become the Forge of Stable Diffusion WebUI.

- lllyasviel
(paraphrased)


"Classic" mainly serves as an archive for the "previous" version of Forge, which was built on Gradio 3.41.2 before the major changes (see the original announcement) were introduced. Additionally, this fork is focused exclusively on SD1 and SDXL checkpoints, having various optimizations implemented, with the main goal of being the lightest WebUI without any bloatwares.

Installation

(Unscientific) Comparisons
              Forge Classic    Forge previous    Forge main    reForge main
  Size [1]    4.3 MB           6.8 MB            18.5 MB [2]   7.8 MB
  Startup [3] 4.5s             9.5s [4]          5.2s          5.7s

[1]: using the Download ZIP button on GitHub
[2]: the large size is from backend/huggingface
[3]: using only the --xformers flag, with all extra extensions disabled; does not include the import torch time
[4]: the long time is from requirement conflicts


Features [Mar. 11]

Most base features of the original Automatic1111 Webui should still function

New Features

  • Support uv package manager
  • Support SageAttention
    • requires RTX 30 +
    • ~5% speed up; only supports SDXL
    • see Commandline
  • Support fast cublas operation (CublasLinear)
    • requires manually installing cublas_ops package
    • ~25% speed up
  • Support fast fp8 operation (torch._scaled_mm)
    • requires RTX 40 +
    • ~10% speed up; reduce quality

Note

cublas_ops requires fp16 precision and is therefore not compatible with the fp8 settings

  • Support v-pred SDXL checkpoints (e.g. NoobAI)
  • Implement RescaleCFG
    • reduce burnt colors; mainly for v-pred
  • Implement diskcache
    • (backported from Automatic1111 Webui upstream)
  • Implement skip_early_cond
    • (backported from Automatic1111 Webui upstream)
  • Update spandrel
    • support most modern Upscaler architectures
  • Add pillow-heif package
    • support .avif and .heif formats
  • Add an option to disable Refiner
  • Add an option to disable ExtraNetworks Tree View
  • Support Union / ProMax ControlNet
    • I just made them always show up in the dropdown

Removed Features

  • SD2
  • Alt-Diffusion
  • Instruct-Pix2Pix
  • Hypernetworks
  • SVD
  • Z123
  • CLIP Interrogator
  • Deepbooru Interrogator
  • Textual Inversion Training
  • Checkpoint Merging
  • Most built-in Extensions
  • Some built-in Scripts
  • The test scripts
  • Photopea and openpose_editor (ControlNet)

Optimizations

  • Fix Memory Leak when switching Checkpoints
  • Fix pydantic Errors
  • Check for Extension Updates in Parallel
  • Clean up the ldm_patched (i.e. comfy) folder
  • Remove unused cmd_args
  • Remove unused shared_options
  • Remove unused args_parser
  • Remove a large amount of legacy code
  • Remove duplicated upscaler codes
    • put every upscaler inside the ESRGAN folder
  • Improve code logic
  • Improve hash caching
  • Improve error logs
    • no longer prints TypeError: 'NoneType' object is not iterable
  • Move the embeddings folder into the models folder
  • ControlNet Rewrite
    • change Units to gr.Tab
    • remove multi-inputs, as they are "misleading"
    • change visible toggle to interactive toggle; now the UI will no longer jump around
    • improved Presets application
  • Lint & Format most of the Python and JavaScript codes
  • Update to latest PyTorch
    • currently 2.6.0+cu126
  • Run CLIP on CPU by default
  • Update recommended Python to 3.11.9
  • use_checkpoint: False
  • many more... ™️

Commandline

These flags can be added after the set COMMANDLINE_ARGS= line in webui-user.bat (separate each flag with a space)
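
For example, assuming the stock webui-user.bat layout from Automatic1111 (your file may differ slightly), enabling a few of the flags described below would look roughly like this; the exact flag combination is only for illustration:

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=--xformers --api --port 7860

    call webui.bat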

A1111 built-in

  • --no-download-sd-model: Do not download a default checkpoint
    • can be removed after you download some checkpoints of your choice
  • --xformers: Install the xformers package to speed up generation
  • --port: Specify a server port to use
    • defaults to 7860
  • --api: Enable API access

  • Once you have successfully launched the WebUI, you can add the following flags to bypass some validation steps and improve the startup time
    • --skip-prepare-environment
    • --skip-install
    • --skip-python-version-check
    • --skip-torch-cuda-test
    • --skip-version-check

Important

Remove these flags when installing an Extension, as they also prevent Extensions from installing their requirements
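
As a sketch, a startup-optimized line for an already fully installed setup might look like the following; remember to remove the skip flags again before installing Extensions, per the note above:

    set COMMANDLINE_ARGS=--xformers --skip-prepare-environment --skip-install --skip-python-version-check --skip-torch-cuda-test --skip-version-check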

by Forge

  • For RTX 30 and above, you can add the following flags to slightly increase performance; in rare cases they may cause OutOfMemory errors or even crash the WebUI, and in certain configurations they may actually lower the speed instead
    • --cuda-malloc
    • --cuda-stream
    • --pin-shared-memory

by Classic

  • --uv: Replace the python -m pip calls with uv pip to massively speed up package installation
  • --sage: Install the sageattention package to speed up generation
    • requires RTX 30 +
    • requires manually installing triton
    • only affects SDXL

Tip

--xformers is still recommended even if you already have --sage, as sageattention does not speed up VAE while xformers does
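
Putting the Classic flags together, a plausible line for an RTX 30+ card with triton already installed would be the following (only an example; adjust to your hardware):

    set COMMANDLINE_ARGS=--uv --xformers --sage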

  • --model-ref: Points to a central models folder that contains all your models
    • said folder should contain subfolders like Stable-diffusion, Lora, VAE, ESRGAN, etc.

Important

This simply replaces the models folder, rather than adding on top of it
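
As an illustration, assuming a central folder at D:\sd-models (a hypothetical path), the expected layout and the matching flag would be:

    D:\sd-models\
    ├── Stable-diffusion\
    ├── Lora\
    ├── VAE\
    └── ESRGAN\

    set COMMANDLINE_ARGS=--model-ref "D:\sd-models"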


Installation

  1. Install git
  2. Install Python
  3. Clone the Repo
    git clone https://github.com/Haoming02/sd-webui-forge-classic
  4. Prepare uv (if you have it installed)
    1. Set up venv
      cd sd-webui-forge-classic
      uv venv venv --python 3.11
    2. Add the --uv flag (see Commandline)
  5. Launch the Webui via webui-user.bat
  6. On first launch, it will automatically install all the requirements
  7. Once installation is finished, the Webui will start in a browser automatically
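
For reference, the whole setup with uv can be condensed into the following commands, assuming git, Python, and uv are already on PATH (the --uv flag itself is added by editing webui-user.bat as described in Commandline):

    git clone https://github.com/Haoming02/sd-webui-forge-classic
    cd sd-webui-forge-classic
    uv venv venv --python 3.11
    rem edit webui-user.bat:  set COMMANDLINE_ARGS=--uv --xformers
    webui-user.bat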

GitHub Related
  • Issues about removed features will simply be ignored; Issues regarding installation will also be ignored if they are obviously user error
  • Feature Requests not related to performance or optimization will simply be ignored
    • For cutting edge features, use reForge instead

Special thanks to AUTOMATIC1111, lllyasviel, comfyanonymous, and kijai,
along with the rest of the contributors,
for their invaluable efforts in the open-source image generation community
