Pinned repositories

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 65k stars · 11.8k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.4k stars · 312 forks

  3. recipes (Public)

    Common recipes to run vLLM

    Jupyter Notebook · 270 stars · 99 forks

  4. speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 146 stars · 20 forks
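
A minimal sketch of offline batch inference with the vLLM Python API (item 1 above). The model name facebook/opt-125m is only an illustrative placeholder; substitute any model vLLM supports, and adjust the sampling parameters to taste.

    # Offline batch inference with vLLM; the model name is a placeholder.
    from vllm import LLM, SamplingParams

    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # Loads the model weights and allocates memory for the KV cache.
    llm = LLM(model="facebook/opt-125m")

    # Generates completions for all prompts in a single batched call.
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)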
