
xlite-dev

Develop ML/AI toolkits and ML/AI/CUDA Learning resources.

Pinned Loading

  1. lite.ai.toolkit (Public)

    🛠 A lite C++ AI toolkit: 100+🎉 models (Stable-Diffusion, FaceFusion, YOLO series, Det, Seg, Matting) with MNN, ORT and TensorRT.

    C++ · 4.1k stars · 743 forks

  2. LeetCUDA (Public)

    📚LeetCUDA: modern CUDA learning notes with PyTorch for beginners🐑; 200+ CUDA/Tensor Cores kernels, HGEMM, FA-2 MMA, etc.🔥

    Cuda · 4.7k stars · 491 forks

  3. Awesome-LLM-Inference (Public)

    📚A curated list of Awesome LLM/VLM Inference Papers with codes: WINT8/4, FlashAttention, PagedAttention, Parallelism, MLA, etc.

    Python · 4.1k stars · 282 forks

  4. Awesome-DiT-Inference (Public)

    📚A curated list of Awesome Diffusion Inference Papers with codes: Sampling, Caching, Multi-GPUs, etc. 🎉🎉

    255 stars · 15 forks

  5. ffpa-attn (Public)

    📚FFPA (Split-D): extends FlashAttention with Split-D for large head dims, O(1) GPU SRAM complexity, 1.8x~3x↑🎉 faster than SDPA EA (see the SDPA baseline sketch after this list).

    Cuda · 184 stars · 8 forks

  6. lihang-notes (Public)

    📚"Statistical Learning Methods" by Li Hang: Notes from Theory to Implementation. A very detailed set of study notes: a 200-page PDF with step-by-step hand-derived formula explanations and R implementations. 🎉

    Shell · 463 stars · 57 forks
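
The ffpa-attn entry above reports speedups over PyTorch's SDPA (efficient-attention backend). For orientation, here is a minimal sketch of invoking that stock SDPA baseline in plain PyTorch; this is not code from ffpa-attn, and the shapes, dtype, and device handling are illustrative assumptions.

```python
# Minimal sketch: the stock PyTorch SDPA baseline that FlashAttention-style
# kernels such as ffpa-attn (per its description) are benchmarked against.
# Not code from ffpa-attn; shapes and dtype are illustrative assumptions.
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.half if device == "cuda" else torch.float32

batch, heads, seq_len, head_dim = 1, 8, 4096, 256  # a "large headdim" setting
q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)

# PyTorch selects an available backend (flash / memory-efficient / math) for
# these shapes; a call like this, with the efficient-attention backend chosen,
# is the kind of SDPA EA baseline the speedup claim refers to.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 4096, 256])
```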

Repositories

Showing 10 of 24 repositories
  • Awesome-LLM-Inference (Public)

    📚A curated list of Awesome LLM/VLM Inference Papers with codes: WINT8/4, FlashAttention, PagedAttention, Parallelism, MLA, etc.

    Python · 4,087 stars · GPL-3.0 license · 282 forks · 1 issue · 1 pull request · Updated Jun 5, 2025
  • Awesome-DiT-Inference (Public)

    📚A curated list of Awesome Diffusion Inference Papers with codes: Sampling, Caching, Multi-GPUs, etc. 🎉🎉

    255 stars · GPL-3.0 license · 15 forks · 0 issues · 0 pull requests · Updated Jun 3, 2025
  • LeetCUDA (Public)

    📚LeetCUDA: modern CUDA learning notes with PyTorch for beginners🐑; 200+ CUDA/Tensor Cores kernels, HGEMM, FA-2 MMA, etc.🔥

    Cuda · 4,650 stars · GPL-3.0 license · 491 forks · 3 issues · 0 pull requests · Updated Jun 3, 2025
  • lite.ai.toolkit (Public)

    🛠 A lite C++ AI toolkit: 100+🎉 models (Stable-Diffusion, FaceFusion, YOLO series, Det, Seg, Matting) with MNN, ORT and TensorRT.

    C++ · 4,118 stars · GPL-3.0 license · 743 forks · 0 issues · 0 pull requests · Updated May 30, 2025
  • SpargeAttn (Public, forked from thu-ml/SpargeAttn)

    SpargeAttention: a training-free sparse attention method that can accelerate inference for any model.

    Cuda · 6 stars · Apache-2.0 license · 40 forks · 0 issues · 0 pull requests · Updated May 24, 2025
  • SageAttention (Public, forked from thu-ml/SageAttention)

    Quantized attention that achieves speedups of 2.1-3.1x and 2.7-5.1x over FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models.

    Cuda · 0 stars · Apache-2.0 license · 113 forks · 0 issues · 0 pull requests · Updated May 24, 2025
  • lihang-notes (Public)

    📚"Statistical Learning Methods" by Li Hang: Notes from Theory to Implementation. A very detailed set of study notes: a 200-page PDF with step-by-step hand-derived formula explanations and R implementations. 🎉

    Shell · 463 stars · GPL-3.0 license · 57 forks · 2 issues · 0 pull requests · Updated May 17, 2025
  • .github (Public)
    1 star · 0 forks · 0 issues · 0 pull requests · Updated May 17, 2025
  • HGEMM (Public)

    ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak⚡️ performance (see the correctness-check sketch after this list).

    Cuda · 79 stars · GPL-3.0 license · 4 forks · 0 issues · 0 pull requests · Updated May 10, 2025
  • ffpa-attn (Public)

    📚FFPA (Split-D): extends FlashAttention with Split-D for large head dims, O(1) GPU SRAM complexity, 1.8x~3x↑🎉 faster than SDPA EA.

    Cuda · 184 stars · GPL-3.0 license · 8 forks · 3 issues · 0 pull requests · Updated May 10, 2025
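
The HGEMM entry above builds half-precision GEMM kernels from scratch on Tensor Cores. When developing such kernels, a common step is to check the custom kernel against a well-tested FP16 matmul and an FP32 reference, and to estimate throughput. The sketch below is plain PyTorch, not code from xlite-dev/HGEMM; matrix sizes and the iteration count are illustrative assumptions.

```python
# Minimal sketch of a correctness/throughput harness one might use while
# developing HGEMM kernels. Plain PyTorch, not code from xlite-dev/HGEMM;
# sizes and iteration count are illustrative assumptions.
import torch

M, N, K = 4096, 4096, 4096
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(M, K, device=device, dtype=torch.half)
b = torch.randn(K, N, device=device, dtype=torch.half)

# FP16 matmul (cuBLAS, Tensor Cores on supported GPUs) as the tested baseline.
c_half = a @ b

# FP32 reference to bound the half-precision error of the baseline (and, in
# practice, of a custom kernel's output compared the same way).
c_ref = a.float() @ b.float()
max_abs_err = (c_half.float() - c_ref).abs().max().item()
print(f"max |err| vs FP32 reference: {max_abs_err:.3f}")

# Rough throughput measurement (GPU only): time repeated FP16 matmuls.
if device == "cuda":
    iters = 10
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        a @ b
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / iters
    tflops = 2 * M * N * K / (ms * 1e-3) / 1e12
    print(f"{ms:.2f} ms/iter, ~{tflops:.1f} TFLOPS")
```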