
Conversation

@lzhangzz (Collaborator) commented Dec 19, 2025

  • Fully asynchronous model execution: the execution stream never syncs with the host explicitly
  • Modular design: instead of the engine managing every buffer, each module manages its own buffers and state
  • Batched copies for faster data movement (a rough sketch of the idea follows this list)
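
As a rough illustration of the batched-copy and async points above (this is not the PR's code; the buffer names and layout are invented for the sketch), several small host tensors can be packed into one pinned staging buffer and moved with a single `cudaMemcpyAsync` on a non-blocking stream, so the host never has to block on the hot path:

```cpp
// Rough sketch only (not the PR's code): pack several small host tensors into
// one pinned staging buffer and issue a single cudaMemcpyAsync on a
// non-blocking stream, so the host never calls cudaStreamSynchronize on the
// hot path. Buffer names and layout are invented for illustration.
#include <cuda_runtime.h>

#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    cudaStream_t stream;
    cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking);

    // Three small per-request buffers that would otherwise need three copies.
    std::vector<std::vector<int>> chunks = {{1, 2}, {3, 4, 5}, {6}};

    size_t total = 0;
    for (const auto& c : chunks) {
        total += c.size() * sizeof(int);
    }

    // Pinned staging memory: pack everything once on the host ...
    char* h_staging = nullptr;
    cudaMallocHost(reinterpret_cast<void**>(&h_staging), total);
    size_t offset = 0;
    for (const auto& c : chunks) {
        std::memcpy(h_staging + offset, c.data(), c.size() * sizeof(int));
        offset += c.size() * sizeof(int);
    }

    // ... then move it to the device with one asynchronous, batched copy.
    char* d_buf = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&d_buf), total);
    cudaMemcpyAsync(d_buf, h_staging, total, cudaMemcpyHostToDevice, stream);

    // Kernels enqueued on the same stream see the data without a host sync;
    // the sync below exists only because this toy program is about to exit.
    cudaStreamSynchronize(stream);
    std::printf("moved %zu bytes in one batched transfer\n", total);

    cudaFree(d_buf);
    cudaFreeHost(h_staging);
    cudaStreamDestroy(stream);
    return 0;
}
```

In a real engine the downstream kernels would be ordered against this copy on the same stream (or via events) rather than with the final host sync, which is what the "never syncs with the host explicitly" point refers to.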

Copilot AI (Contributor) left a comment


Pull request overview

This PR performs a major refactoring of the turbomind engine architecture with the following key changes:

  • Replaces LlamaTritonModel with a new TurboMind class providing a cleaner API
  • Removes the old batch processing implementation (LlamaBatch, LlamaV2)
  • Introduces new model abstractions: LanguageModel, InputProcessor, and OutputProcessor to better separate concerns
  • Updates RequestMetrics fields to use atomic operations for thread-safe access (a sketch of the pattern follows this list)
  • Consolidates model-related code into a unified models CMake target
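
To make the RequestMetrics point concrete, here is a minimal sketch of the pattern described above (the struct and field names are invented for illustration, not the actual definitions): counters stored as `std::atomic` can be updated by the generation thread and read by a metrics consumer without a mutex or a data race.

```cpp
// Minimal sketch, not the actual RequestMetrics definition: atomic fields let
// one thread update a metric while another thread reads it concurrently.
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

struct RequestMetricsSketch {  // hypothetical stand-in for RequestMetrics
    std::atomic<uint64_t> enqueue_time_ns{0};
    std::atomic<int>      generated_tokens{0};
};

int main()
{
    RequestMetricsSketch m;

    // Writer side, e.g. the generation loop bumping a counter per token.
    std::thread writer([&] {
        for (int i = 0; i < 1000; ++i) {
            m.generated_tokens.fetch_add(1, std::memory_order_relaxed);
        }
    });

    // Reader side, e.g. a metrics exporter polling the same field.
    std::thread reader([&] {
        int seen = m.generated_tokens.load(std::memory_order_relaxed);
        std::printf("tokens observed so far: %d\n", seen);
    });

    writer.join();
    reader.join();
    std::printf("final token count: %d\n", m.generated_tokens.load());
    return 0;
}
```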

Reviewed changes

Copilot reviewed 102 out of 102 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| src/turbomind/utils/metrics.h | Changed metric fields to atomic types and fixed a typo in a field name |
| src/turbomind/turbomind.h/cc | New TurboMind class interface replacing LlamaTritonModel |
| src/turbomind/triton_backend/llama/* | Removed old Triton backend files |
| src/turbomind/python/bind.cpp | Updated Python bindings to use the new TurboMind class |
| src/turbomind/models/language_model.* | New LanguageModel abstraction for inference |
| src/turbomind/models/input_processor.* | New component for handling input processing |
| src/turbomind/models/output_processor.* | New component for handling output processing |
| src/turbomind/models/llama/unified_decoder.* | Updated to work with the new architecture |
| src/turbomind/models/llama/unified_attention_layer.* | Refactored attention layer implementation |
| src/turbomind/models/llama/llama_utils.cu | Changed isTuning() from thread_local to static |
| src/turbomind/layers/sampling_layers/* | Removed old sampling layer files |
| src/turbomind/kernels/sampling_kernels.h | Changed sampled_indexes/nums types from uint32_t to int |


