[Feature] Integrate LazyLLM framework #161
uestcsh917-art wants to merge 3 commits into lintsinghua:v3.0.0 from
Conversation
Review Summary by Qodo

**Integrate LazyLLM framework for expanded LLM provider support**

Walkthrough / Description
• Integrate the LazyLLM framework to support additional LLM providers, adding unified access to the SiliconFlow, SenseNova, AIPING, and PPIO platforms
• Create LazyLLMAdapter with provider mapping and environment configuration
• Update the factory to prioritize LiteLLM and fall back to LazyLLM for unsupported providers
• Add a comprehensive test suite for the LazyLLM adapter functionality

Diagram

```mermaid
flowchart LR
    Config["LLM Config<br/>with Provider"] -->|Provider Type| Factory["LLM Factory<br/>_instantiate_adapter"]
    Factory -->|Native Only| Native["Native Adapters<br/>Baidu/MiniMax/Doubao"]
    Factory -->|LiteLLM Support| LiteLLM["LiteLLMAdapter<br/>OpenAI/Claude/etc"]
    Factory -->|LazyLLM Support| LazyLLM["LazyLLMAdapter<br/>SiliconFlow/SenseNova/etc"]
    LazyLLM -->|OnlineChatModule| LazyLLMLib["LazyLLM Library<br/>Multiple Providers"]
    Native --> Response["LLMResponse"]
    LiteLLM --> Response
    LazyLLM --> Response
```

File Changes
1. backend/app/core/config.py
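The three-way dispatch shown in the diagram can be sketched as follows. This is a hypothetical illustration of the described routing order (native adapters first, then LiteLLM, then LazyLLM as fallback); the provider sets and the `select_adapter` function are assumptions, not the PR's actual factory code:

```python
# Hypothetical provider sets, based on the walkthrough above.
NATIVE_PROVIDERS = {"baidu", "minimax", "doubao"}
LITELLM_PROVIDERS = {"openai", "claude", "gemini"}
LAZYLLM_PROVIDERS = {"siliconflow", "sensenova", "aiping", "ppio"}

def select_adapter(provider: str) -> str:
    """Route a provider to an adapter family, in priority order."""
    if provider in NATIVE_PROVIDERS:
        return "native"
    if provider in LITELLM_PROVIDERS:
        return "litellm"
    if provider in LAZYLLM_PROVIDERS:
        return "lazyllm"
    raise ValueError(f"Unsupported provider: {provider}")
```

The point of the ordering is that LazyLLM only handles providers that neither the native adapters nor LiteLLM already cover, so existing behavior is unchanged.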
Code Review by Qodo
```python
# Separate the last user message from the history
user_input = messages[-1].content if messages[-1].role == "user" else ""
```
1. user_input empty on non-user 📘 Rule violation ⛯ Reliability

• The adapter silently sets user_input to an empty string when the last message's role is not user, then proceeds to call the provider.
• This missing edge-case guard can produce invalid requests, confusing model behavior, or hard-to-debug failures when message ordering is unexpected.
• Robust handling should either enforce a valid last user message or fail explicitly with actionable context.
Agent Prompt
## Issue description
`LazyLLMAdapter._send_request()` currently sets `user_input` to an empty string when the last message is not a `user` message, then still calls the provider. This is an unhandled edge case that can generate invalid prompts and unpredictable results.
## Issue Context
The adapter expects the last message to be the user input (as described in comments), but the code does not enforce it. This should fail fast with an actionable error or implement a safe fallback strategy.
## Fix Focus Areas
- backend/app/services/llm/adapters/lazyllm_adapter.py[141-159]
```python
def _setup_environment(self):
    """Set the environment variables required by LazyLLM."""
    env_key = f"LAZYLLM_{self._source.upper()}_API_KEY"
    provider_env_key = self.PROVIDER_ENV_MAP.get(self.config.provider)
    candidate_key = os.getenv(provider_env_key) if provider_env_key else None

    if not candidate_key:
        candidate_key = self.config.api_key

    if candidate_key:
        os.environ[env_key] = candidate_key

    # SenseNova requires an additional secret_key
    if self.config.provider == LLMProvider.SENSENOVA:
        headers = self.config.custom_headers or {}
        secret_key = headers.get("secret_key") or os.getenv("SENSENOVA_SECRET_KEY")
        if secret_key:
            os.environ["LAZYLLM_SENSENOVA_SECRET_KEY"] = secret_key
```
2. Env var key leakage 🐞 Bug ⛨ Security

• LazyLLMAdapter writes API keys (and the SenseNova secret key) into the process-wide os.environ, which is shared across concurrent requests and users.
• Because adapters are globally cached by LLMFactory, a key set for one request can be reused by another request (or overwritten mid-flight), causing cross-tenant credential leakage and wrong-account billing.
• Secrets are never cleared, so credentials persist in-process longer than necessary and may affect later requests.
Agent Prompt
### Issue description
LazyLLMAdapter currently configures credentials by writing API keys/secret keys into `os.environ`. This is process-global and can leak credentials across concurrent requests/users and cause requests to run under the wrong key.
### Issue Context
- `LLMService` builds `LLMConfig` from per-user settings.
- `LLMFactory` caches adapters globally.
- LazyLLMAdapter writes credentials into `os.environ` and does not restore/clear them.
### Fix Focus Areas
- backend/app/services/llm/adapters/lazyllm_adapter.py[80-97]
- backend/app/services/llm/adapters/lazyllm_adapter.py[99-116]
- backend/app/services/llm/factory.py[31-47]
- backend/app/services/llm/factory.py[95-98]
### Suggested approach
1. Prefer a non-env mechanism:
- If LazyLLM supports passing API keys/headers directly, use that instead of `os.environ`.
2. If env vars are required:
- Wrap the `module(...)` call with a helper that **saves previous env values**, sets required keys, and **restores them in `finally`**.
- Do this inside the executor thread to localize side effects as much as possible.
- Clear any LazyLLM-related env vars in `close()`.
3. Reduce cross-user reuse:
- Update adapter caching logic for LazyLLMAdapter (or overall) to avoid sharing adapters across different API keys (use a full-key hash, and/or include base_url/custom headers in the cache key).
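One way to implement step 2 is a save-and-restore helper around the env mutation. This is a minimal sketch under the assumption that env vars are unavoidable; the helper name `scoped_env` is hypothetical, and note that because `os.environ` is process-global this only narrows, not eliminates, the race window between concurrent requests:

```python
import os
from contextlib import contextmanager

@contextmanager
def scoped_env(updates):
    """Temporarily set environment variables, restoring previous values
    (or removing newly added keys) on exit, even if the body raises."""
    saved = {key: os.environ.get(key) for key in updates}
    try:
        os.environ.update(updates)
        yield
    finally:
        for key, prev in saved.items():
            if prev is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = prev
```

The adapter could then wrap the `module(...)` call, e.g. `with scoped_env({env_key: candidate_key}): ...`, so credentials do not outlive the request that supplied them.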
User description
Integrate the LazyLLM framework to support more LLM providers, while remaining compatible with most of the existing LLM providers.
PR Type
new feature
Description
Added lazyllm_adapter.py, which provides unified access to more LLM APIs through the LazyLLM framework. The existing doubao_adapter.py and minimax_adapter.py can be removed and routed through lazyllm_adapter.py instead. Also added support for the SiliconFlow, PPIO, and AIPING platforms, along with test_lazyllm_adapter.py.