
[Feature] Integrate LazyLLM framework #161

Open
uestcsh917-art wants to merge 3 commits into lintsinghua:v3.0.0 from uestcsh917-art:feat/integrate-LazyLLM-framework

Conversation

@uestcsh917-art

User description

Integrate the LazyLLM framework to add support for more LLM providers, while remaining compatible with most of the existing LLM providers.

PR Type

new feature

Description

  • Add lazyllm_adapter.py, which provides unified access to more LLM APIs through the LazyLLM framework. The existing doubao_adapter.py and minimax_adapter.py can be removed and replaced by calls through lazyllm_adapter.py. Also adds support for the SiliconFlow, PPIO, and AIPING platforms.
  • Add the corresponding test cases in test_lazyllm_adapter.py.

@vercel

vercel bot commented Feb 10, 2026

Someone is attempting to deploy a commit to tsinghuaiiilove-2257's projects Team on Vercel.

A member of the Team first needs to authorize it.

@qodo-free-for-open-source-projects

Review Summary by Qodo

Integrate LazyLLM framework for expanded LLM provider support

✨ Enhancement


Walkthroughs

Description
• Integrate LazyLLM framework to support additional LLM providers
  - Adds unified access to SiliconFlow, SenseNova, AIPING, and PPIO platforms
• Create LazyLLMAdapter with provider mapping and environment configuration
• Update factory to prioritize LiteLLM, fallback to LazyLLM for unsupported providers
• Add comprehensive test suite for LazyLLM adapter functionality
Diagram
flowchart LR
  Config["LLM Config<br/>with Provider"] -->|Provider Type| Factory["LLM Factory<br/>_instantiate_adapter"]
  Factory -->|Native Only| Native["Native Adapters<br/>Baidu/MiniMax/Doubao"]
  Factory -->|LiteLLM Support| LiteLLM["LiteLLMAdapter<br/>OpenAI/Claude/etc"]
  Factory -->|LazyLLM Support| LazyLLM["LazyLLMAdapter<br/>SiliconFlow/SenseNova/etc"]
  LazyLLM -->|OnlineChatModule| LazyLLMLib["LazyLLM Library<br/>Multiple Providers"]
  Native --> Response["LLMResponse"]
  LiteLLM --> Response
  LazyLLM --> Response


File Changes

1. backend/app/core/config.py ⚙️ Configuration changes +4/-0

Add configuration for new LLM providers

• Add four new API key configuration fields for new LLM providers
• SILICONFLOW_API_KEY, SENSENOVA_API_KEY, AIPING_API_KEY, PPIO_API_KEY

backend/app/core/config.py


2. backend/app/services/llm/types.py ✨ Enhancement +12/-0

Extend LLM provider types and defaults

• Add four new LLMProvider enum values: SILICONFLOW, SENSENOVA, AIPING, PPIO
• Add default models for each new provider in DEFAULT_MODELS dictionary
• Add default base URLs for each new provider in DEFAULT_BASE_URLS dictionary

backend/app/services/llm/types.py


3. backend/app/services/llm/adapters/__init__.py ✨ Enhancement +2/-0

Export LazyLLMAdapter from adapters module

• Import LazyLLMAdapter from lazyllm_adapter module
• Export LazyLLMAdapter in __all__ list
• Update module docstring to document LazyLLM as native adapter option

backend/app/services/llm/adapters/__init__.py


4. backend/app/services/llm/adapters/lazyllm_adapter.py ✨ Enhancement +236/-0

Implement LazyLLM adapter for unified provider access

• Create new LazyLLMAdapter class extending BaseLLMAdapter
• Implement provider-to-source mapping for 12 LLM providers
• Handle environment variable setup and OnlineChatModule initialization
• Implement async complete() method with message history handling
• Support temperature, max_tokens, and top_p parameters
• Include error handling and provider validation

backend/app/services/llm/adapters/lazyllm_adapter.py
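Based on the file summary above, the adapter's shape can be sketched roughly as follows. `LazyLLMAdapterSketch` and its five-entry `PROVIDER_SOURCE_MAP` are illustrative stand-ins (the real adapter maps 12 providers), and the `OnlineChatModule(source=..., model=...)` call reflects LazyLLM's entry point as described here, not a verified signature:

```python
import asyncio
import os

# Hypothetical provider-to-source mapping sketched from the PR description;
# the real mapping in lazyllm_adapter.py covers 12 providers.
PROVIDER_SOURCE_MAP = {
    "siliconflow": "siliconflow",
    "sensenova": "sensenova",
    "aiping": "aiping",
    "ppio": "ppio",
    "qwen": "qwen",
}


class LazyLLMAdapterSketch:
    """Illustrative stand-in for the PR's LazyLLMAdapter."""

    def __init__(self, provider: str, api_key: str, model: str):
        if provider not in PROVIDER_SOURCE_MAP:
            raise ValueError(f"Provider not supported by the LazyLLM adapter: {provider}")
        self._source = PROVIDER_SOURCE_MAP[provider]
        self._api_key = api_key
        self._model = model
        self._module = None

    def _build_module(self):
        # Imported lazily so the sketch can be constructed without lazyllm installed.
        import lazyllm  # assumed dependency: lazyllm>=0.7.3

        # Writing the key into os.environ is exactly what review issue 2 flags.
        os.environ[f"LAZYLLM_{self._source.upper()}_API_KEY"] = self._api_key
        return lazyllm.OnlineChatModule(source=self._source, model=self._model)

    async def complete(self, prompt: str) -> str:
        if self._module is None:
            self._module = self._build_module()
        # The LazyLLM module call is synchronous, so dispatch to a worker thread.
        return await asyncio.to_thread(self._module, prompt)
```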


5. backend/app/services/llm/factory.py ✨ Enhancement +14/-2

Add LazyLLM fallback in adapter factory

• Import LazyLLMAdapter in factory module
• Update _instantiate_adapter() with fallback strategy documentation
• Add LazyLLM support as fallback when LiteLLM doesn't support provider
• Maintain priority: native-only providers > LiteLLM > LazyLLM

backend/app/services/llm/factory.py
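The maintained priority (native-only providers > LiteLLM > LazyLLM) can be illustrated with a toy dispatcher; the provider sets below are illustrative placeholders, not the factory's actual support tables:

```python
# Illustrative support sets; the real factory derives these from its adapters.
NATIVE_ONLY = {"baidu", "minimax", "doubao"}
LITELLM_SUPPORTED = {"openai", "claude", "gemini"}
LAZYLLM_SUPPORTED = {"siliconflow", "sensenova", "aiping", "ppio", "qwen"}


def choose_adapter(provider: str) -> str:
    """Return which adapter class would handle this provider."""
    if provider in NATIVE_ONLY:  # 1) native-only providers keep their adapters
        return "native"
    if provider in LITELLM_SUPPORTED:  # 2) LiteLLM remains the primary route
        return "LiteLLMAdapter"
    if provider in LAZYLLM_SUPPORTED:  # 3) LazyLLM is the new fallback
        return "LazyLLMAdapter"
    raise ValueError(f"Unsupported provider: {provider}")
```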


6. backend/app/services/llm/service.py ✨ Enhancement +13/-0

Update LLM service for new provider support

• Add four new providers to _get_provider_api_key_from_user_config() mapping
• Add four new providers to _get_provider_api_key() mapping
• Add four new providers to _parse_provider() string-to-enum mapping
• Support siliconflowApiKey, sensenovaApiKey, aipingApiKey, ppioApiKey user config keys

backend/app/services/llm/service.py


7. backend/tests/test_lazyllm_adapter.py 🧪 Tests +171/-0

Add LazyLLM adapter test suite

• Create comprehensive test suite with 4 test functions
• Test basic adapter functionality with API key validation
• Test provider support checking for multiple providers
• Test conversation history handling across multiple turns
• Include pytest async support and environment variable handling

backend/tests/test_lazyllm_adapter.py


8. backend/requirements.txt Dependencies +1/-0

Add LazyLLM library dependency

• Add lazyllm>=0.7.3 dependency to LLM Integration section

backend/requirements.txt




@qodo-free-for-open-source-projects

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (3) 📎 Requirement gaps (0)



Action required

1. user_input empty on non-user 📘 Rule violation ⛯ Reliability
Description
• The adapter silently sets user_input to an empty string when the last message role is not
  user, and then proceeds to call the provider.
• This is a missing edge-case guard that can produce invalid requests, confusing model behavior, or
  hard-to-debug failures when message ordering is unexpected.
• Robust handling should either enforce a valid last user message or explicitly error with
  actionable context.
Code

backend/app/services/llm/adapters/lazyllm_adapter.py[R147-149]

+        # Separate the last user message from the conversation history
+        user_input = messages[-1].content if messages[-1].role == "user" else ""
+        
Evidence
PR Compliance ID 3 requires explicit handling of edge cases and meaningful failure behavior. The new
adapter sets user_input to "" if the last message is not a user message, which is an unhandled
edge case that can lead to incorrect downstream calls.

Rule 3: Generic: Robust Error Handling and Edge Case Management
backend/app/services/llm/adapters/lazyllm_adapter.py[147-149]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`LazyLLMAdapter._send_request()` currently sets `user_input` to an empty string when the last message is not a `user` message, then still calls the provider. This is an unhandled edge case that can generate invalid prompts and unpredictable results.

## Issue Context
The adapter expects the last message to be the user input (as described in comments), but the code does not enforce it. This should fail fast with an actionable error or implement a safe fallback strategy.

## Fix Focus Areas
- backend/app/services/llm/adapters/lazyllm_adapter.py[141-159]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
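One possible fail-fast fix along the lines the review asks for; `Msg` is a hypothetical stand-in for the project's `LLMMessage`:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Msg:
    """Hypothetical stand-in for the project's LLMMessage."""
    role: str
    content: str


def split_messages(messages: List[Msg]) -> Tuple[List[Msg], str]:
    """Split conversation history from the final user turn, raising an
    actionable error instead of silently sending an empty user_input."""
    if not messages:
        raise ValueError("LazyLLM adapter received an empty message list")
    if messages[-1].role != "user":
        raise ValueError(
            "LazyLLM adapter expects the last message to have role='user', "
            f"got role='{messages[-1].role}'"
        )
    return messages[:-1], messages[-1].content
```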


2. Env var key leakage 🐞 Bug ⛨ Security
Description
• LazyLLMAdapter writes API keys (and SenseNova secret key) into process-wide os.environ, which is
  shared across concurrent requests and users.
• Because adapters are globally cached by LLMFactory, a key set for one request can be reused by
  another request (or overwritten mid-flight), causing cross-tenant credential leakage and
  wrong-account billing.
• Secrets are never cleared, so credentials persist in-process longer than necessary and may affect
  later requests.
Code

backend/app/services/llm/adapters/lazyllm_adapter.py[R80-97]

+    def _setup_environment(self):
+        """Set up the environment variables required by LazyLLM"""
+        env_key = f"LAZYLLM_{self._source.upper()}_API_KEY"
+        provider_env_key = self.PROVIDER_ENV_MAP.get(self.config.provider)
+        candidate_key = os.getenv(provider_env_key) if provider_env_key else None
+
+        if not candidate_key:
+            candidate_key = self.config.api_key
+
+        if candidate_key:
+            os.environ[env_key] = candidate_key
+
+        # Extra secret_key required for SenseNova
+        if self.config.provider == LLMProvider.SENSENOVA:
+            headers = self.config.custom_headers or {}
+            secret_key = headers.get("secret_key") or os.getenv("SENSENOVA_SECRET_KEY")
+            if secret_key:
+                os.environ["LAZYLLM_SENSENOVA_SECRET_KEY"] = secret_key
Evidence
LazyLLMAdapter mutates global environment variables to configure credentials. LLMService builds
per-user config (including API keys) and always creates adapters via a globally-cached factory; this
combination makes process-wide env var mutation unsafe under concurrency/multi-user usage.

backend/app/services/llm/adapters/lazyllm_adapter.py[80-97]
backend/app/services/llm/factory.py[31-47]
backend/app/services/llm/service.py[27-78]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
LazyLLMAdapter currently configures credentials by writing API keys/secret keys into `os.environ`. This is process-global and can leak credentials across concurrent requests/users and cause requests to run under the wrong key.

### Issue Context
- `LLMService` builds `LLMConfig` from per-user settings.
- `LLMFactory` caches adapters globally.
- LazyLLMAdapter writes credentials into `os.environ` and does not restore/clear them.

### Fix Focus Areas
- backend/app/services/llm/adapters/lazyllm_adapter.py[80-97]
- backend/app/services/llm/adapters/lazyllm_adapter.py[99-116]
- backend/app/services/llm/factory.py[31-47]
- backend/app/services/llm/factory.py[95-98]

### Suggested approach
1. Prefer a non-env mechanism:
  - If LazyLLM supports passing API keys/headers directly, use that instead of `os.environ`.
2. If env vars are required:
  - Wrap the `module(...)` call with a helper that **saves previous env values**, sets required keys, and **restores them in `finally`**.
  - Do this inside the executor thread to localize side effects as much as possible.
  - Clear any LazyLLM-related env vars in `close()`.
3. Reduce cross-user reuse:
  - Update adapter caching logic for LazyLLMAdapter (or overall) to avoid sharing adapters across different API keys (use a full-key hash, and/or include base_url/custom headers in the cache key).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
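The save-and-restore wrapper from step 2 of the suggested approach could look like this sketch (not the project's code); previous values are restored in `finally` so `LAZYLLM_*` keys never outlive the call that set them:

```python
import os
from contextlib import contextmanager


@contextmanager
def scoped_env(updates):
    """Temporarily set environment variables, restoring prior values in finally.

    A sketch of one mitigation for the leakage above: wrap each provider call
    so credentials written to os.environ are scoped to that call. Note this
    alone does not make concurrent requests safe; a lock or per-request
    module construction would still be needed.
    """
    saved = {key: os.environ.get(key) for key in updates}
    try:
        os.environ.update(updates)
        yield
    finally:
        for key, previous in saved.items():
            if previous is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = previous
```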



Remediation recommended

3. LLMError leaks str(e) 📘 Rule violation ⛨ Security
Description
• The adapter raises LLMError messages that directly embed the raw exception text via str(e).
• If these errors propagate to user-facing surfaces, they can reveal internal implementation details
  or upstream provider errors, violating secure error handling expectations.
• Detailed exception info should be logged securely, while user-facing error text remains generic.
Code

backend/app/services/llm/adapters/lazyllm_adapter.py[R118-123]

+            except Exception as e:
+                raise LLMError(
+                    f"LazyLLM module initialization failed: {str(e)}",
+                    self.config.provider,
+                    original_error=e
+                )
Evidence
PR Compliance ID 4 forbids exposing internal system details through user-facing errors. The new
adapter constructs an LLMError message that includes str(e), and the base error handler uses
str(error) when building the raised LLMError, increasing the chance that raw internal details
are surfaced.

Rule 4: Generic: Secure Error Handling
backend/app/services/llm/adapters/lazyllm_adapter.py[118-123]
backend/app/services/llm/base_adapter.py[68-108]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`LazyLLMAdapter` currently embeds `str(e)` into raised `LLMError` messages. This risks leaking internal/provider details if the exception message is returned to clients.

## Issue Context
Secure error handling requires generic user-facing messages while preserving diagnostic information in internal logs. The code already passes `original_error=e`, which can be retained for debugging without exposing the raw message.

## Fix Focus Areas
- backend/app/services/llm/adapters/lazyllm_adapter.py[118-123]
- backend/app/services/llm/base_adapter.py[60-108]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
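A minimal sketch of the separation the rule asks for: full detail goes to internal logs, the raised message stays generic, and `original_error` keeps the cause for debugging. `LLMErrorSketch` and `init_module` are hypothetical stand-ins for the project's `LLMError` and initialization path:

```python
import logging

logger = logging.getLogger("llm.adapters")


class LLMErrorSketch(Exception):
    """Hypothetical stand-in for the project's LLMError."""

    def __init__(self, message, provider, original_error=None):
        super().__init__(message)
        self.provider = provider
        self.original_error = original_error


def init_module(provider, build):
    try:
        return build()
    except Exception as exc:
        # Full exception detail (including str(exc)) stays in internal logs ...
        logger.exception("LazyLLM module initialization failed for %s", provider)
        # ... while the raised, potentially user-facing message is generic.
        raise LLMErrorSketch(
            "LazyLLM module initialization failed",
            provider,
            original_error=exc,
        )
```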


4. Tests print LLM responses 📘 Rule violation ⛨ Security
Description
• The new tests use print() to output raw model responses, which is unstructured and can leak
  sensitive content into CI logs.
• The test prompts include personal data (e.g., a person's name), increasing the risk of PII
  exposure in logs.
• Tests should avoid printing raw responses (or must redact) and prefer structured, controlled
  logging only when necessary.
Code

backend/tests/test_lazyllm_adapter.py[60]

+    print(f"Response content: {response.content}")
Evidence
PR Compliance ID 5 requires secure logging practices and prohibits sensitive data in logs. The new
tests print raw LLM responses and include PII-like prompt content (a person's name), which can end
up in CI/test logs.

Rule 5: Generic: Secure Logging Practices
backend/tests/test_lazyllm_adapter.py[60-60]
backend/tests/test_lazyllm_adapter.py[133-136]
backend/tests/test_lazyllm_adapter.py[143-143]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`backend/tests/test_lazyllm_adapter.py` prints raw LLM responses (and uses prompts containing a person's name). This can leak sensitive content/PII into CI logs and is not structured for auditing.

## Issue Context
Even in tests, stdout is commonly captured by CI systems and can be retained. Avoid emitting raw model content unless it is explicitly sanitized/redacted.

## Fix Focus Areas
- backend/tests/test_lazyllm_adapter.py[60-60]
- backend/tests/test_lazyllm_adapter.py[92-92]
- backend/tests/test_lazyllm_adapter.py[133-146]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
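One way to satisfy the rule is to log a length/hash digest instead of the raw text; `response_digest` and `log_response` are hypothetical helpers, not part of the PR:

```python
import hashlib
import logging

logger = logging.getLogger("tests.llm")


def response_digest(content: str) -> str:
    """Summarize a model response as length + truncated hash so neither PII
    nor raw model text reaches CI logs."""
    sha = hashlib.sha256(content.encode("utf-8")).hexdigest()[:12]
    return f"len={len(content)} sha256={sha}"


def log_response(content: str) -> None:
    # Structured, redacted logging instead of print(response.content).
    logger.debug("LLM response received: %s", response_digest(content))
```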


5. Flaky live adapter tests 🐞 Bug ⛯ Reliability
Description
• The new tests perform real external LLM calls (network + paid API) and are only gated by presence
  of QWEN_API_KEY, which can make CI/dev runs slow and flaky if the variable is set.
• Assertions like "张三" in response.content depend on non-deterministic model output and may fail
  even when the adapter is functioning.
• The test file also modifies sys.path manually, which can hide import/packaging issues and behave
  differently across runners.
Code

backend/tests/test_lazyllm_adapter.py[R114-146]

+@pytest.mark.asyncio
+async def test_lazyllm_adapter_with_history():
+    """Test a request that includes conversation history"""
+    
+    api_key = os.getenv("QWEN_API_KEY")
+    if not api_key:
+        pytest.skip("QWEN_API_KEY not set, skipping test")
+    
+    config = LLMConfig(
+        provider=LLMProvider.QWEN,
+        api_key=api_key,
+        model="qwen-plus",
+    )
+    
+    adapter = LazyLLMAdapter(config)
+    
+    # Request that includes prior conversation turns
+    request = LLMRequest(
+        messages=[
+            LLMMessage(role="user", content="我的名字是张三"),
+            LLMMessage(role="assistant", content="你好张三,很高兴认识你!"),
+            LLMMessage(role="user", content="我叫什么名字?")
+        ]
+    )
+    
+    response = await adapter.complete(request)
+    
+    # The model should be able to remember the user's name
+    assert "张三" in response.content
+    print(f"History conversation test response: {response.content}")
+    
+    await adapter.close()
+
Evidence
The tests call adapter.complete() with real credentials and assert specific natural-language
content. This is inherently non-deterministic and will intermittently fail depending on
provider/model behavior, rate limiting, or transient network issues.

backend/tests/test_lazyllm_adapter.py[8-14]
backend/tests/test_lazyllm_adapter.py[118-143]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Current LazyLLMAdapter tests are live/integration tests that may become flaky/slow when API keys are present, and they assert on non-deterministic LLM content.

### Issue Context
- Tests call real providers via LazyLLM.
- Output can vary; provider calls can rate-limit or fail transiently.

### Fix Focus Areas
- backend/tests/test_lazyllm_adapter.py[8-14]
- backend/tests/test_lazyllm_adapter.py[17-64]
- backend/tests/test_lazyllm_adapter.py[114-146]

### Suggested approach
1. Make integration tests opt-in:
  - Gate with an env var like `RUN_LIVE_LLM_TESTS=1` (skip otherwise), and/or mark with `@pytest.mark.integration`.
2. Add unit tests:
  - Mock `lazyllm.OnlineChatModule` to return deterministic outputs.
  - Verify message conversion (history/input/system handling) and error paths without network.
3. Stabilize assertions:
  - Avoid checking exact substrings from a model unless using recorded fixtures (VCR-style) or a stubbed response.
4. Remove `sys.path` hacks if possible:
  - Prefer standard packaging/import configuration via pytest settings.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
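Step 2 of the suggested approach (mocking the module for deterministic unit tests) can be sketched like this; `complete_via` is a simplified stand-in for the adapter's dispatch logic, and `MagicMock` replaces the real `lazyllm.OnlineChatModule` instance:

```python
import asyncio
from unittest.mock import MagicMock


async def complete_via(module, prompt: str) -> str:
    """Simplified stand-in for how the adapter dispatches to the synchronous
    LazyLLM module from async code."""
    return await asyncio.to_thread(module, prompt)


def test_complete_with_stubbed_module():
    # A stub in place of lazyllm.OnlineChatModule(...): deterministic output,
    # no network, no API key, no flaky natural-language assertions.
    stub = MagicMock(return_value="deterministic reply")
    result = asyncio.run(complete_via(stub, "hello"))
    assert result == "deterministic reply"
    stub.assert_called_once_with("hello")
```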




