
AIMessageChunk init_tool_calls failed, when the chunk args is None #30563

Closed
5 tasks done
run-zhi opened this issue Mar 31, 2025 · 1 comment
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: core Related to langchain-core investigate Flagged for investigation.

Comments


run-zhi commented Mar 31, 2025

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

I'm deploying the qwq-32B model locally with vllm. When running the langgraph_supervisor example and streaming output with stream(stream_mode="messages"), the tool call fails. After tracing the code, I found that the tool_call arguments returned by the LLM are None. However, the init_tool_calls function of the AIMessageChunk class in langchain_core/messages/ai.py has no check for args being None, which leads to an error when parse_partial_json is called.

Error Message and Stack Trace (if applicable)

My test example code:

for event in app.stream({
    "messages": [
        {
            "role": "user",
            "content": "what's the combined headcount of the FAANG companies in 2024?"
        }
    ]
},
stream_mode="messages"):
    print(event)

code run output:

....
(AIMessageChunk(content='\n\n', additional_kwargs={}, response_metadata={}, id='run-b94df66e-76d8-4281-9ba8-679e7b5a0868'), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('start:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda|agent:df014e53-85d6-9661-899c-df412d8e6ace', 'checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda', 'ls_provider': 'openai', 'ls_model_name': 'qwq-32b', 'ls_model_type': 'chat', 'ls_temperature': 0.0})
(AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'chatcmpl-tool-aa8886dc40bc4cf4941818e667e102e6', 'function': {'arguments': None, 'name': 'transfer_to_research_expert'}, 'type': 'function'}]}, response_metadata={}, id='run-b94df66e-76d8-4281-9ba8-679e7b5a0868', invalid_tool_calls=[{'name': 'transfer_to_research_expert', 'args': None, 'id': 'chatcmpl-tool-aa8886dc40bc4cf4941818e667e102e6', 'error': None, 'type': 'invalid_tool_call'}], tool_call_chunks=[{'name': 'transfer_to_research_expert', 'args': None, 'id': 'chatcmpl-tool-aa8886dc40bc4cf4941818e667e102e6', 'index': 0, 'type': 'tool_call_chunk'}]), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('start:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda|agent:df014e53-85d6-9661-899c-df412d8e6ace', 'checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda', 'ls_provider': 'openai', 'ls_model_name': 'qwq-32b', 'ls_model_type': 'chat', 'ls_temperature': 0.0})
(AIMessageChunk(content='', additional_kwargs={}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'qwq-32b'}, id='run-b94df66e-76d8-4281-9ba8-679e7b5a0868'), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('start:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda|agent:df014e53-85d6-9661-899c-df412d8e6ace', 'checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda', 'ls_provider': 'openai', 'ls_model_name': 'qwq-32b', 'ls_model_type': 'chat', 'ls_temperature': 0.0})

We can see that the function arguments in the output above are None.
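The validator's behavior on such a chunk can be sketched as follows (a simplified stand-in, not the real implementation: json.loads replaces LangChain's parse_partial_json, and the dict mimics one tool_call_chunk from the output above):

```python
import json

def classify_tool_call_chunk(chunk):
    """Simplified stand-in for AIMessageChunk.init_tool_calls:
    parse the chunk's args the way the real validator does, and on
    any exception file the chunk as an invalid_tool_call."""
    try:
        # The real code only special-cases the empty string, so a None
        # args falls through to the JSON parser and raises TypeError.
        args = json.loads(chunk["args"]) if chunk["args"] != "" else {}
        return "tool_call"
    except Exception:
        return "invalid_tool_call"

chunk = {
    "name": "transfer_to_research_expert",
    "args": None,  # as returned by vllm in the output above
    "id": "chatcmpl-tool-aa8886dc40bc4cf4941818e667e102e6",
}
print(classify_tool_call_chunk(chunk))  # invalid_tool_call
```

This matches the streamed output above: the chunk lands in invalid_tool_calls instead of tool_calls.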

The code in langchain_core/messages/ai.py:

class AIMessageChunk(AIMessage, BaseMessageChunk):
...
    @model_validator(mode="after")
    def init_tool_calls(self) -> Self:
        ...
        for chunk in self.tool_call_chunks:
            try:
                args_ = parse_partial_json(chunk["args"]) if chunk["args"] != "" else {}  # type: ignore[arg-type]
            ...
            except Exception:
                add_chunk_to_invalid_tool_calls(chunk)

When chunk["args"] is None, parse_partial_json raises an exception, so the chunk ends up in invalid_tool_calls, the tool call is never executed, and the call flow is interrupted.
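A minimal guard (a sketch only, not the actual LangChain patch) is to treat None the same way the empty string is already treated; json.loads stands in for parse_partial_json here:

```python
import json

def parse_args_defensively(raw_args):
    """Guarded version of the args handling in init_tool_calls:
    a None or empty args chunk yields an empty dict instead of being
    handed to the JSON parser."""
    if not raw_args:  # covers both None and ""
        return {}
    return json.loads(raw_args)

print(parse_args_defensively(None))                            # {}
print(parse_args_defensively(""))                              # {}
print(parse_args_defensively('{"query": "FAANG headcount"}'))  # {'query': 'FAANG headcount'}
```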

Description

The tool_call arguments returned by the LLM can be None, but init_tool_calls in the AIMessageChunk class (langchain_core/messages/ai.py) only special-cases the empty string, so parse_partial_json is called with None and raises.

System Info

dependencies = [
    "beautifulsoup4>=4.13.3",
    "langchain-community>=0.3.19",
    "langchain-weaviate>=0.0.4",
    "langchain[openai]>=0.3.20",
    "langgraph>=0.3.15",
    "langgraph-supervisor>=0.0.14",
    "pypdf>=5.4.0",
    "weaviate-client>=4.11.1",
    "xinference-client>=1.3.1.post1",
]

@langcarl langcarl bot added the investigate Flagged for investigation. label Mar 31, 2025
@dosubot dosubot bot added Ɑ: core Related to langchain-core 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Mar 31, 2025
@run-zhi run-zhi closed this as completed Apr 1, 2025
@npuichigo

@run-zhi how did you fix that?
