AIMessageChunk init_tool_calls failed when the chunk args is None #30563

Closed
@run-zhi

Description

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

My test example code:

for event in app.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "what's the combined headcount of the FAANG companies in 2024?"
            }
        ]
    },
    stream_mode="messages",
):
    print(event)

Error Message and Stack Trace (if applicable)

Code run output:

....
(AIMessageChunk(content='\n\n', additional_kwargs={}, response_metadata={}, id='run-b94df66e-76d8-4281-9ba8-679e7b5a0868'), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('start:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda|agent:df014e53-85d6-9661-899c-df412d8e6ace', 'checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda', 'ls_provider': 'openai', 'ls_model_name': 'qwq-32b', 'ls_model_type': 'chat', 'ls_temperature': 0.0})
(AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'chatcmpl-tool-aa8886dc40bc4cf4941818e667e102e6', 'function': {'arguments': None, 'name': 'transfer_to_research_expert'}, 'type': 'function'}]}, response_metadata={}, id='run-b94df66e-76d8-4281-9ba8-679e7b5a0868', invalid_tool_calls=[{'name': 'transfer_to_research_expert', 'args': None, 'id': 'chatcmpl-tool-aa8886dc40bc4cf4941818e667e102e6', 'error': None, 'type': 'invalid_tool_call'}], tool_call_chunks=[{'name': 'transfer_to_research_expert', 'args': None, 'id': 'chatcmpl-tool-aa8886dc40bc4cf4941818e667e102e6', 'index': 0, 'type': 'tool_call_chunk'}]), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('start:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda|agent:df014e53-85d6-9661-899c-df412d8e6ace', 'checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda', 'ls_provider': 'openai', 'ls_model_name': 'qwq-32b', 'ls_model_type': 'chat', 'ls_temperature': 0.0})
(AIMessageChunk(content='', additional_kwargs={}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'qwq-32b'}, id='run-b94df66e-76d8-4281-9ba8-679e7b5a0868'), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('start:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda|agent:df014e53-85d6-9661-899c-df412d8e6ace', 'checkpoint_ns': 'supervisor:6925c4dd-d95f-42c5-ed48-27f85a5b9fda', 'ls_provider': 'openai', 'ls_model_name': 'qwq-32b', 'ls_model_type': 'chat', 'ls_temperature': 0.0})

We can see that the function arguments in the output above are None.
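
For what it's worth, the failure reproduces without vllm or langgraph by constructing the chunk directly (a minimal sketch; the tool name is copied from the output above, the id is a placeholder). On an affected langchain-core version the tool call lands in invalid_tool_calls instead of tool_calls:

from langchain_core.messages import AIMessageChunk

# Simulate a streamed tool-call chunk whose arguments are None,
# as returned by the vllm-served model above.
chunk = AIMessageChunk(
    content="",
    tool_call_chunks=[
        {
            "name": "transfer_to_research_expert",
            "args": None,  # the problematic value
            "id": "chatcmpl-tool-placeholder",
            "index": 0,
            "type": "tool_call_chunk",
        }
    ],
)

# init_tool_calls runs on construction; parse_partial_json(None) raises,
# so the tool call is classified as invalid.
print(chunk.tool_calls)          # []
print(chunk.invalid_tool_calls)  # [{'name': 'transfer_to_research_expert', 'args': None, ...}]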

The code in langchain_core/messages/ai.py:

class AIMessageChunk(AIMessage, BaseMessageChunk):
    ...
    @model_validator(mode="after")
    def init_tool_calls(self) -> Self:
        ...
        for chunk in self.tool_call_chunks:
            try:
                args_ = parse_partial_json(chunk["args"]) if chunk["args"] != "" else {}  # type: ignore[arg-type]
                ...
            except Exception:
                add_chunk_to_invalid_tool_calls(chunk)

When chunk["args"] is None, it is not equal to "", so parse_partial_json(None) is called; that raises an exception, and the call flow is interrupted.
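
One possible guard (a sketch only, not necessarily how upstream will fix it; safe_parse_args is a hypothetical helper named here for illustration) is to treat None the same as the empty string, so parse_partial_json never receives None:

from langchain_core.utils.json import parse_partial_json

def safe_parse_args(raw: str | None) -> dict:
    # Treat None like "": both mean "no arguments streamed yet".
    return parse_partial_json(raw) if raw else {}

assert safe_parse_args(None) == {}
assert safe_parse_args("") == {}
assert safe_parse_args('{"a": 1') == {"a": 1}

Applied inside init_tool_calls, the condition would become if chunk["args"] instead of if chunk["args"] != "", which covers both cases.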

Description

I'm locally deploying the QwQ-32B model using vllm. When running the langgraph_supervisor example, I encountered a problem: when I stream output with stream(stream_mode="messages"), the tool call fails. After tracing the code, I found that the tool_call arguments returned by the LLM are None. However, the init_tool_calls validator of the AIMessageChunk class in langchain_core/messages/ai.py has no check for the case where args is None, which leads to an error when parse_partial_json is called.

System Info

dependencies = [
"beautifulsoup4>=4.13.3",
"langchain-community>=0.3.19",
"langchain-weaviate>=0.0.4",
"langchain[openai]>=0.3.20",
"langgraph>=0.3.15",
"langgraph-supervisor>=0.0.14",
"pypdf>=5.4.0",
"weaviate-client>=4.11.1",
"xinference-client>=1.3.1.post1",
]


    Labels

    investigate, Ɑ: core, 🤖:bug
