ChatVertexAI inherits from BaseChatModel, which, per the documentation, accepts the standard `max_tokens` parameter.
However, when a ChatVertexAI instance is initialized with this parameter, the value is not exposed on the instance as `max_tokens`; accessing that attribute raises an `AttributeError`. The value is instead stored under `max_output_tokens`.
It should be exposed on the instance, since the other chat model integrations (ChatOpenAI, ChatAnthropic, ...) all make this attribute available.
```python
from langchain_google_vertexai import ChatVertexAI

model = ChatVertexAI(
    model="gemini-1.5-pro-002",
    temperature=0.0,
    max_tokens=500,
    top_p=1.0,
)
print(model.max_output_tokens)  # 500
print(model.max_tokens)  # AttributeError: 'ChatVertexAI' object has no attribute 'max_tokens'
```
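The behavior is consistent with a Pydantic field alias. As a minimal, self-contained sketch (the class below is a stand-in, not the actual ChatVertexAI source), a field declared as `max_output_tokens` with `alias="max_tokens"` accepts `max_tokens=` at construction time but never creates a `max_tokens` attribute:

```python
from typing import Optional

from pydantic import BaseModel, ConfigDict, Field

class FakeVertexModel(BaseModel):
    """Stand-in illustrating the suspected alias setup (assumption)."""
    model_config = ConfigDict(populate_by_name=True)
    # Accepted as "max_tokens" at init, stored as "max_output_tokens".
    max_output_tokens: Optional[int] = Field(default=None, alias="max_tokens")

m = FakeVertexModel(max_tokens=500)
print(m.max_output_tokens)  # 500
print(hasattr(m, "max_tokens"))  # False -- accessing it raises AttributeError
```

If this is indeed the mechanism, exposing `max_tokens` on the instance would require an explicit property (or setter) on top of the alias, which is what this issue is asking for.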
Is there a reason the newly added max_tokens property does not have a setter?
There are cases where you want to change the max_tokens attribute between calls, for example when you are only interested in a "Yes/No" answer without a longer explanation. This comes up in NeMo Guardrails, and VertexAI is one of the few LangChain providers that supports it: NVIDIA/NeMo-Guardrails#993
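Until a setter exists, one possible workaround for the per-call use case is to create an updated copy of the model rather than mutating it. Sketched here on a stand-in Pydantic model (constructing a real ChatVertexAI needs GCP credentials, so this only illustrates the pattern):

```python
from typing import Optional

from pydantic import BaseModel, ConfigDict, Field

class FakeVertexModel(BaseModel):
    """Stand-in for ChatVertexAI's suspected field setup (assumption)."""
    model_config = ConfigDict(populate_by_name=True)
    max_output_tokens: Optional[int] = Field(default=None, alias="max_tokens")

base = FakeVertexModel(max_tokens=500)

# model_copy(update=...) returns a new instance with the field overridden,
# leaving the original untouched -- handy for a short "Yes/No" call.
short = base.model_copy(update={"max_output_tokens": 5})
print(base.max_output_tokens, short.max_output_tokens)  # 500 5
```

A copy-per-call pattern avoids mutating shared model state, though a proper setter on the instance would still be the cleaner fix.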
LangChain package versions:
langchain-core==0.3.15
langchain-google-vertexai==2.0.7