
ChatVertexAI has no property max_tokens. #596

Closed
daniel-deychakiwsky opened this issue Nov 9, 2024 · 2 comments · Fixed by #714
Comments

@daniel-deychakiwsky

ChatVertexAI inherits from BaseChatModel, which, according to the documentation, accepts the standard max_tokens parameter.

When a ChatVertexAI instance is initialized with this standard parameter, the value is not exposed on the instance under that name, and accessing it raises an error. It is instead available under max_output_tokens.

It should be exposed on the instance, since the other chat model integrations (ChatOpenAI, ChatAnthropic, ...) all make this property available.

model = ChatVertexAI(
    model="gemini-1.5-pro-002",
    temperature=0.0,
    max_tokens=500,
    top_p=1.0,
)

print(model.max_output_tokens)  # 500
print(model.max_tokens)  # AttributeError: 'ChatVertexAI' object has no attribute 'max_tokens'

LangChain package versions:

langchain-core==0.3.15
langchain-google-vertexai==2.0.7

@lkuligin
Collaborator

Good point. Would you be open to sending a PR? Please create an alias for the pydantic field, in the same way we do it for model/model_name.
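The alias pattern suggested above could look roughly like the following sketch. This is an assumed, simplified stand-in (DemoVertexModel is hypothetical, not the actual ChatVertexAI code): a pydantic field alias accepts the standard max_tokens name at init time, and a read-only property re-exposes it on the instance.

```python
from pydantic import BaseModel, ConfigDict, Field

class DemoVertexModel(BaseModel):
    """Hypothetical stand-in for ChatVertexAI illustrating a field alias."""
    model_config = ConfigDict(populate_by_name=True)
    # Callers may pass max_tokens=...; the value is stored as max_output_tokens.
    max_output_tokens: int = Field(default=1024, alias="max_tokens")

    @property
    def max_tokens(self) -> int:
        # Re-expose the standard LangChain name on the instance.
        return self.max_output_tokens

m = DemoVertexModel(max_tokens=500)
print(m.max_output_tokens)  # 500
print(m.max_tokens)         # 500
```

With populate_by_name=True, the model also accepts the provider-specific name (max_output_tokens=...) directly, so both spellings work at construction time.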

@trebedea

trebedea commented Feb 17, 2025

Is there a reason for the newly added max_tokens property not to have a setter?
There are cases where you want to change the max_tokens attribute between calls, for example when you are interested only in a "Yes/No" answer without any longer explanation. This happens in NeMo Guardrails, and VertexAI is one of the few LangChain providers that does support this: NVIDIA/NeMo-Guardrails#993
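A writable alias of the kind described above could be sketched with a property setter. This is plain Python for illustration only (DemoVertexModel is a hypothetical stand-in, not the library's actual implementation): writes through the standard name are forwarded to the underlying provider-specific attribute.

```python
class DemoVertexModel:
    """Hypothetical stand-in showing a writable max_tokens alias."""

    def __init__(self, max_tokens: int = 1024):
        # Stored under the Vertex-specific name, as the issue describes.
        self.max_output_tokens = max_tokens

    @property
    def max_tokens(self) -> int:
        return self.max_output_tokens

    @max_tokens.setter
    def max_tokens(self, value: int) -> None:
        # Writing through the standard name updates the underlying attribute.
        self.max_output_tokens = value

m = DemoVertexModel(max_tokens=500)
m.max_tokens = 5  # e.g. cap output for a short "Yes/No" answer
print(m.max_output_tokens)  # 5
```

A setter keeps the two names in sync in both directions, which is what the subsequent-call use case needs.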


3 participants