We currently invoke LLMs through ChatLiteLLM, the main LangChain wrapper for basic LiteLLM usage: https://python.langchain.com/docs/integrations/chat/litellm/
Our basic usage looks like:

from langchain_litellm import ChatLiteLLM

model = ChatLiteLLM(
    model_name="litellm_proxy/gemma3",
    api_key=...,
    api_base=...,
)
This enhancement will let GraphRAG-SDK take fuller advantage of LiteLLM's capabilities.