Status: Closed
Labels: models (Pertains to using alternate, non-GPT, models, e.g., local models, llama, etc.)
Description
Tried this code:
import autogen
from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "openhermes2.5-mistral",
        "base_url": "http://localhost:11434/",
        "api_key": "NULL",
    }
]

llm_config = {
    # "request_timeout": 800,
    "config_list": config_list,
}

assistant = AssistantAgent(
    "assistant",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding"},
)

user_proxy.initiate_chat(
    assistant,
    message="What is the name of the model you are based on?",
)
Got:
_base_client.py", line 885, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found
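A likely cause of the 404, for reference: the OpenAI client appends its own route (e.g. `/chat/completions`) to `base_url`, and Ollama serves its OpenAI-compatible API under the `/v1` prefix, so pointing `base_url` at the server root hits a path that does not exist. A minimal sketch of the corrected `config_list`, assuming a local Ollama instance serving `openhermes2.5-mistral`:

```python
# Sketch: point base_url at Ollama's OpenAI-compatible endpoint (/v1).
# The api_key must be a non-empty string for the client, but Ollama ignores it.
config_list = [
    {
        "model": "openhermes2.5-mistral",
        "base_url": "http://localhost:11434/v1",  # note the /v1 suffix
        "api_key": "ollama",  # placeholder value; any non-empty string works
    }
]
```

The rest of the script above should work unchanged with this `config_list`.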