
ollama isn't working with autogen #767

@barshag

Description


Tried this code:

import autogen
from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "openhermes2.5-mistral",
        "base_url": "http://localhost:11434/",  # local Ollama server
        "api_key": "NULL",  # placeholder; no real key for a local server
    }
]

llm_config = {
    # "request_timeout": 800,
    "config_list": config_list,
}

assistant = AssistantAgent(
    "assistant",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding"},
)

user_proxy.initiate_chat(
    assistant,
    message="What is the name of the model you are based on?",
)

Got this error:
_base_client.py", line 885, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found
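
A likely cause: the OpenAI client (the _base_client.py in the traceback) appends /chat/completions to base_url, so pointing it at http://localhost:11434/ hits a route Ollama does not serve and returns its bare "404 page not found". Below is a minimal sketch, assuming an Ollama version that exposes its OpenAI-compatible API under /v1; with older Ollama versions, a LiteLLM proxy URL would go in base_url instead.

# Sketch, assuming Ollama's OpenAI-compatible API is available under /v1;
# otherwise point base_url at a LiteLLM proxy in front of Ollama.
from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "openhermes2.5-mistral",
        "base_url": "http://localhost:11434/v1",  # note the /v1 suffix
        "api_key": "ollama",  # any non-empty placeholder; ignored locally
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding"},
)
user_proxy.initiate_chat(
    assistant,
    message="What is the name of the model you are based on?",
)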
