Pulldown to select the LLM not working #1765
fmandelbaum started this conversation in General
Replies: 1 comment
I think bolt.new needs something like a 32K context window. As I recall Ollama defaults to something like 2K.
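For reference, a rough sketch of how the context window can be raised on the Ollama side (this is not the project's official fix; the base model name llama3.2 and the 32768 value are just assumptions to adapt to your own setup):

```sh
# Build an Ollama model variant with a larger context window.
# The base model (llama3.2) and num_ctx value (32768) are only examples.
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER num_ctx 32768
EOF
ollama create llama3.2-32k -f Modelfile
# Then pick "llama3.2-32k" in the model pulldown instead of the default tag.
```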
-
Hello, thanks for all your hard work on running bolt.new locally.
I've followed the instructions on https://www.youtube.com/watch?v=31ivQdydmGg, and after copying my .env.local to /app in the Docker container, I have it running with no warnings. My plan is to use a locally running Ollama; I don't have (and don't intend to get) an API key for any external AI service.
So, I select Ollama in the pulldown and pick my ollama-3.2 "small" model (my laptop doesn't have one of those newer M chips...), and I get an error in the UI no matter which prompt I use. Checking the Docker logs for the container, I see that it apparently always tries to use the first model (Anthropic), no matter which model I select in the pulldowns.
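For reference, a minimal sketch of the Ollama-related setting in .env.local (assuming the OLLAMA_API_BASE_URL variable name from the project's .env.example; the host.docker.internal address is only an example for reaching Ollama on the host from inside the container):

```sh
# Sketch of the Ollama-related part of .env.local, assuming the project reads
# OLLAMA_API_BASE_URL (as in its .env.example). When bolt.new runs inside Docker
# and Ollama runs on the host, "localhost" inside the container is not the host,
# so host.docker.internal (or the host's LAN IP) is typically needed:
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
```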
I'd really appreciate it if this could be fixed. As I said, and as you promote in your videos, I want a fully local setup, with no need to talk to any external AI API.
Thanks in advance.
Best regards.