Currently, VLM functionality lives in a separate example.
Instead, we should accept structured inputs via the catgrad-llm conversation types, covering both the Anthropic and OpenAI formats.
Q: what should happen if you try to call a model which doesn't support images? Preferably this errors at template-time.