I'm trying to create an Agent that describes the content of an image. When doing so, I noticed that the Agent uses the `text` property type in the request and sends the raw image data, which is unexpected.

Instead of using the `text` property, the Agent should use the dedicated content part `{ type: "image_url", image_url: { detail: "high", url: imageAsBase64String } }` to make the request. See link for more information.

Am I doing something wrong, or are images not supported yet? Looking at the roadmap, I noticed image support isn't listed. @andrew-lastmile
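For reference, this is roughly the request shape I'd expect (a minimal sketch using the `openai` Node SDK; the model name and `imageAsBase64String` are placeholders, and the image is assumed to be a base64 data URL):

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function describeImage(imageAsBase64String: string) {
  // imageAsBase64String is assumed to be a data URL,
  // e.g. "data:image/png;base64,iVBORw0KGgo...".
  const response = await client.chat.completions.create({
    model: "gpt-4o", // placeholder; any vision-capable model
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe the content of this image." },
          // The dedicated image content part, rather than raw bytes in a `text` part:
          { type: "image_url", image_url: { url: imageAsBase64String, detail: "high" } },
        ],
      },
    ],
  });
  return response.choices[0].message.content;
}
```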
Hi @animanathome, could you clarify how you're passing the image to the agent? Are you attempting to return the image as a tool response from an MCP server? If so, one potential blocker is that OpenAI's tool messages currently support only text content, which means you cannot directly pass an image back to the LLM through a tool result.
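One possible workaround is an application-level pattern: have the tool return a short text note, then re-attach the image as a user content part on the next turn. This is a sketch, not something the framework does today; the function and parameter names here are hypothetical.

```ts
import OpenAI from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const client = new OpenAI();

// Sketch: OpenAI tool messages accept only text, so the tool result is a
// plain-text note and the host app forwards the actual image in a user turn.
async function continueWithImage(
  prior: ChatCompletionMessageParam[], // must end with the assistant turn containing the tool call
  toolCallId: string,
  imageDataUrl: string, // e.g. "data:image/png;base64,...."
) {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // placeholder model name
    messages: [
      ...prior,
      // Text-only tool result, as the API requires:
      { role: "tool", tool_call_id: toolCallId, content: "Image fetched; attached in the next message." },
      // The image itself goes in a user message content part:
      {
        role: "user",
        content: [
          { type: "image_url", image_url: { url: imageDataUrl, detail: "high" } },
        ],
      },
    ],
  });
  return response.choices[0].message.content;
}
```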