feat(llma): multimodal-capture #378
base: master
Conversation
Additional Comments (1)
- posthog/ai/gemini/gemini_converter.py, line 178: logic: Function `_extract_text_from_parts` no longer exists; it was replaced by `_format_parts_as_content_blocks`.
4 files reviewed, 3 comments
What about Anthropic and LC?
Anthropic just uses the sanitisation methods and has no audio processing as of now, so simply disabling sanitisation automatically captures all inline data. I tested it and it displays properly on the frontend. LC is not implemented yet; I've yet to add it to the multimodal testing tools and experiment with it.
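The "disabling sanitisation captures inline data" behaviour described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual SDK code; the function and field names (`process_content_block`, `type`, `data`) are assumptions.

```python
def process_content_block(block: dict, multimodal_enabled: bool) -> dict:
    """Hypothetical sketch: with the multimodal gate on, inline data
    (e.g. base64 image/audio blocks) passes through untouched; with it
    off, the payload is sanitised to a placeholder before capture."""
    if multimodal_enabled:
        # Gate on: capture the block as-is, inline data included.
        return block
    sanitised = dict(block)
    if sanitised.get("type") in ("image", "audio"):
        # Gate off: redact heavy inline payloads before sending the event.
        sanitised["data"] = "[redacted]"
    return sanitised
```

The point of the design is that the multimodal path needs no extra processing for Anthropic-style inline blocks: skipping the sanitisation step is enough for the frontend to render them.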
Enables the AI SDKs to capture multimodal data. This behaviour is gated behind an
`_INTERNAL_LLMA_MULTIMODAL` environment variable, which must be set to `true`, and should not be used in production until the new ingestion pipeline is the default for LLMA events. This is intentionally not a release PR: it changes no behaviour for end users and only merges this code so we can keep iterating on it before committing to a public feature release.