## What's Changed
This release adds support for streaming LLM output through output rails, enabling more responsive, real-time interactions. It also introduces a new integration with Prompt Security and updates the existing ActiveFence and Private AI integrations.
## 🚀 Features
- Support Output Rails Streaming (#966, #1003) by @Pouyanpi
- Add unified output mapping for actions (#965) by @Pouyanpi
- Add output rails support to ActiveFence integration (#940) by @noamlevy81
- Add Prompt Security integration (#920) by @lior-ps
- Add PII masking capability to Private AI integration (#901) by @letmerecall
- Add embedding_params to BasicEmbeddingsIndex (#898) by @Pouyanpi
- Add score threshold to AnalyzerEngine (#845) by @Pouyanpi
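With output rails streaming (#966, #1003), output rails can be applied to chunks of the response as they are generated instead of waiting for the full completion. The snippet below is a minimal sketch of how this might be enabled in a guardrails `config.yml`; the exact option names (`chunk_size`, `context_size`) are illustrative and should be checked against the current configuration guide:

```yaml
# config.yml (sketch) — enable streaming and apply output rails per chunk
streaming: True

rails:
  output:
    flows:
      - self check output
    streaming:
      enabled: True
      # Illustrative tuning knobs: how many tokens each checked chunk
      # contains, and how much trailing context carries over between chunks.
      chunk_size: 200
      context_size: 50
```

With a configuration along these lines, rails run on each streamed chunk, so policy violations can interrupt a response mid-stream rather than after it completes.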
## 🐛 Bug Fixes
- Fix dependency resolution issues in AlignScore Dockerfile (#1002, #982) by @Pouyanpi
- Fix JailbreakDetect Dockerfiles (#981, #1001) by @erickgalinkin
- Fix TypeError from attempting to unpack already-unpacked dictionary (#959) by @erickgalinkin
- Fix token stats usage in LLM call info (#953) by @trebedea
- Handle unescaped quotes in generate_value using safe_eval (#946) by @milk333445
- Handle non-relative file paths (#897) by @Pouyanpi
## 📚 Documentation
- Document output streaming (#976) by @mikemckiernan
- Fix typos with oauthtoken (#957) by @Pouyanpi
- Fix broken link in Prompt Security docs (#978) by @lior-ps
- Update advanced user guides per v0.11.1 doc release (#937) by @Pouyanpi
## ⚙️ Miscellaneous Tasks
- Tolerate prompt in code blocks (#1004) by @mikemckiernan
- Update YAML indent to use two spaces (#1009) by @mikemckiernan
## New Contributors
- @milk333445 made their first contribution in #946
- @lior-ps made their first contribution in #920
**Full Changelog**: v0.11.1...v0.12.0