Drop-in replacement for OpenAI
Abso provides a unified interface for calling various LLMs while maintaining full type safety.
- OpenAI-compatible API 🔁 (drop-in replacement)
- Call any LLM provider (OpenAI, Anthropic, Groq, Ollama, etc.)
- Lightweight & Fast ⚡
- Embeddings support 🧮
- Unified tool calling 🛠️ (see the example below)
- Tokenizer and cost calculation (soon) 🔢
- Smart routing (soon)
| Provider | Chat | Streaming | Tool Calling | Embeddings | Tokenizer | Cost Calculation |
| --- | --- | --- | --- | --- | --- | --- |
| OpenAI | ✅ | ✅ | ✅ | ✅ | 🚧 | 🚧 |
| Anthropic | ✅ | ✅ | ✅ | ❌ | 🚧 | 🚧 |
| xAI Grok | ✅ | ✅ | ✅ | ❌ | 🚧 | 🚧 |
| Mistral | ✅ | ✅ | ✅ | ❌ | 🚧 | 🚧 |
| Groq | ✅ | ✅ | ✅ | ❌ | ❌ | 🚧 |
| Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | 🚧 |
| OpenRouter | ✅ | ✅ | ✅ | ❌ | ❌ | 🚧 |
| Voyage | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Azure | 🚧 | 🚧 | 🚧 | 🚧 | ❌ | 🚧 |
| Bedrock | 🚧 | 🚧 | 🚧 | 🚧 | ❌ | 🚧 |
| Gemini | ✅ | ✅ | ✅ | ❌ | 🚧 | ❌ |
| DeepSeek | ✅ | ✅ | ✅ | ❌ | 🚧 | ❌ |
| Perplexity | ✅ | ✅ | ❌ | ❌ | 🚧 | ❌ |

(✅ = supported, 🚧 = in progress, ❌ = not supported)
Install from npm:

```bash
npm install abso-ai
```
Basic usage mirrors the OpenAI SDK:

```ts
import { abso } from "abso-ai"

const result = await abso.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
})

console.log(result.choices[0].message.content)
```
Abso tries to infer the correct provider for a given model, but you can also select a provider manually:
```ts
const result = await abso.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "openai/gpt-4o",
  provider: "openrouter",
})

console.log(result.choices[0].message.content)
```
Streaming works the same way as in the OpenAI SDK: pass `stream: true` and iterate over the chunks.

```ts
const stream = await abso.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
  stream: true,
})

for await (const chunk of stream) {
  console.log(chunk)
}

// Helper to get the final aggregated result
const fullResult = await stream.finalChatCompletion()
console.log(fullResult)
```
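Tool calling uses the same OpenAI-compatible shape across providers. A minimal sketch; the `get_weather` tool and its JSON schema below are illustrative, not part of Abso:

```ts
const result = await abso.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  // Standard OpenAI-style tool definition; the tool itself is hypothetical.
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: {
            city: { type: "string" },
          },
          required: ["city"],
        },
      },
    },
  ],
})

// If the model decided to call a tool, the call appears on the message,
// as in the OpenAI SDK.
console.log(result.choices[0].message.tool_calls)
```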
Embeddings use the same unified interface:

```ts
const embeddings = await abso.embeddings.create({
  model: "text-embedding-3-small",
  input: ["A cat was playing with a ball on the floor"],
})

console.log(embeddings.data[0].embedding)
```
Tokenization is still in progress (see the table above); the planned interface looks like this:

```ts
const tokens = await abso.chat.tokenize({
  messages: [{ role: "user", content: "Hello, world!" }],
  model: "gpt-4o",
})

console.log(`${tokens.count} tokens`)
```
You can also configure the built-in providers directly by passing a configuration object, keyed by provider name, when instantiating Abso:
```ts
import { Abso } from "abso-ai"

const abso = new Abso({
  openai: { apiKey: "your-openai-key" },
  anthropic: { apiKey: "your-anthropic-key" },
  // add other providers as needed
})

const result = await abso.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
})

console.log(result.choices[0].message.content)
```
Alternatively, you can change which providers are loaded by passing a custom `providers` array to the constructor, as sketched below.
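A minimal sketch of that, assuming the constructor accepts a `providers` array of provider instances; the class name and import path below are hypothetical, so check the package docs for the real ones:

```ts
import { Abso } from "abso-ai"
// Hypothetical class name and import path, for illustration only.
import { OpenAIProvider } from "abso-ai/providers/openai"

// Load only the providers you need, each configured explicitly.
const abso = new Abso({
  providers: [new OpenAIProvider({ apiKey: "your-openai-key" })],
})
```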
You can use Abso with Lunary to get instant observability into your LLM usage. First, sign up to Lunary and get your public key. Then set the `LUNARY_PUBLIC_KEY` environment variable to your public key to enable observability:
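```bash
export LUNARY_PUBLIC_KEY="your-lunary-public-key"
```

With the variable set, your Abso calls are reported to Lunary; no other code changes are needed.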
Contributions are welcome; see our Contributing Guide.
On the roadmap:

- More providers
- Built-in caching
- Tokenizers
- Cost calculation
- Smart routing