## Basic Setup
The setup is the same for all providers:

- Import the Observ SDK and your chosen provider
- Create an Observ instance with your API key
- Wrap the model with `observ.wrap()`
- Use the model with Vercel AI SDK functions
## Provider Examples

### OpenAI
```ts
import { Observ } from "observ-sdk";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true, // Enable semantic caching
});

// Wrap the model
const model = observ.wrap(openai("gpt-4"));

// Use it normally
const result = await generateText({
  model,
  prompt: "What is TypeScript?",
});

console.log(result.text);
```
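Because `recall` is enabled above, repeated or semantically similar prompts may be answered from Observ's cache rather than the provider. A minimal sketch, assuming cache behavior is transparent to the call site (how hits surface in traces isn't covered here):

```ts
// First call goes to the provider and is cached by Observ.
const first = await generateText({
  model,
  prompt: "What is TypeScript?",
});

// A repeat of the same prompt may be served from the semantic cache.
const second = await generateText({
  model,
  prompt: "What is TypeScript?",
});
```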
### Anthropic

```ts
import { Observ } from "observ-sdk";
import { anthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const model = observ.wrap(anthropic("claude-sonnet-4-20250514"));

const result = await generateText({
  model,
  prompt: "What is TypeScript?",
});

console.log(result.text);
```
### Google

```ts
import { Observ } from "observ-sdk";
import { google } from "@ai-sdk/google";
import { generateText } from "ai";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const model = observ.wrap(google("gemini-1.5-pro"));

const result = await generateText({
  model,
  prompt: "What is TypeScript?",
});

console.log(result.text);
```
### Mistral

```ts
import { Observ } from "observ-sdk";
import { mistral } from "@ai-sdk/mistral";
import { generateText } from "ai";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const model = observ.wrap(mistral("mistral-large-latest"));

const result = await generateText({
  model,
  prompt: "What is TypeScript?",
});

console.log(result.text);
```
### Cohere

```ts
import { Observ } from "observ-sdk";
import { cohere } from "@ai-sdk/cohere";
import { generateText } from "ai";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const model = observ.wrap(cohere("command-r-plus"));

const result = await generateText({
  model,
  prompt: "What is TypeScript?",
});

console.log(result.text);
```
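Because the setup is identical everywhere, a single Observ instance can wrap models from several providers at once. A sketch, assuming `wrap()` can be called repeatedly on the same instance:

```ts
import { Observ } from "observ-sdk";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

// One instance, two wrapped models; both are traced with the same API key.
const gpt = observ.wrap(openai("gpt-4"));
const claude = observ.wrap(anthropic("claude-sonnet-4-20250514"));
```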
## Streaming

Streaming works automatically with all providers:
```ts
import { streamText } from "ai";

// Use any wrapped model
const stream = await streamText({
  model, // Your wrapped model
  prompt: "Write a detailed explanation of async/await",
});

// Stream chunks to the client
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```
Streaming requests are fully traced in Observ, including latency metrics for
each chunk.
## Session Tracking and Metadata

You can add session tracking and custom metadata via `providerOptions`:

### Session Tracking
```ts
import { generateText } from "ai";

const result = await generateText({
  model, // Your wrapped model
  prompt: "Explain React hooks",
  providerOptions: {
    observ: {
      sessionId: "conversation_abc123",
    },
  },
});
```
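The same options can be passed to streaming calls. A sketch, assuming the `observ` provider options are honored by `streamText` the same way as by `generateText` (the examples here only show `generateText`):

```ts
import { streamText } from "ai";

const stream = await streamText({
  model, // Your wrapped model
  prompt: "Write a detailed explanation of async/await",
  providerOptions: {
    observ: {
      sessionId: "conversation_abc123", // ties the streamed trace to a session
    },
  },
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```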
### Custom Metadata

```ts
import { generateText } from "ai";

const result = await generateText({
  model, // Your wrapped model
  prompt: "Explain React hooks",
  providerOptions: {
    observ: {
      metadata: {
        user_id: "user_123",
        feature: "documentation",
        version: "2.0",
      },
    },
  },
});
```
### Combined

```ts
const result = await generateText({
  model,
  prompt: "Explain React hooks",
  providerOptions: {
    observ: {
      sessionId: "conversation_abc123",
      metadata: {
        user_id: "user_123",
        feature: "documentation",
      },
    },
  },
});
```
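Reusing one `sessionId` across calls is what groups them into a single conversation. A sketch of a two-turn exchange, assuming nothing beyond passing the same id each time:

```ts
const sessionId = "conversation_abc123";

const turn1 = await generateText({
  model,
  prompt: "Explain React hooks",
  providerOptions: { observ: { sessionId } },
});

// The same id ties this trace to the previous turn.
const turn2 = await generateText({
  model,
  prompt: `Based on this explanation, compare hooks to class components:\n${turn1.text}`,
  providerOptions: { observ: { sessionId } },
});
```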
## Configuration Options

When creating the Observ instance, you can configure the following options:
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | — | Your Observ API key (required) |
| `recall` | `boolean` | `false` | Enable semantic caching |
| `environment` | `string` | `"production"` | Environment tag for filtering |
| `debug` | `boolean` | `false` | Enable debug logging |
Example with all options:

```ts
import { Observ } from "observ-sdk";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
  environment: "staging",
  debug: true,
});
```
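In real projects you'd typically pull these values from the environment instead of hardcoding them. A sketch, assuming Node-style environment variables (`OBSERV_API_KEY` is an illustrative name, not an SDK convention):

```ts
import { Observ } from "observ-sdk";

const observ = new Observ({
  apiKey: process.env.OBSERV_API_KEY!, // illustrative variable name
  recall: true,
  environment: process.env.NODE_ENV === "production" ? "production" : "staging",
  debug: process.env.NODE_ENV !== "production",
});
```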
## Next Steps