1. Get your API Key

Create an account and get your API key from Settings → API Keys.
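Rather than hard-coding the key in source, you can read it from an environment variable. A minimal sketch follows; the variable name OBSERV_API_KEY is just a suggested convention, not something the SDK requires:

```typescript
// Read the API key from the environment so it never lands in source control.
// OBSERV_API_KEY is a suggested name; use whatever your deployment expects.
function getObservApiKey(): string {
  const key = process.env.OBSERV_API_KEY;
  if (!key) {
    throw new Error("OBSERV_API_KEY is not set");
  }
  return key;
}
```

You can then pass `getObservApiKey()` wherever the examples below use a literal key.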

2. Choose Your Integration

Quick Example: Vercel AI SDK

Step 1: Install dependencies

npm install observ-sdk ai @ai-sdk/openai
Step 2: Wrap your model

import { Observ } from "observ-sdk";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const observ = new Observ({
  apiKey: "your-observ-api-key",
  recall: true, // Enable semantic caching
});

// Wrap the model
const model = observ.wrap(openai("gpt-4"));
Step 3: Use it normally

const result = await generateText({
  model,
  prompt: "What is TypeScript?",
});

console.log(result.text);
Step 4: View traces in the dashboard

All of your LLM calls are now traced automatically. View them in the Dashboard.

Quick Example: Provider SDKs

Step 1: Install dependencies

npm install observ-sdk @anthropic-ai/sdk
Step 2: Wrap your client

import Anthropic from "@anthropic-ai/sdk";
import { Observ } from "observ-sdk";

const ob = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const client = new Anthropic({ apiKey: "your-anthropic-key" });
const wrappedClient = ob.anthropic(client);
Step 3: Use it normally

const response = await wrappedClient.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.content[0].text);

What’s Next?

Both examples above set recall: true, which enables semantic caching and can reduce costs by up to 85% on similar prompts.
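Conceptually, semantic caching means reusing a previous answer when a new prompt is close enough in embedding space. The SDK's actual implementation is not shown here; the sketch below is only a toy illustration of the idea, with a fake bag-of-words embedding and a hard-coded similarity threshold:

```typescript
// Toy semantic cache: stores (embedding, answer) pairs and returns a cached
// answer when a new prompt's embedding is within a cosine-similarity threshold.
// The embed() function is a fake fixed-vocabulary word counter purely for
// illustration; a real system would call an embedding model instead.

type CacheEntry = { embedding: number[]; answer: string };

const VOCAB = ["what", "is", "typescript", "explain", "language"];

function embed(text: string): number[] {
  const words = text.toLowerCase().replace(/[^a-z ]/g, "").split(/\s+/);
  return VOCAB.map((v) => words.filter((w) => w === v).length);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, x) => sum + x * x, 0));
  const normB = Math.sqrt(b.reduce((sum, x) => sum + x * x, 0));
  return normA && normB ? dot / (normA * normB) : 0;
}

class SemanticCache {
  private entries: CacheEntry[] = [];
  constructor(private threshold = 0.9) {}

  // Return a cached answer if any stored prompt is similar enough.
  get(prompt: string): string | undefined {
    const e = embed(prompt);
    const hit = this.entries.find(
      (c) => cosine(c.embedding, e) >= this.threshold
    );
    return hit?.answer;
  }

  set(prompt: string, answer: string): void {
    this.entries.push({ embedding: embed(prompt), answer });
  }
}
```

With a cache like this, a paraphrased prompt such as "what is typescript" can reuse the answer stored for "What is TypeScript?", while an unrelated prompt misses and falls through to the model.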