
Basic Setup

The pattern is the same for all providers:
  1. Create an Observ instance with your API key
  2. Create your provider client
  3. Wrap it with ob.providerName(client)
  4. Use the wrapped client normally
Set recall: true in the Observ constructor to enable semantic caching and reduce costs.

Provider Examples

Anthropic

import Anthropic from "@anthropic-ai/sdk";
import Observ from "observ-sdk";

const ob = new Observ({
  apiKey: "your-observ-api-key",
  recall: true, // Enable semantic caching
});

const client = new Anthropic({ apiKey: "your-anthropic-key" });
const wrappedClient = ob.anthropic(client);

const response = await wrappedClient.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.content[0].text);

OpenAI

import OpenAI from "openai";
import Observ from "observ-sdk";

const ob = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const client = new OpenAI({ apiKey: "your-openai-key" });
const wrappedClient = ob.openai(client);

const response = await wrappedClient.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

Mistral

import { Mistral } from "@mistralai/mistralai";
import Observ from "observ-sdk";

const ob = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const client = new Mistral({ apiKey: "your-mistral-key" });
const wrappedClient = ob.mistral(client);

const response = await wrappedClient.chat.completions.create({
  model: "mistral-large-latest",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

xAI (Grok)

import OpenAI from "openai";
import Observ from "observ-sdk";

const ob = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const client = new OpenAI({
  apiKey: "your-xai-key",
  baseURL: "https://api.x.ai/v1",
});
const wrappedClient = ob.xai(client);

const response = await wrappedClient.chat.completions.create({
  model: "grok-beta",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

OpenRouter

import OpenAI from "openai";
import Observ from "observ-sdk";

const ob = new Observ({
  apiKey: "your-observ-api-key",
  recall: true,
});

const client = new OpenAI({
  apiKey: "your-openrouter-key",
  baseURL: "https://openrouter.ai/api/v1",
});
const wrappedClient = ob.openrouter(client);

const response = await wrappedClient.chat.completions.create({
  model: "anthropic/claude-3.5-sonnet",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

Adding Sessions and Metadata

You can enhance your traces with session tracking and custom metadata.

Session Tracking

Chain .withSessionId() (TypeScript) or .with_session_id() (Python) before your API call:
const sessionId = "conversation_abc123";

const response = await wrappedClient.messages
  .withSessionId(sessionId)
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello!" }],
  });
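
Reusing the same session ID across calls groups a multi-turn conversation into a single trace. A minimal sketch, assuming .withSessionId() behaves as above and that you pass the prior exchange back in messages (the Anthropic API is stateless):

const sessionId = "conversation_abc123";
// wrappedClient is the wrapped Anthropic client from Basic Setup above

// First turn
const first = await wrappedClient.messages
  .withSessionId(sessionId)
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "What is semantic caching?" }],
  });

// Second turn: same sessionId, with the earlier exchange in the history
const second = await wrappedClient.messages
  .withSessionId(sessionId)
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [
      { role: "user", content: "What is semantic caching?" },
      { role: "assistant", content: first.content[0].text },
      { role: "user", content: "How does it reduce costs?" },
    ],
  });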

Learn More

See the full Session Tracking guide for multi-turn conversation examples.

Custom Metadata

Chain .withMetadata() (TypeScript) or .with_metadata() (Python) to attach data:
const response = await wrappedClient.messages
  .withMetadata({
    user_id: "user_123",
    feature: "chat",
    version: "1.0.0"
  })
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello!" }],
  });
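
The two helpers can be chained on the same call. A hedged sketch, assuming .withSessionId() and .withMetadata() return the same chainable wrapper and can be combined in either order:

// wrappedClient is the wrapped Anthropic client from Basic Setup above
const response = await wrappedClient.messages
  .withSessionId("conversation_abc123")
  .withMetadata({ user_id: "user_123", feature: "chat" })
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello!" }],
  });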

Next Steps