What are Sessions?

Sessions allow you to group related LLM calls under a single identifier. This is useful for:
  • Multi-turn conversations - Track the entire conversation flow
  • User workflows - Analyze how users interact with your AI features
  • Debugging - Trace issues across multiple related calls
  • Analytics - Understand conversation patterns and user behavior

Use Cases

Chatbots

Group all messages in a conversation thread together

Multi-step Workflows

Track related API calls in complex workflows

A/B Testing

Compare conversation flows across different experiments

User Journeys

Analyze how users navigate your AI features

Vercel AI SDK

Pass session IDs via providerOptions.observ.sessionId:
import { generateText } from "ai";

// Group related calls with a session ID
const result = await generateText({
  model, // Your wrapped model
  prompt: "Explain React hooks",
  providerOptions: {
    observ: {
      sessionId: "conversation_abc123",
    },
  },
});

// All calls with the same sessionId will be grouped
// in your Observ dashboard for easy tracking
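
For a multi-turn conversation with the AI SDK, reuse the same sessionId on every call and carry the message history forward. A minimal sketch (the follow-up prompt and history handling are illustrative):
import { generateText } from "ai";

const sessionId = "conversation_abc123";

// First turn
const first = await generateText({
  model, // Your wrapped model
  prompt: "Explain React hooks",
  providerOptions: {
    observ: { sessionId },
  },
});

// Follow-up turn in the same session, carrying the history forward
const followUp = await generateText({
  model,
  messages: [
    { role: "user", content: "Explain React hooks" },
    { role: "assistant", content: first.text },
    { role: "user", content: "Show an example with useEffect" },
  ],
  providerOptions: {
    observ: { sessionId },
  },
});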

Provider SDKs

Chain .withSessionId() (TypeScript) or .with_session_id() (Python) before your API call.

Multi-turn Conversation Example

// Group related calls with a session ID
const sessionId = "conversation_abc123";

// First message
const response1 = await wrappedClient.messages
  .withSessionId(sessionId)
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "What is TypeScript?" }],
  });

// Follow-up in the same session
const response2 = await wrappedClient.messages
  .withSessionId(sessionId)
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [
      { role: "user", content: "What is TypeScript?" },
      { role: "assistant", content: response1.content[0].text },
      { role: "user", content: "How does it compare to JavaScript?" },
    ],
  });

All Providers

Sessions work with all supported providers:
// Anthropic
const response = await wrappedClient.messages
  .withSessionId("conversation_abc123")
  .create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello!" }],
  });

// OpenAI
const response = await wrappedClient.chat.completions
  .withSessionId("conversation_abc123")
  .create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  });

// Mistral
const response = await wrappedClient.chat.completions
  .withSessionId("conversation_abc123")
  .create({
    model: "mistral-large-latest",
    messages: [{ role: "user", content: "Hello!" }],
  });

// xAI (Grok)
const response = await wrappedClient.chat.completions
  .withSessionId("conversation_abc123")
  .create({
    model: "grok-beta",
    messages: [{ role: "user", content: "Hello!" }],
  });

// OpenRouter
const response = await wrappedClient.chat.completions
  .withSessionId("conversation_abc123")
  .create({
    model: "anthropic/claude-3.5-sonnet",
    messages: [{ role: "user", content: "Hello!" }],
  });

Best Practices

Use the same session ID across all calls in a conversation thread, and generate a unique ID per conversation (e.g., a UUID or user ID + timestamp):
// Good - consistent ID per conversation
const sessionId = `user_${userId}_${timestamp}`;

// Bad - random ID per call
const sessionId = Math.random().toString();

Make session IDs meaningful for easier debugging:
// Good - includes context
const sessionId = `support_chat_user123_20240108`;

// Bad - opaque ID
const sessionId = "abc123";

Session IDs are just strings - they don’t consume resources. However, for privacy, you may want to rotate them periodically.
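
One way to combine these practices is to mint a session ID once when a conversation starts and reuse it for every call in that thread. A minimal sketch, assuming an in-memory store (the newSessionId and getSessionId helpers are illustrative, not part of any SDK):
// Mint one meaningful, unique ID when a conversation starts
function newSessionId(userId: string): string {
  return `support_chat_${userId}_${crypto.randomUUID()}`;
}

// Remember the ID so every call in the conversation reuses it
const sessions = new Map<string, string>();

function getSessionId(conversationId: string, userId: string): string {
  let id = sessions.get(conversationId);
  if (!id) {
    id = newSessionId(userId);
    sessions.set(conversationId, id);
  }
  return id;
}

// In a request handler:
// const sessionId = getSessionId(conversationId, userId);
// const response = await wrappedClient.messages
//   .withSessionId(sessionId)
//   .create({ ... });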

Viewing Sessions in Dashboard

All traces with the same session ID are grouped together in the Dashboard. You can:
  • View the entire conversation flow
  • See timing and cost for each call
  • Filter by session ID
  • Analyze conversation patterns

Use consistent session IDs for conversation threads. View grouped traces in the Dashboard.

Next Steps