Trace

Complete observability with environment isolation, session tracking, and custom metadata

Recall

Semantic caching for significant cost and latency savings at scale

Resilience (Coming Soon)

Automatic model fallbacks for improved reliability
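Resilience has not shipped yet, so its API is not final. The sketch below only illustrates the fallback pattern it describes: try each model in order and return the first success. The `ModelCall` type and `withFallbacks` helper are hypothetical stand-ins, not Observ's actual interface.

```typescript
// Conceptual sketch of automatic model fallback. `ModelCall` and
// `withFallbacks` are illustrative names, not the released Observ API.
type ModelCall = (prompt: string) => Promise<string>;

async function withFallbacks(prompt: string, models: ModelCall[]): Promise<string> {
  let lastError: unknown;
  for (const call of models) {
    try {
      return await call(prompt); // first model that succeeds wins
    } catch (err) {
      lastError = err; // record the failure and try the next model
    }
  }
  throw lastError; // every model in the chain failed
}
```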

Adapt (Coming Soon)

Smart model selection and cost optimization

Supported Providers

Observ works seamlessly with all major AI providers:
  • Anthropic - Claude models including Opus 4.5
  • OpenAI - All GPT models including GPT-5.1
  • Mistral - Mistral Large and other Mistral models
  • Google - Gemini models (Python SDK)
  • xAI - Grok models
  • OpenRouter - Access to 100+ models through a single interface
With Vercel AI SDK integration, you can access 25+ providers through a unified API.

Two Ways to Use Observ

Vercel AI SDK

Use Vercel AI SDK’s unified API for seamless multi-provider support with automatic provider switching

Provider SDKs

Wrap your existing provider SDK clients (Anthropic, OpenAI, etc.) directly for drop-in observability
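To make the "wrap your client" idea concrete, here is a minimal sketch of how a drop-in wrapper can work: a `Proxy` intercepts method calls on the SDK client, forwards them unchanged, and records a trace entry with timing. The `observe` function and `TraceEntry` shape are illustrative assumptions, not Observ's actual implementation.

```typescript
// Conceptual sketch of wrapping an existing SDK client for observability.
// `observe` and `TraceEntry` are hypothetical names for illustration only.
interface TraceEntry { method: string; durationMs: number }

function observe<T extends object>(client: T, log: TraceEntry[]): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== "function") return value;
      return async (...args: unknown[]) => {
        const start = Date.now();
        try {
          return await value.apply(target, args); // forward to the real SDK
        } finally {
          // Record the call even if it threw, so failures are traced too.
          log.push({ method: String(prop), durationMs: Date.now() - start });
        }
      };
    },
  });
}
```

Because the wrapper forwards every call untouched, existing code keeps working; only the trace log is new.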

Key Features

Session Tracking

Group related LLM calls using session IDs: perfect for tracking multi-turn conversations and analyzing user workflows.

Learn about Sessions

Track conversation threads and multi-step workflows
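The grouping described above can be sketched in a few lines: every trace carries a session ID, and a query reassembles the conversation thread. The `Trace` shape here is an illustrative assumption, not Observ's actual trace schema.

```typescript
// Conceptual sketch of session grouping. The Trace shape is illustrative,
// not Observ's actual schema.
interface Trace { sessionId: string; prompt: string; response: string }

function groupBySession(traces: Trace[]): Map<string, Trace[]> {
  const sessions = new Map<string, Trace[]>();
  for (const t of traces) {
    const group = sessions.get(t.sessionId) ?? [];
    group.push(t); // traces stay in arrival order within a session
    sessions.set(t.sessionId, group);
  }
  return sessions;
}
```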

Custom Metadata

Attach custom key-value pairs to traces for powerful filtering, debugging, and analytics in your dashboard.

Learn about Metadata

Add context to your traces with custom metadata
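The filtering this enables can be sketched as follows: each trace carries free-form key-value metadata, and a query matches traces whose metadata contains every requested pair. The `MetaTrace` shape and `filterByMetadata` helper are illustrative assumptions, not Observ's dashboard internals.

```typescript
// Conceptual sketch of filtering traces by custom metadata.
// MetaTrace and filterByMetadata are illustrative names only.
interface MetaTrace { id: string; metadata: Record<string, string> }

function filterByMetadata(traces: MetaTrace[], query: Record<string, string>): MetaTrace[] {
  // A trace matches only if every key-value pair in the query is present.
  return traces.filter((t) =>
    Object.entries(query).every(([k, v]) => t.metadata[k] === v)
  );
}
```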

Recall

Enable intelligent caching with `recall: true` to automatically serve cached responses for similar prompts, reducing token costs by up to 85% and cutting latency by up to 360x.
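To illustrate what "caching similar prompts" means, here is a toy semantic cache: prompts are embedded as vectors, and a lookup returns a cached response when cosine similarity clears a threshold. This is a conceptual sketch only; Observ's Recall uses its own embeddings and index, and the class and threshold below are assumptions.

```typescript
// Toy semantic cache: a lookup hits when cosine similarity between the
// query vector and a stored vector clears a threshold. Illustrative only.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class SemanticCache {
  private entries: { vector: number[]; response: string }[] = [];
  constructor(private threshold = 0.9) {}

  get(vector: number[]): string | undefined {
    // Linear scan for clarity; a production cache would use an ANN index.
    let best: { score: number; response: string } | undefined;
    for (const e of this.entries) {
      const score = cosine(vector, e.vector);
      if (score >= this.threshold && (!best || score > best.score)) {
        best = { score, response: e.response };
      }
    }
    return best?.response;
  }

  set(vector: number[], response: string): void {
    this.entries.push({ vector, response });
  }
}
```

A near-duplicate prompt embeds to a nearby vector, so it hits the cache and skips the model call entirely, which is where the cost and latency savings come from.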

Next Steps

Quick Start

Get started in under 5 minutes

Vercel AI SDK

Use with Vercel AI SDK for 25+ providers

Provider SDKs

Wrap your existing SDK clients

Dashboard

View your traces and analytics