# Observ

## Docs

- [Custom Metadata](https://docs.observ.dev/features/metadata.md): Attach custom key-value pairs to traces for powerful filtering, debugging, and analytics in your dashboard
- [Session Tracking](https://docs.observ.dev/features/sessions.md): Group related LLM calls together using session IDs - perfect for tracking multi-turn conversations and analyzing user workflows
- [Welcome to Observ](https://docs.observ.dev/introduction.md): Observ is a unified LLM completion gateway with intelligent semantic caching (Recall), API key management, usage analytics, and comprehensive tracing
- [Configuration Options](https://docs.observ.dev/provider-sdks/configuration.md): Configure the Observ SDK when creating an instance to control tracing, caching, and debugging behavior
- [Installing Provider SDKs](https://docs.observ.dev/provider-sdks/installation.md): Install provider SDKs with Observ for TypeScript and Python
- [Provider SDK Integration](https://docs.observ.dev/provider-sdks/overview.md): Wrap your existing AI provider SDK clients to automatically trace all LLM calls and optionally cache similar prompts with semantic caching (see the sketch after the lists below)
- [Using Provider SDKs with Observ](https://docs.observ.dev/provider-sdks/usage.md): Learn how to wrap your provider SDK clients for automatic tracing and caching
- [Quick Start](https://docs.observ.dev/quickstart.md): Get Observ up and running in your application with a few lines of code
- [Installing Vercel AI SDK](https://docs.observ.dev/vercel-ai/installation.md): Follow these steps to install the Vercel AI SDK with Observ
- [Vercel AI SDK Integration](https://docs.observ.dev/vercel-ai/overview.md): Use Vercel AI SDK's unified API to work with 25+ AI providers through a single interface, all with automatic Observ tracing and caching
- [Using Vercel AI SDK with Observ](https://docs.observ.dev/vercel-ai/usage.md): Learn how to wrap and use Vercel AI SDK models with Observ for automatic tracing and caching

## OpenAPI Specs

- [openapi](https://docs.observ.dev/api-reference/openapi.json)
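
The Provider SDK pages above describe a wrap-and-trace workflow: create an Observ instance, wrap an existing provider client, and every completion call is traced and optionally served from the semantic cache. Below is a minimal TypeScript sketch of that shape. Everything Observ-specific in it (the `Observ` class, `wrap`, the config fields, and the commented per-call session/metadata options) is assumed for illustration and is not confirmed by this index; consult the linked docs for the actual API.

```typescript
// Hypothetical sketch of the Observ wrap-and-trace workflow.
// Only the OpenAI SDK calls below are real; all `Observ` names are assumptions.
import OpenAI from "openai";
import { Observ } from "observ"; // hypothetical package and export

// Hypothetical configuration fields, per the "Configuration Options" page.
const observ = new Observ({
  apiKey: process.env.OBSERV_API_KEY,
  tracing: true,
  caching: true, // semantic caching ("Recall")
});

// Wrap the existing provider client so calls are traced automatically.
const client = observ.wrap(new OpenAI());

// A standard OpenAI chat completion call; tracing happens transparently.
const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
// Hypothetical per-call options for sessions and custom metadata might look like:
//   { sessionId: "chat-123", metadata: { userTier: "pro" } }

console.log(completion.choices[0].message.content);
```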