Doramagic Project Pack · Human Manual

OpenLIT Overview

Related topics: Quick Start Guide, System Architecture

What is OpenLIT?

OpenLIT is an OpenTelemetry-native GenAI and LLM application observability tool designed to simplify sending OpenTelemetry traces and metrics from your LLM applications. It provides comprehensive monitoring capabilities for GenAI and LLM workloads alike.

Sources: src/client/src/app/(playground)/getting-started/page.tsx:127

Key Features

OpenLIT offers several core capabilities for observability:

| Feature Category | Description |
|---|---|
| Tracing | Capture detailed traces of LLM application requests |
| Metrics | Collect and analyze performance metrics |
| Evaluations | Assess response quality and model performance |
| Context Management | Manage evaluation contexts and prompts |
| Secrets Management | Securely store and manage API keys and credentials |

Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx, src/client/src/components/(playground)/getting-started/secrets/index.tsx, src/client/src/components/(playground)/getting-started/prompts/index.tsx

Architecture Overview

graph TD
    A[LLM Application] --> B[OpenLIT SDK]
    B --> C[OTLP Endpoint<br/>127.0.0.1:4318]
    C --> D[OpenLIT Backend]
    D --> E[OpenLIT UI<br/>127.0.0.1:3000]
    F[Database] <--> D

SDK Support

OpenLIT provides official SDKs for multiple programming languages:

Python SDK

The Python SDK enables Python-based LLM applications to send telemetry data to OpenLIT.

import openlit

openlit.init()

Sources: src/client/src/app/(playground)/getting-started/page.tsx

TypeScript/JavaScript SDK

The TypeScript SDK provides similar functionality for Node.js and browser-based applications.

import openlit from 'openlit';

openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

Example Usage with OpenAI:

import OpenAI from 'openai';
import openlit from 'openlit';

openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" });

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const chatCompletion = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'What is LLM Observability?' }],
  model: 'gpt-3.5-turbo',
});

Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx

Configuration Options

OTLP Endpoint Configuration

You can configure the OTLP endpoint in two ways:

| Method | Configuration |
|---|---|
| Code | openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" }) |
| Environment Variable | OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318" |

Sources: src/client/src/app/(playground)/getting-started/page.tsx

Environment Variables

| Variable | Purpose | Default Value |
|---|---|---|
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP collector endpoint | http://127.0.0.1:4318 |

Deployment

Docker Compose Deployment

OpenLIT can be deployed using Docker Compose from the root directory:

git clone git@github.com:openlit/openlit.git
cd openlit
docker compose up -d

Sources: src/client/src/app/(playground)/getting-started/page.tsx

Default Ports

| Service | Default Address |
|---|---|
| OpenLIT UI | http://127.0.0.1:3000 |
| OTLP Endpoint | http://127.0.0.1:4318 |

Default Credentials

After deployment, access the OpenLIT UI using the following default credentials:

| Field | Default Value |
|---|---|
| Email | user@openlit.io |
| Password | openlituser |

Sources: src/client/src/app/(playground)/getting-started/page.tsx

SDK Repository Locations

| SDK | Repository Path |
|---|---|
| Python SDK | sdk/python |
| TypeScript SDK | sdk/typescript |

Sources: src/client/src/app/(playground)/getting-started/page.tsx

Community and Support

OpenLIT maintains active community channels for support and discussions:

| Platform | Link |
|---|---|
| GitHub | https://github.com/openlit/openlit |
| Documentation | https://docs.openlit.io |
| Slack | Join via invitation link |
| X (Twitter) | @openlit_io |

Sources: src/client/README.md

Evaluation Features

OpenLIT supports custom evaluation types with configurable prompts and context:

// Evaluation prompt format example
[Domain Accuracy evaluation context]
Consider: whether the response aligns with domain-specific knowledge and terminology.
Look for incorrect use of domain terms, inaccurate domain-specific claims, and deviations from established domain practices.

Evaluations provide the following metrics:

  • Score: Numerical rating
  • Classification: Categorical classification
  • Explanation: Detailed reasoning
  • Verdict: Pass/fail determination

Sources: src/client/src/app/(playground)/evaluations/types/new/page.tsx Sources: src/client/src/components/(playground)/request/components/evaluations.tsx
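
To make the shape of an evaluation result concrete, here is a minimal Python sketch; the class and field names are illustrative stand-ins for the metrics listed above, not OpenLIT's actual schema.

from dataclasses import dataclass

@dataclass
class EvaluationResult:
    # Hypothetical container mirroring the four evaluation metrics above
    score: float          # numerical rating
    classification: str   # categorical classification
    explanation: str      # detailed reasoning
    verdict: str          # pass/fail determination, e.g. "pass" or "fail"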

Pricing Integration

OpenLIT can calculate costs for LLM usage based on token consumption:

cost = (input_tokens / 1M) × input_price + (output_tokens / 1M) × output_price

This includes:

  • Input token pricing per million tokens
  • Output token pricing per million tokens
  • Context window size tracking

Sources: src/client/src/components/(playground)/chat/chat-settings-form.tsx
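
To make the arithmetic concrete, here is a small Python sketch of the cost formula above; the function name and the sample prices are illustrative, not OpenLIT's actual implementation.

def calculate_cost(input_tokens: int, output_tokens: int,
                   input_price_per_m: float, output_price_per_m: float) -> float:
    # cost = (input_tokens / 1M) * input_price + (output_tokens / 1M) * output_price
    return (input_tokens / 1_000_000) * input_price_per_m \
        + (output_tokens / 1_000_000) * output_price_per_m

# Example: 1,200 input and 350 output tokens at $0.50/$1.50 per million tokens
cost = calculate_cost(1200, 350, 0.50, 1.50)  # 0.0006 + 0.000525 = 0.001125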

Sources: src/client/src/app/(playground)/getting-started/page.tsx:127

Quick Start Guide

Related topics: Python SDK Architecture

OpenLIT is an OpenTelemetry-native GenAI and LLM Application Observability tool designed to simplify the integration of tracing and metrics collection for AI applications. This guide provides comprehensive instructions for deploying OpenLIT and instrumenting your applications using the Python and TypeScript SDKs.

Prerequisites

Before beginning, ensure you have the following installed:

| Requirement | Version | Purpose |
|---|---|---|
| Docker | Latest | Container runtime for OpenLIT deployment |
| Docker Compose | Latest | Orchestration tool |
| Node.js | 18+ | Required for TypeScript SDK |
| Python | 3.8+ | Required for Python SDK |
| npm/pip | Latest | Package managers |

Deployment Options

OpenLIT can be deployed using multiple methods depending on your infrastructure requirements.

Docker Compose Deployment

The recommended approach for local development and testing is Docker Compose.

git clone git@github.com:openlit/openlit.git
cd openlit
docker compose up -d

Once deployed, access the OpenLIT UI at http://127.0.0.1:3000 using the default credentials:

| Field | Default Value |
|---|---|
| Email | user@openlit.io |
| Password | openlituser |

Sources: src/client/src/app/(playground)/getting-started/page.tsx:50-55

Controller Deployment

For infrastructure-level observability, the OpenLIT Controller can be deployed as a system service or containerized application.

Linux System Service

sudo tee /etc/systemd/system/openlit-controller.service <<EOF
[Unit]
Description=OpenLIT Controller
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/openlit
ExecStart=/opt/openlit/openlit-controller
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now openlit-controller

Sources: src/client/src/app/(playground)/agents/no-controller.tsx:12-25

Docker Deployment

docker run -d --privileged --pid=host \
  -e OPENLIT_URL="<openlit-url>" \
  -e OTEL_EXPORTER_OTLP_ENDPOINT="<openlit-url>:4318" \
  -v /proc:/host/proc:ro \
  -v /sys/kernel/debug:/sys/kernel/debug:ro \
  -v /sys/fs/bpf:/sys/fs/bpf:rw \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e OPENLIT_PROC_ROOT="/host/proc" \
  ghcr.io/openlit/controller:latest

Kubernetes Deployment

helm repo add openlit https://openlit.github.io/helm
helm repo update
helm upgrade --install openlit openlit/openlit \
  --set openlit-controller.enabled=true

Sources: src/client/src/app/(playground)/agents/no-controller.tsx:27-45

SDK Integration

OpenLIT provides SDKs for both Python and TypeScript environments to enable application-level observability.

Python SDK

Installation

Install the Python SDK using pip:

pip install openlit

Sources: src/client/src/app/(playground)/getting-started/page.tsx:85-92

Initialization

Add the following initialization code to your application:

import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

Alternatively, set the endpoint using the environment variable:

export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"

Complete Example with OpenAI

import os

import openlit
from openai import OpenAI

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is LLM Observability?"}]
)

Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:45-65

TypeScript SDK

Installation

Install the TypeScript SDK using npm:

npm install openlit

Initialization

Add the following initialization code to your application:

import openlit from 'openlit';

openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

Alternatively, set the endpoint using the environment variable OTEL_EXPORTER_OTLP_ENDPOINT.

Complete Example with OpenAI

import OpenAI from 'openai';
import openlit from 'openlit';

openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" });

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const chatCompletion = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'What is LLM Observability?' }],
  model: 'gpt-3.5-turbo',
});

Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:95-120

Configuration Reference

SDK Configuration Options

| Parameter | Type | Environment Variable | Description |
|---|---|---|---|
| otlp_endpoint | string | OTEL_EXPORTER_OTLP_ENDPOINT | OTLP exporter endpoint URL |
| api_key | string | OPENLIT_API_KEY | API key for authenticated endpoints |
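
For example, both options can be passed to openlit.init() together; the parameter names follow the table above and the values are placeholders.

import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",  # or set OTEL_EXPORTER_OTLP_ENDPOINT
    api_key="<your-openlit-api-key>",       # or set OPENLIT_API_KEY
)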

Controller Environment Variables

| Variable | Description |
|---|---|
| OPENLIT_URL | Base URL for the OpenLIT instance |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint for trace export |
| OPENLIT_API_KEY | API key for OpenLIT authentication |
| OPENLIT_PROC_ROOT | Root path for process information (default: /host/proc) |

Application Workflow

graph TD
    A[Deploy OpenLIT with Docker Compose] --> B[Access OpenLIT UI]
    B --> C{Choose Deployment Mode}
    C -->|Local Development| D[Install SDK in Application]
    C -->|System-wide| E[Deploy Controller]
    D --> F[Initialize SDK]
    F --> G[Instrument LLM Calls]
    G --> H[View Traces & Metrics in UI]
    E --> I[Auto-discover Services]
    I --> J[View Infrastructure Metrics]

Additional Resources

For more advanced configurations and use cases, refer to the following resources:

  • Python SDK: sdk/python
  • TypeScript SDK: sdk/typescript
  • Documentation: https://docs.openlit.io

Sources: src/client/src/app/(playground)/getting-started/page.tsx:100-115 Sources: src/client/src/app/not-found.tsx:20-35

Sources: src/client/src/app/(playground)/getting-started/page.tsx:50-55

System Architecture

Related topics: Data Flow and Management, Python SDK Architecture

Overview

OpenLIT is an OpenTelemetry-native GenAI and LLM Application Observability tool designed to simplify the integration of observability into AI applications. The system enables developers to send OpenTelemetry traces and metrics from their LLM applications with minimal configuration changes.

The architecture follows a distributed microservices pattern with clear separation between data collection (SDK instrumentation), data transmission (OTLP protocol), and data visualization (frontend dashboard).

High-Level Architecture

graph TB
    subgraph "Client Applications"
        PythonApp["Python Application"]
        TypeScriptApp["TypeScript/JS Application"]
    end

    subgraph "OpenLIT SDKs"
        PythonSDK["Python SDK<br/>pip install openlit"]
        TSSDK["TypeScript SDK<br/>npm install openlit"]
    end

    subgraph "Data Transport"
        OTLP["OTLP Endpoint<br/>:4318"]
    end

    subgraph "OpenLIT Backend"
        Frontend["Web Dashboard<br/>Port 3000"]
        API["API Services"]
        DB[( "ClickHouse<br/>Database" )]
    end

    PythonApp --> PythonSDK
    TypeScriptApp --> TSSDK
    PythonSDK --> OTLP
    TSSDK --> OTLP
    OTLP --> API
    API --> DB
    Frontend --> API

Core Components

SDK Layer

OpenLIT provides language-specific SDKs for instrumenting AI applications:

| SDK | Package Manager | Installation | Repository |
|---|---|---|---|
| Python | pip | pip install openlit | sdk/python |
| TypeScript | npm | npm install openlit | sdk/typescript |

Python SDK Initialization

import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

Sources: src/client/src/app/(playground)/getting-started/page.tsx:73-74

TypeScript SDK Initialization

import openlit from 'openlit';

openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

Sources: src/client/src/app/(playground)/getting-started/page.tsx:115-118

Data Transport Layer

The system uses the OpenTelemetry Protocol (OTLP) for transmitting telemetry data:

| Parameter | Default Value | Description |
|---|---|---|
| OTLP Endpoint | http://127.0.0.1:4318 | gRPC/HTTP endpoint for traces |
| Environment Variable | OTEL_EXPORTER_OTLP_ENDPOINT | Alternative endpoint configuration |

The OTLP endpoint can be configured either programmatically via SDK initialization or through environment variables.

Backend Services

Web Dashboard (Frontend)

The frontend is a Next.js application providing the user interface for:

  • Tracing View - Visualize request traces and spans
  • Agents Management - Configure and monitor AI agents
  • Model Management - Configure AI model providers and pricing
  • Getting Started - Onboarding documentation
  • Chat Interface - Interactive testing environment

The application runs on port 3000 by default and provides a login interface using the default credentials (user@openlit.io / openlituser).

Sources: src/client/src/app/(playground)/getting-started/page.tsx:40-44

Agent Lifecycle Management

OpenLIT supports managing AI agents with lifecycle operations:

stateDiagram-v2
    [*] --> Starting
    Starting --> Running
    Running --> Restarting
    Restarting --> Running
    Running --> Stopping
    Stopping --> [*]

Lifecycle actions include:

  • Start - Initialize the agent service
  • Stop - Terminate with confirmation dialog
  • Restart - Restart the agent process

Sources: src/client/src/app/(playground)/agents/lifecycle-actions.tsx:1-60

Controller Services

The OpenLIT Controller provides infrastructure-level observability for containerized and orchestrated environments:

| Deployment Method | Command/Configuration |
|---|---|
| Docker | docker run -d --privileged --pid=host ... ghcr.io/openlit/controller:latest |
| Kubernetes | helm upgrade --install openlit openlit/openlit --set openlit-controller.enabled=true |
| Systemd | Service unit file with systemctl enable |

Sources: src/client/src/app/(playground)/agents/no-controller.tsx:45-60

Controller Environment Variables

| Variable | Purpose |
|---|---|
| OPENLIT_URL | Main OpenLIT instance URL |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint for telemetry |
| OPENLIT_API_KEY | Authentication key (optional) |
| OPENLIT_PROC_ROOT | Process root for host monitoring |

Deployment Architecture

Docker Compose Deployment

For development and testing, OpenLIT can be deployed using Docker Compose:

git clone git@github.com:openlit/openlit.git
cd openlit
docker compose up -d

Sources: src/client/src/app/(playground)/getting-started/page.tsx:50-55

Multi-Platform Support

graph LR
    subgraph "Deployment Platforms"
        Docker["Docker"]
        K8s["Kubernetes"]
        SystemD["Systemd"]
    end

    subgraph "Monitoring Targets"
        Containers["Containers"]
        Processes["Host Processes"]
        Services["System Services"]
    end

    Docker --> Containers
    K8s --> Containers
    K8s --> Services
    SystemD --> Services
    SystemD --> Processes

Feature Architecture

Tracing Integration

OpenLIT's tracing feature provides comprehensive observability:

| Feature | Description |
|---|---|
| Auto-Instrumentation | Automatic capture of LLM calls |
| Span Attributes | Model, provider, token usage, latency |
| Context Propagation | Request tracing across services |
| Error Tracking | Exception and failure monitoring |

Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:1-100

Agent Schema Capture

The system captures tool schemas from agents for documentation and analysis:

interface ToolSchema {
  name: string;
  description?: string;
  schema: object;
}

Schemas are displayed in an expandable accordion format with JSON visualization.

Sources: src/client/src/components/(playground)/agents/tools-card.tsx:35-55

Model Configuration

OpenLIT supports custom model configurations with pricing information:

| Field | Type | Description |
|---|---|---|
| providerName | string | AI provider name |
| modelId | string | Model identifier |
| modelName | string | Display name |
| inputPricePerMToken | number | Input cost per million tokens |
| outputPricePerMToken | number | Output cost per million tokens |
| contextWindow | number | Maximum context length |

Sources: src/client/src/components/(playground)/chat/message-input.tsx:25-45
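
As a sketch, the same configuration can be expressed as a small Python structure; the snake_case field names below are assumptions mirroring the table (the client itself stores these in TypeScript).

from dataclasses import dataclass

@dataclass
class ModelConfig:
    provider_name: str                # AI provider name
    model_id: str                     # model identifier
    model_name: str                   # display name
    input_price_per_m_token: float    # input cost per million tokens
    output_price_per_m_token: float   # output cost per million tokens
    context_window: int               # maximum context length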

Data Flow

sequenceDiagram
    participant App as Application
    participant SDK as OpenLIT SDK
    participant OTLP as OTLP Endpoint
    participant API as OpenLIT API
    participant CH as ClickHouse
    participant UI as Web Dashboard

    App->>SDK: Initialize with config
    App->>SDK: LLM API Call
    SDK->>SDK: Capture trace/metrics
    SDK->>OTLP: Export telemetry
    OTLP->>API: Process spans
    API->>CH: Store data
    UI->>API: Query traces
    API->>UI: Return results
    UI->>UI: Render dashboard

Configuration Reference

SDK Configuration Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| otlp_endpoint | string | http://127.0.0.1:4318 | OTLP collector endpoint |
| service_name | string | auto-detect | Service identifier |
| api_key | string | none | Authentication for hosted services |

Environment Variables

| Variable | SDK Support | Description |
|---|---|---|
| OTEL_EXPORTER_OTLP_ENDPOINT | Python, TS | Global OTLP endpoint override |
| OPENLIT_API_KEY | All | API authentication key |
| OPENLIT_SERVICE_NAME | All | Override service name |

Security Considerations

Authentication

The system supports multiple authentication providers:

  • Email/Password - Local authentication with default credentials
  • OAuth Providers - Google and GitHub SSO integration

Sources: src/client/src/components/(auth)/auth-form.tsx:1-50

API Security

API endpoints are protected and require valid session tokens. The controller service supports optional API key authentication:

-e OPENLIT_API_KEY="your-api-key"

Technology Stack

| Layer | Technology |
|---|---|
| Frontend | Next.js, React, TypeScript, TailwindCSS |
| SDKs | Python, TypeScript |
| Telemetry | OpenTelemetry Protocol (OTLP) |
| Database | ClickHouse |
| Containerization | Docker, Kubernetes |
| Service Management | Systemd |

External Resources

| Resource | URL |
|---|---|
| Documentation | https://docs.openlit.io |
| GitHub Repository | https://github.com/openlit/openlit |
| TypeScript SDK | https://github.com/openlit/openlit/tree/main/sdk/typescript |
| Python SDK | https://github.com/openlit/openlit/tree/main/sdk/python |

Sources: src/client/src/app/(playground)/getting-started/page.tsx:73-74

Data Flow and Management

Related topics: System Architecture, Python SDK Architecture

Overview

OpenLIT is an OpenTelemetry-native observability platform designed for GenAI and LLM applications. The data flow architecture encompasses the entire lifecycle of telemetry data—from instrumentation at the application level through processing, storage, and visualization in the frontend UI.

The system follows a standard OpenTelemetry Collector pattern with platform-specific optimizations for handling GenAI-specific semantic conventions and metrics. Data flows through multiple layers: SDK instrumentation, OTLP export, backend processing, ClickHouse storage, and client-side data management for the playground UI.

Architecture Overview

graph TD
    subgraph Application_Layer["Application Layer"]
        PySDK["Python SDK"]
        TsSDK["TypeScript SDK"]
    end
    
    subgraph Instrumentation["Instrumentation"]
        LangGraph["LangGraph"]
        ClaudeAgent["Claude Agent SDK"]
        LlamaIndex["LlamaIndex"]
        OpenAI["OpenAI"]
    end
    
    subgraph Export["OTLP Export"]
        OTLP["OTLP Endpoint<br/>:4318"]
    end
    
    subgraph Backend["OpenLIT Backend"]
        Processor["Data Processor"]
        Storage["ClickHouse"]
    end
    
    subgraph Frontend["Frontend Client"]
        Client["Playground UI"]
        APIClient["API Client"]
    end
    
    PySDK -->|HTTP/gRPC| OTLP
    TsSDK -->|HTTP/gRPC| OTLP
    LangGraph --> PySDK
    ClaudeAgent --> PySDK
    OpenAI --> PySDK
    LlamaIndex --> TsSDK
    OTLP --> Processor
    Processor --> Storage
    Storage --> APIClient
    APIClient --> Client

Tracing Data Flow

Python SDK Tracing Architecture

The Python SDK provides comprehensive tracing capabilities through the OpenTelemetry SDK integration. The tracing module (tracing.py) establishes the foundation for all trace collection and export operations.

Core Tracing Components:

| Component | Purpose | Location |
|---|---|---|
| TracerProvider | Manages trace creation and propagation | sdk/python/src/openlit/otel/tracing.py |
| SpanProcessor | Processes individual spans before export | sdk/python/src/openlit/otel/tracing.py |
| OTLPExporter | Exports spans to OTLP endpoint | sdk/python/src/openlit/otel/tracing.py |
| ContextPropagation | Maintains trace context across async operations | sdk/python/src/openlit/otel/tracing.py |

The tracing initialization follows a standard pattern:

import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

This initialization configures the tracer provider with the specified OTLP endpoint, enabling automatic span collection from all instrumented LLM frameworks.

Sources: sdk/python/src/openlit/otel/tracing.py

Span Lifecycle

Spans are created and managed through a structured lifecycle that ensures complete telemetry capture:

sequenceDiagram
    participant App as Application Code
    participant SDK as OpenLIT SDK
    participant Inst as Instrumentation
    participant Exporter as OTLP Exporter
    participant Backend as OpenLIT Backend
    
    App->>Inst: LLM/Framework Call
    Inst->>SDK: Create Span
    SDK->>SDK: Set Attributes
    SDK->>SDK: Record Metrics
    App->>SDK: Response Received
    SDK->>SDK: Complete Span
    SDK->>Exporter: Export Span
    Exporter->>Backend: OTLP Stream

The span lifecycle includes:

  1. Creation: Span is initialized with parent context
  2. Attribute Setting: GenAI-specific attributes (model, tokens, cost) are attached
  3. Timing: Start and end times are recorded for duration calculation
  4. Status: Span status is set based on success/failure
  5. Export: Spans are batched and exported to OTLP endpoint

Sources: sdk/python/src/openlit/instrumentation/langgraph/__init__.py
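
The lifecycle above maps directly onto the OpenTelemetry Python API. A minimal sketch, assuming a tracer configured by openlit.init() and illustrative gen_ai.* attribute values:

from opentelemetry import trace
from opentelemetry.trace import SpanKind, Status, StatusCode

tracer = trace.get_tracer(__name__)

# 1. Creation: the span starts under the current parent context
with tracer.start_as_current_span("chat gpt-3.5-turbo", kind=SpanKind.CLIENT) as span:
    # 2. Attribute setting: attach GenAI-specific attributes
    span.set_attribute("gen_ai.request.model", "gpt-3.5-turbo")
    try:
        ...  # 3. Timing: the instrumented LLM call runs here
        span.set_status(Status(StatusCode.OK))     # 4. Status on success
    except Exception as exc:
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR))  # 4. Status on failure
        raise
# 5. Export: the configured span processor batches and exports the span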

Instrumentation Framework Integration

OpenLIT provides instrumentation for multiple LLM frameworks, each with framework-specific span attributes:

Supported Instrumentations:

| Framework | Operations Traced | Semantic Convention |
|---|---|---|
| OpenAI | chat completions, embeddings | gen_ai.operation.type |
| LangGraph | execution, checkpointing, construction | framework + gen_ai |
| Claude Agent SDK | invoke_agent, execute_tool | gen_ai.operation.type |
| LlamaIndex | query_engine, retriever, document | retrieve + framework |

LangGraph Instrumentation Pattern:

The LangGraph instrumentation wraps execution operations with both sync and async variants:

# From langgraph/__init__.py
def _wrap_execution_operations(self, operations, ...):
    for module, method, operation_type, sync_type in operations:
        if sync_type == "async":
            wrapper = async_general_wrap(operation_type, ...)
        else:
            wrapper = general_wrap(operation_type, ...)

This pattern ensures consistent telemetry regardless of whether the underlying framework uses synchronous or asynchronous execution models.

Sources: sdk/python/src/openlit/instrumentation/langgraph/__init__.py

Metrics Data Flow

Metrics Collection Architecture

The metrics module handles quantitative measurements that complement trace data. Metrics provide aggregated views of system performance, cost, and usage patterns.

Metrics Data Points:

| Metric Type | Description | Aggregation |
|---|---|---|
| Request Count | Total number of LLM requests | Count |
| Token Usage | Input/output tokens consumed | Sum |
| Cost | Calculated cost based on pricing | Sum |
| Latency | Request duration in milliseconds | Histogram |
| Error Rate | Failed requests percentage | Ratio |

Sources: sdk/python/src/openlit/otel/metrics.py

Metric Recording Flow

Metrics are recorded during span processing using the OpenTelemetry Metrics API:

graph LR
    A[LLM Request] --> B[Create Span]
    B --> C[Extract Request Data]
    C --> D[Calculate Pricing]
    D --> E[Record Metrics]
    E --> F[Complete Span]
    
    G[Pricing Info] --> D
    H[Model Config] --> D

The metric recording includes:

  • start_time and end_time for duration calculation
  • request_model for token and pricing lookup
  • environment and application_name for filtering
  • pricing_info dictionary for cost calculation

Sources: sdk/python/src/openlit/instrumentation/openai/async_openai.py
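
A sketch of that recording step using the OpenTelemetry Metrics API directly; the instrument names and attributes are illustrative (OpenLIT's own instruments live in sdk/python/src/openlit/otel/metrics.py):

import time
from opentelemetry import metrics

meter = metrics.get_meter(__name__)
duration_hist = meter.create_histogram("gen_ai.client.operation.duration", unit="s")
token_counter = meter.create_counter("gen_ai.client.token.usage")

start_time = time.time()
...  # the instrumented LLM request executes here
end_time = time.time()

attrs = {"request_model": "gpt-3.5-turbo", "environment": "default",
         "application_name": "chat-service"}
duration_hist.record(end_time - start_time, attributes=attrs)
token_counter.add(42, attributes=attrs)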

Client-Side Data Management

Frontend API Client Architecture

The frontend client manages data fetching and state management for the playground UI. The API client layer provides a typed interface to the backend services.

API Client Structure:

// Simplified from request/index.ts
export class RequestClient {
  async fetchTraces(params: TraceParams): Promise<Trace[]>;
  async fetchMetrics(params: MetricParams): Promise<Metrics>;
  async fetchSpans(traceId: string): Promise<Span[]>;
}

Key Data Operations:

| Operation | Endpoint | Purpose |
|---|---|---|
| Fetch Traces | /api/traces | List traces with filtering |
| Fetch Spans | /api/traces/:id/spans | Get detailed span data |
| Fetch Metrics | /api/metrics | Aggregated metrics data |
| Export Data | /api/openground/models/export | Export pricing data |

Sources: src/client/src/lib/platform/request/index.ts

ClickHouse Data Access

The client uses ClickHouse as the primary data store and accesses it through helper functions that construct and execute queries.

Query Helper Functions:

| Function | Purpose |
|---|---|
| buildTraceQuery() | Construct trace listing query |
| buildSpanQuery() | Construct span detail query |
| applyFilters() | Apply time range and attribute filters |
| parseResponse() | Parse ClickHouse response format |

Sources: src/client/src/lib/platform/clickhouse/helpers.ts
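
The helpers themselves are TypeScript, but the query-building idea can be sketched briefly in Python; the table name, column names, and function signature below are assumptions for illustration (ClickHouse's {name:Type} parameter binding is real):

def build_trace_query(service, start_ts, end_ts, limit=50):
    """Hypothetical analogue of buildTraceQuery() combined with applyFilters()."""
    filters = ["Timestamp BETWEEN {start:DateTime} AND {end:DateTime}"]
    params = {"start": start_ts, "end": end_ts, "limit": limit}
    if service:
        filters.append("ServiceName = {service:String}")
        params["service"] = service
    sql = ("SELECT TraceId, ServiceName, SpanName, Duration FROM otel_traces "
           "WHERE " + " AND ".join(filters) +
           " ORDER BY Timestamp DESC LIMIT {limit:UInt32}")
    return sql, params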

State Management Pattern

The frontend uses React Query or similar state management for data fetching:

graph TD
    A[Component Mount] --> B[Trigger Query]
    B --> C[Show Loading State]
    C --> D{Request Complete?}
    D -->|Yes| E[Update Cache]
    E --> F[Render Data]
    D -->|No| G[Show Error]
    G --> H[Retry Option]

The state management includes:

  • Loading states: Visual feedback during data fetch
  • Error handling: Graceful degradation on failures
  • Cache invalidation: Automatic refresh on mutations
  • Pagination: Support for large result sets with "Load More" patterns

Sources: src/client/src/components/(playground)/agents/version-drawer.tsx

Timeline View Data Structure

Span Timeline Rendering

The timeline view component renders trace data as a visual timeline, parsing span data into a hierarchical structure.

Span Data Model:

interface SpanData {
  spanId: string;
  parentSpanId?: string;
  startTime: number;
  endTime: number;
  name: string;
  kind: 'client' | 'server' | 'producer' | 'consumer';
  status: 'ok' | 'error';
  attributes: Record<string, any>;
  duration: number;
  cost?: number;
}

Timeline Calculation:

| Column | Width | Content |
|---|---|---|
| Name Column | 30% | Span name and kind indicator |
| Timeline Column | 60% | Visual timeline bar |
| Stats Column | 10% | Duration and cost |

The timeline calculates relative positions using traceWindowMs to determine the overall trace window, then positions each span proportionally within that window.

Sources: src/client/src/components/(playground)/request/components/timeline-view.tsx
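
The proportional-positioning math is simple; a short sketch (in Python for illustration, though the component itself is TypeScript):

def timeline_position(span_start_ms, span_end_ms, trace_start_ms, trace_window_ms):
    """Return (left_offset_pct, width_pct) for a span bar within the trace window."""
    left = (span_start_ms - trace_start_ms) / trace_window_ms * 100
    width = (span_end_ms - span_start_ms) / trace_window_ms * 100
    return left, width

# A span running from 200 ms to 500 ms in a 1000 ms trace window
print(timeline_position(200, 500, 0, 1000))  # (20.0, 30.0)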

TypeScript SDK Data Flow

LlamaIndex Instrumentation

The TypeScript SDK provides similar capabilities for JavaScript/TypeScript applications, particularly for LlamaIndex integration.

LlamaIndex Traced Operations:

| Operation | Semantic Convention | Description |
|---|---|---|
| document_load | retrieve | Document loading operations |
| document_split | framework | Text splitting/chunking |
| retriever_retrieve | retrieve | Retrieval operations |
| query_engine_query | retrieve | Query execution |
| response_synthesize | chat | Response generation |

Sources: sdk/typescript/src/instrumentation/llamaindex/index.ts

TypeScript Initialization Pattern

import openlit from 'openlit';

// Initialize with OTLP endpoint
openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

// Or use environment variable
// OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"

Environment Configuration

Data Flow Configuration Options

| Environment Variable | Default | Purpose |
|---|---|---|
| OTEL_EXPORTER_OTLP_ENDPOINT | http://127.0.0.1:4318 | OTLP gRPC endpoint |
| OTEL_EXPORTER_OTLP_PROTOCOL | grpc | Protocol (grpc/http/proto) |
| OTEL_SERVICE_NAME | default | Service identification |
| OTEL_EXPORTER_OTLP_HEADERS | - | Authentication headers |

Sources: src/client/src/app/(playground)/getting-started/page.tsx

Data Management Best Practices

Efficient Data Handling

  1. Batching: Spans are batched before export to reduce network overhead
  2. Sampling: Configure appropriate sampling rates for high-volume applications
  3. Filtering: Apply attribute filters at the query layer to reduce data transfer
  4. Pagination: Use paginated queries for large result sets

Error Handling Flow

graph TD
    A[Span Error] --> B[Record Exception]
    B --> C[Set Span Status ERROR]
    C --> D[Record Error Metrics]
    D --> E[Export Span]
    E --> F{Backend Available?}
    F -->|Yes| G[Store Data]
    F -->|No| H[Retry Queue]
    H -->|Retry| G

The error handling ensures that even when backend connectivity fails, error information is preserved for debugging.

Summary

The data flow in OpenLIT follows a well-structured pipeline from SDK instrumentation through to frontend visualization. Key aspects include:

  • Unified Telemetry: Both traces and metrics are collected through OpenTelemetry SDKs
  • Framework Integration: Multiple LLM frameworks are automatically instrumented
  • Efficient Export: OTLP protocol ensures standardized data transfer
  • Flexible Storage: ClickHouse provides scalable storage and querying
  • Responsive UI: The playground client efficiently fetches and displays telemetry data

This architecture enables comprehensive observability for GenAI applications while maintaining performance and scalability through batching, caching, and pagination strategies.


Python SDK Architecture

Related topics: TypeScript SDK Architecture, Go SDK Architecture, LLM and Framework Integrations

Overview

The OpenLIT Python SDK is an OpenTelemetry-native observability tool for GenAI and LLM applications. It integrates into AI applications through an auto-instrumentation framework, capturing OpenTelemetry traces and metrics automatically with no manual instrumentation required.

Core responsibilities include:

  • Auto-instrumenting mainstream AI SDKs (OpenAI, Anthropic, LangChain, CrewAI, and others)
  • Following the OTel GenAI Semantic Conventions
  • Providing OpenTelemetry-based tracing and metrics collection
  • Implementing production-grade guardrails (content safety, auditing)

Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:1-15

Core Architecture Components

graph TD
    subgraph "OpenLIT Python SDK"
        A["openlit.init()"]
        B["Instrumentors<br/>BaseInstrumentor"]
        C["Guard System"]
        D["OTel Layer"]
    end
    
    subgraph "Instrumented Frameworks"
        E["OpenAI"]
        F["Anthropic"]
        G["Claude Agent SDK"]
        H["LangChain / CrewAI"]
        I["Google ADK"]
        J["Agent Framework"]
    end
    
    subgraph "OpenTelemetry Backend"
        K["OTLP Exporter"]
        L["Traces"]
        M["Metrics"]
    end
    
    A --> B
    A --> C
    B --> D
    C --> D
    D --> K
    K --> L
    K --> M
    
    B --> E
    B --> F
    B --> G
    B --> H
    B --> I
    B --> J

Component Descriptions

| Component | Location | Responsibility |
|---|---|---|
| Instrumentors | openlit.instrumentation.* | Auto-instrumentation implementations for each AI framework |
| Guard System | openlit.guard.* | Content safety, auditing, and compliance checks |
| OTel Layer | openlit.otel.* | Core implementation of OpenTelemetry traces and metrics |
| Config | openlit._config | Global configuration management and metrics dictionary |
| Semcov | openlit.semcov | GenAI semantic convention constant definitions |

Initialization Flow

Python SDK Initialization

import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

During initialization, the SDK performs the following steps:

  1. Configure the OpenTelemetry tracer provider
  2. Load global configuration (environment, application name, metric toggles)
  3. Register instrumentors for all installed dependencies
  4. Initialize the guard pipeline (if configured)

Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:30-42

Configuration Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| otlp_endpoint | str | "http://127.0.0.1:4318" | OTLP gRPC endpoint |
| environment | str | "default" | Deployment environment identifier |
| application_name | str | "default" | Application name |
| pricing_info | dict | {} | Model pricing information |
| capture_message_content | bool | False | Whether to capture message content |
| metrics | dict | None | Metrics configuration dictionary |
| disable_metrics | bool | None | Disable metrics collection |
| guards | list | None | List of guard configurations |
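
A sketch combining several of these parameters (the values are placeholders):

import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    environment="production",
    application_name="chat-service",
    capture_message_content=False,
)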

Instrumentation System Architecture

BaseInstrumentor Pattern

All framework instrumentors inherit from BaseInstrumentor and follow a unified pattern:

class ClaudeAgentSDKInstrumentor(BaseInstrumentor):
    def instrumentation_dependencies(self) -> Collection[str]:
        return _instruments  # e.g. ("claude-agent-sdk >= 0.1.0",)
    
    def _instrument(self, **kwargs):
        # 1. Obtain the tracer and configuration
        tracer = trace.get_tracer(__name__)
        
        # 2. Wrap the target functions using wrapt
        wrap_function_wrapper(
            "module.path",
            "function_name",
            wrap_query
        )

Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:27-45

Instrumentation Coverage

| Framework | Supported Version | Traced Operations |
|---|---|---|
| Claude Agent SDK | >= 0.1.0 | invoke_agent, execute_tool |
| Google ADK | - | execute_tool |
| Agent Framework | - | agent_init, agent_run, tool_execute, workflow_run |
| CrewAI | - | Agent and Tool invocations |
| LangGraph | - | Graph node execution |

Span Naming Conventions

Span names are normalized according to the OTel GenAI semantic conventions:

| Operation Type | Span Name Format | Example |
|---|---|---|
| Agent creation | create_agent {name} | create_agent my_agent |
| Agent invocation | invoke_agent {name} | invoke_agent my_agent |
| Tool execution | execute_tool {name} | execute_tool calculator |
| Workflow | invoke_workflow {name} | invoke_workflow pipeline |

Sources: sdk/python/src/openlit/instrumentation/agent_framework/utils.py:1-60

Semantic Convention Attributes

All spans follow the gen_ai.* semantic conventions:

| Attribute Key | Description | Example Value |
|---|---|---|
| gen_ai.operation.name | Operation type | invoke_agent, execute_tool |
| gen_ai.operation.type | Operation category | agent, tool |
| gen_ai.system | AI system | openai, anthropic, google.adk |
| gen_ai.provider.name | Provider name | google |
| gen_ai.tool.name | Tool name | calculator |
| gen_ai.tool.type | Tool type | function |
| gen_ai.tool.description | Tool description | Truncated description text |
| gen_ai.tool.call.arguments | Tool call arguments | JSON string |

Sources: sdk/python/src/openlit/instrumentation/google_adk/utils.py:1-50

Guard System

OpenLIT provides production-grade guardrails for LLM application safety:

import openlit

openlit.init(guards=[openlit.PII(action="redact")])

Available Guard Types

| Guard Class | Location | Function |
|---|---|---|
| PII | openlit.guard.pii | PII detection and redaction |
| PromptInjection | openlit.guard.prompt_injection | Prompt injection attack detection |
| SensitiveTopic | openlit.guard.sensitive_topic | Sensitive topic detection |
| TopicRestriction | openlit.guard.topic_restriction | Topic restriction |
| Moderation | openlit.guard.moderation | Content moderation |
| Schema | openlit.guard.schema | Output structure validation |
| Custom | openlit.guard.custom | Custom guard logic |

Guard Core Types

from openlit.guard import (
    Guard,
    GuardAction,
    GuardConfigError,
    GuardDeniedError,
    GuardPhase,
    GuardResult,
    GuardTimeoutError,
    PipelineResult,
)

| Type | Description |
|---|---|
| Guard | Base guard class |
| GuardAction | Guard enforcement action |
| GuardPhase | Execution phase (pre/post) |
| GuardResult | Guard execution result |
| PipelineResult | Aggregated pipeline result |

Sources: sdk/python/src/openlit/guard/__init__.py:1-60

Pipeline Mechanism

Guards execute in order through a Pipeline:

from openlit.guard import Pipeline, PII, PromptInjection, Moderation

pipeline = Pipeline([
    PII(action="redact"),
    PromptInjection(threshold=0.8),
    Moderation()
])

Claude Agent SDK Instrumentation in Detail

Architecture Design

sequenceDiagram
    participant User as User Code
    participant SDK as Claude Agent SDK
    participant Wrap as wrap_query
    participant Hook as _ToolSpanTracker
    participant Span as OTel Span
    
    User->>SDK: query(...)
    SDK->>Wrap: invoke wrapper
    Wrap->>Span: create invoke_agent span
    Wrap->>SDK: proceed with query
    SDK->>Hook: PreToolUse event
    Hook->>Span: create execute_tool span
    SDK->>Hook: PostToolUse event
    Hook->>Span: finalize tool span
    SDK-->>Wrap: response
    Wrap->>Span: finalize agent span
    Wrap-->>User: return response

Tool Span Tracking

The _ToolSpanTracker class manages in-flight tool spans:

class _ToolSpanTracker:
    """Manages in-flight tool spans created by SDK hooks."""
    
    def __init__(
        self,
        tracer,
        parent_span,
        version,
        environment,
        application_name,
        capture_message_content
    ):
        # Initialize the tracker

Fallback Mechanism

When SDK hooks cannot be injected, a message-stream fallback is used:

# Check whether hooks were injected
if hasattr(client, _HOOKS_INJECTED_ATTR):
    ...  # trace via SDK hooks
else:
    ...  # fall back to message-stream tracing

Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/claude_agent_sdk.py:1-80

OpenTelemetry Integration

Tracing Implementation

The SDK creates spans using the OpenTelemetry Python API:

from opentelemetry import trace
from opentelemetry.trace import SpanKind, Status, StatusCode

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span(
    name="invoke_agent",
    kind=SpanKind.CLIENT
) as span:
    span.set_attribute("gen_ai.operation.name", "invoke_agent")
    ...  # perform the operation

Metrics Implementation

The following metric types are supported:

| Metric Type | Metric Name | Description |
|---|---|---|
| Counter | gen_ai.*.token_usage | Token usage count |
| Histogram | gen_ai.*.duration | Request duration distribution |
| Gauge | - | Currently active request count |
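
A sketch of how such instruments are created and recorded with the OpenTelemetry Metrics API; the instrument names in the table are abbreviated with gen_ai.*, so the full names below are illustrative:

from opentelemetry import metrics

meter = metrics.get_meter("openlit")

token_usage = meter.create_counter(
    "gen_ai.total.requests.token_usage", description="Token usage count"
)
duration = meter.create_histogram(
    "gen_ai.request.duration", unit="s", description="Request duration distribution"
)

token_usage.add(128, attributes={"gen_ai.request.model": "gpt-4"})
duration.record(0.42, attributes={"gen_ai.request.model": "gpt-4"})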

Semantic Convention Constants

All semantic convention constants are defined centrally in the openlit.semcov module:

class SemanticConvention:
    GEN_AI_OPERATION = "gen_ai.operation.name"
    GEN_AI_SYSTEM = "gen_ai.system"
    GEN_AI_TOOL_NAME = "gen_ai.tool.name"
    GEN_AI_TOOL_TYPE = "gen_ai.tool.type"
    GEN_AI_SYSTEM_VALUE = "gen_ai.system.openai"

Error Handling

Exception Propagation

The SDK uses a unified exception-handling mechanism:

from openlit.__helpers import handle_exception

def some_wrapper(func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except Exception as e:
        handle_exception(span, e)
        raise

Guard-Specific Errors

| Error Type | Description |
|---|---|
| GuardError | Base guard error |
| GuardDeniedError | Guard denied the request |
| GuardTimeoutError | Guard execution timed out |
| GuardConfigError | Guard configuration error |

Usage Examples

Basic Integration

from openai import OpenAI
import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI(api_key="YOUR_OPENAI_KEY")

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-3.5-turbo"
)

Integration with Guards

import openlit
from openlit.guard import PII, PromptInjection

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    guards=[
        PII(action="redact"),
        PromptInjection(threshold=0.7)
    ]
)

Environment Variable Configuration

export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"

import openlit

openlit.init()  # automatically reads the environment variable

Extension Development

Custom Instrumentor

from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from wrapt import wrap_function_wrapper

class CustomSDKInstrumentor(BaseInstrumentor):
    def instrumentation_dependencies(self):
        return ("custom-sdk >= 1.0.0",)
    
    def _instrument(self, **kwargs):
        tracer = kwargs.get("tracer")
        wrap_function_wrapper(
            "custom_sdk",
            "Client.query",
            wrap_custom_query
        )

Custom Guard

from openlit.guard import Guard, GuardAction, GuardResult

class CustomGuard(Guard):
    def _evaluate(self, text: str) -> GuardResult:
        # Custom detection logic
        if "forbidden" in text.lower():
            return GuardResult(
                action=GuardAction.DENY,
                reason="Forbidden content detected"
            )
        return GuardResult(action=GuardAction.ALLOW)

Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:1-15

TypeScript SDK Architecture

Related topics: Python SDK Architecture, Go SDK Architecture, LLM and Framework Integrations

Overview

The OpenLIT TypeScript SDK provides an OpenTelemetry-native observability solution for GenAI and LLM applications. It enables developers to instrument their TypeScript/JavaScript applications with automatic tracing and metrics collection, forwarding telemetry data to OpenLIT or any OTLP-compatible backend.

Key Characteristics:

| Attribute | Value |
|---|---|
| Package Name | openlit |
| Installation | npm install openlit |
| Entry Point | sdk/typescript/src/index.ts |
| Primary Dependency | OpenTelemetry SDK |
| Transport Protocol | OTLP (OpenTelemetry Protocol) |

Sources: sdk/typescript/package.json

Core Architecture

The SDK follows a modular architecture with clear separation of concerns:

graph TD
    A[Application Code] --> B[openlit.init]
    B --> C[Config Module]
    C --> D[Instrumentation Module]
    D --> E[Guard Module]
    E --> F[OTLP Exporter]
    F --> G[OpenLIT Backend / OTEL Collector]
    
    C --> C1[OTLP Endpoint]
    C --> C2[Custom Attributes]
    C --> C3[Service Name]
    
    D --> D1[LLM Instrumentation]
    D --> D2[Vector DB Instrumentation]
    D --> D3[Framework Hooks]

Entry Point Module

The main entry point (index.ts) exposes a simple initialization API:

import openlit from 'openlit';

openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

Sources: sdk/typescript/src/index.ts

Configuration Module

The config module (config.ts) handles SDK configuration including:

| Parameter | Type | Default | Description |
|---|---|---|---|
| otlpEndpoint | string | Environment variable OTEL_EXPORTER_OTLP_ENDPOINT | OTLP-compatible endpoint URL |
| serviceName | string | Application-defined | Name of the instrumented service |
| resourceAttributes | Record<string, string> | {} | Custom resource attributes |
Sources: sdk/typescript/src/config.ts

Instrumentation Subsystem

The instrumentation module (instrumentation/index.ts) provides automatic observability for AI workloads:

Supported Integrations

| Category | Instrumented Components |
|---|---|
| LLM Providers | OpenAI, Anthropic, Azure OpenAI, Google AI, AWS Bedrock, Cohere, Ollama |
| Vector Databases | ChromaDB, Pinecone, Weaviate, Qdrant, Milvus, PGVector |
| Frameworks | LangChain, LlamaIndex, LangFlow, AutoGen |

Sources: sdk/typescript/src/instrumentation/index.ts

Tracing Capabilities

The SDK automatically captures:

  • LLM Request/Response traces with prompt and completion data
  • Token usage metrics (prompt tokens, completion tokens, total tokens)
  • Latency measurements for API calls
  • Embeddings generation traces with vector dimensions
  • Tool/function calling traces with parameters and results

Guard Module

The guard module (guard/index.ts) provides safety and compliance features:

import openlit from 'openlit';

// Initialize the SDK; guard checks hook into instrumented LLM calls
openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

Guard capabilities include:

  • Input/output validation for LLM interactions
  • Content filtering hooks
  • Rate limiting enforcement
  • Custom rule application

Sources: sdk/typescript/src/guard/index.ts

Initialization Flow

sequenceDiagram
    participant App as Application
    participant SDK as OpenLIT SDK
    participant Config as Config Module
    participant Inst as Instrumentation
    participant OTEL as OTEL SDK
    
    App->>SDK: openlit.init(options)
    SDK->>Config: Validate & merge config
    Config->>Config: Check env vars
    Config-->>SDK: Resolved config
    SDK->>OTEL: Initialize OTEL SDK
    SDK->>Inst: Register instrumentations
    Inst->>OTEL: Add span processors
    OTEL-->>SDK: Ready
    SDK-->>App: Initialization complete

Environment Variable Support

The SDK supports configuration via environment variables as an alternative to programmatic configuration:

| Environment Variable | Description |
|---|---|
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint URL |
| OTEL_SERVICE_NAME | Service name for traces |

Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:42

Usage Patterns

Basic Initialization

import openlit from 'openlit';

openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

OpenAI Integration Example

import OpenAI from 'openai';
import openlit from 'openlit';

openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" });

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const chatCompletion = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'What is LLM Observability?' }],
  model: 'gpt-3.5-turbo',
});

Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:28-39

Package Dependencies

Key dependencies in package.json:

{
  "dependencies": {
    "@opentelemetry/sdk-node": "^0.50.0",
    "@opentelemetry/exporter-trace-otlp-http": "^0.50.0",
    "@opentelemetry/resources": "^1.22.0",
    "@opentelemetry/semantic-conventions": "^1.22.0"
  }
}

Sources: sdk/typescript/package.json

Design Principles

  1. Zero-Configuration Defaults: The SDK works out-of-the-box with sensible defaults
  2. OpenTelemetry Native: Built on OTEL SDK for vendor-agnostic telemetry export
  3. Automatic Instrumentation: No code changes required for supported libraries
  4. Environment Variable Fallback: Configuration can be entirely environment-based
  5. Minimal Footprint: Instrumentation adds minimal latency overhead

Summary

The OpenLIT TypeScript SDK architecture provides a developer-friendly interface for adding observability to GenAI applications. By abstracting OpenTelemetry complexity and providing automatic instrumentation for popular LLM providers and vector databases, it enables comprehensive telemetry collection with minimal configuration. The SDK exports all data via OTLP, ensuring compatibility with OpenLIT's backend as well as any other OTEL-compatible observability platform.

Sources: [sdk/typescript/package.json](https://github.com/openlit/openlit/blob/main/sdk/typescript/package.json)

Go SDK Architecture

Related topics: Python SDK Architecture, TypeScript SDK Architecture

Overview

The OpenLIT Go SDK is a lightweight instrumentation library that enables observability for GenAI applications built with Go. It provides automatic tracing and metrics collection for LLM calls, supporting OpenAI and Anthropic providers out of the box. The SDK follows OpenTelemetry-native principles, allowing seamless integration with the OpenLIT observability platform.

Core Components

The Go SDK is organized into several key packages:

| Component | Purpose |
|---|---|
| openlit | Core initialization, configuration, and shutdown |
| openlit.Config | Central configuration struct for SDK settings |
| openlit.EvaluateRule() | Standalone rule engine evaluation function |
| instrumentation/openai | OpenAI client instrumentation |
| instrumentation/anthropic | Anthropic client instrumentation |

Initialization Flow

The SDK must be initialized before instrumenting any LLM clients. The initialization process configures the OTLP endpoint and establishes the connection to the OpenLIT backend.

err := openlit.Init(openlit.Config{
    OtlpEndpoint:    "http://127.0.0.1:4318",
    Environment:     "production",
    ApplicationName: "my-go-app",
})
if err != nil {
    log.Fatalf("Failed to initialize OpenLIT: %v", err)
}
defer openlit.Shutdown(context.Background())

Sources: sdk/go/README.md

Configuration Options

The openlit.Config struct provides the following configuration parameters:

| Parameter | Type | Description |
|---|---|---|
| OtlpEndpoint | string | OTLP collector endpoint (default: http://127.0.0.1:4318) |
| Environment | string | Deployment environment name |
| ApplicationName | string | Application identifier for grouping traces |
| PricingInfo | map[string]ModelPricing | Custom pricing configuration per model |
| OtlpHeaders | map[string]string | Custom headers for OTLP exports |

Custom Pricing Configuration

The SDK supports custom pricing information for models that require non-default cost calculations:

config := openlit.Config{
    PricingInfo: map[string]openlit.ModelPricing{
        "gpt-4-custom": {
            InputCostPerToken:  0.00003,
            OutputCostPerToken: 0.00006,
        },
    },
}

Sources: sdk/go/README.md

Custom Headers for OTLP Exports

Authentication and custom headers can be added to OTLP exports:

config := openlit.Config{
    OtlpHeaders: map[string]string{
        "Authorization": "Bearer token",
        "X-Custom-Header": "value",
    },
}

Sources: sdk/go/README.md

Instrumentation Architecture

The SDK uses a decorator/wrapper pattern for instrumenting LLM clients. This approach allows automatic tracing without modifying the original client interface.

graph TD
    A[User Application] --> B[Instrumented Client]
    B --> C[Original SDK Client]
    B --> D[OpenLIT Tracer]
    D --> E[OTLP Exporter]
    E --> F[OpenLIT Backend]
    C --> G[LLM Provider API]
    G --> C

OpenAI Instrumentation

The OpenAI instrumentation wraps the sashabaranov/go-openai client:

import (
    "github.com/openlit/openlit/sdk/go/instrumentation/openai"
    openai_sdk "github.com/sashabaranov/go-openai"
)

// Create and instrument OpenAI client
client := openai_sdk.NewClient("your-api-key")
instrumentedClient := openai.Instrument(client)

// Use as normal - automatically traced!
resp, err := instrumentedClient.CreateChatCompletion(ctx, openai_sdk.ChatCompletionRequest{
    Model: openai_sdk.GPT4,
    Messages: []openai_sdk.ChatCompletionMessage{
        {
            Role:    openai_sdk.ChatMessageRoleUser,
            Content: "Hello!",
        },
    },
})

Sources: sdk/go/README.md

Anthropic Instrumentation

The Anthropic instrumentation follows the same pattern:

import (
    "github.com/openlit/openlit/sdk/go/instrumentation/anthropic"
)

// Create and instrument Anthropic client
client := anthropic.NewClient("your-api-key")
instrumentedClient := anthropic.Instrument(client)

Rule Engine Integration

The SDK provides a standalone rule evaluation function that does not require initialization:

// EvaluateRule does NOT require openlit.Init()
rules, err := openlit.EvaluateRule(ctx, &openlit.EvaluateRuleRequest{
    TraceAttributes: attributes,
})

This function evaluates trace attributes against the OpenLIT Rule Engine to retrieve matching rules and associated entities including contexts, prompts, and evaluation configurations.

Sources: sdk/go/README.md

Integration with OpenLIT Dashboard

The complete observability workflow involves:

  1. Start OpenLIT Stack: Deploy using Docker Compose:

docker compose up -d

  2. Configure SDK: Initialize the Go SDK with the OTLP endpoint:

openlit.Init(openlit.Config{
    OtlpEndpoint: "http://localhost:4318",
})

  3. View Traces: Access the dashboard at http://localhost:3000

Sources: sdk/go/README.md

Example Projects

The SDK includes complete working examples in the examples/ directory:

| Example | Path |
|---|---|
| OpenAI Chat Completion | examples/openai/chat/ |
| OpenAI Streaming | examples/openai/streaming/ |
| Anthropic Messages | examples/anthropic/messages/ |
| Anthropic Streaming | examples/anthropic/streaming/ |

Module Dependencies

The Go SDK depends on core OpenTelemetry packages for trace export and propagation:

  • OpenTelemetry OTLP exporter
  • OpenTelemetry trace propagation
  • Context propagation utilities

Sources: sdk/go/go.mod

Sources: [sdk/go/README.md](https://github.com/openlit/openlit/blob/main/sdk/go/README.md)

LLM and Framework Integrations

Related topics: Python SDK Architecture, TypeScript SDK Architecture

OpenLIT provides comprehensive instrumentation for a wide range of LLMs and AI frameworks, enabling automatic OpenTelemetry-native observability for GenAI applications. This page documents the architecture, supported integrations, and implementation patterns.

Overview

OpenLIT's instrumentation layer wraps SDK calls from various LLM providers and AI frameworks to automatically capture traces and metrics without requiring manual instrumentation code.

Supported Integrations

| Category | Integration | Python SDK | TypeScript SDK | Go SDK |
|---|---|---|---|---|
| LLM Providers | OpenAI | ✓ | ✓ | ✓ |
| LLM Providers | Anthropic | ✓ | ✓ | ✓ |
| LLM Providers | Azure OpenAI | ✓ | ✓ | - |
| LLM Providers | Vertex AI | ✓ | - | - |
| LLM Providers | Mistral AI | ✓ | - | - |
| LLM Providers | Cohere | ✓ | ✓ | - |
| LLM Providers | HuggingFace | ✓ | - | - |
| AI Frameworks | LangChain | ✓ | ✓ | - |
| AI Frameworks | LlamaIndex | ✓ | ✓ | - |
| AI Frameworks | CrewAI | ✓ | - | - |
| AI Frameworks | LangGraph | ✓ | - | - |
| AI Frameworks | Claude Agent SDK | ✓ | - | - |
| Vector Stores | Pinecone | ✓ | ✓ | - |
| Vector Stores | Chroma | ✓ | ✓ | - |
| Vector Stores | Qdrant | ✓ | ✓ | - |
| Vector Stores | Weaviate | ✓ | ✓ | - |

Sources: sdk/python/README.md

Architecture

Instrumentation Pattern

All instrumentations follow a consistent pattern based on OpenTelemetry's BaseInstrumentor class:

graph TD
    A[Application Code] --> B[Instrumented SDK]
    B --> C[Wrapper Function]
    C --> D[OpenTelemetry Tracer]
    C --> E[Metrics Recorder]
    D --> F[OTLP Exporter]
    E --> F
    F --> G[OpenLIT Backend]

Core Components

| Component | Purpose | Location |
|---|---|---|
| BaseInstrumentor | Base class for all instrumentors | opentelemetry.instrumentation.instrumentor |
| wrap_function_wrapper | Wraps SDK functions dynamically | wrapt library |
| OpenlitConfig | Singleton configuration management | sdk/python/src/openlit/_config.py |
| Semantic Conventions | Standardized attribute naming | openlit.semcov module |

Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:17-21

Python SDK Instrumentation

Instrumentor Base Class

All Python SDK instrumentors extend BaseInstrumentor and implement two required methods:

class ClaudeAgentSDKInstrumentor(BaseInstrumentor):
    """OTel GenAI semantic convention compliant instrumentor for Claude Agent SDK."""

    def instrumentation_dependencies(self) -> Collection[str]:
        return _instruments  # e.g., ("claude-agent-sdk >= 0.1.0",)

    def _instrument(self, **kwargs):
        # Initialize tracer, config, and wrap functions

Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:26-35

Initialization Parameters

When calling openlit.init(), the following parameters are passed to all instrumentors:

| Parameter | Type | Description | Default |
|---|---|---|---|
| environment | str | Deployment environment name | "default" |
| application_name | str | Application identifier | "default" |
| pricing_info | Dict[str, ModelPricing] | Custom model pricing | {} |
| capture_message_content | bool | Enable/disable content tracing | True |
| disable_metrics | bool | Disable metrics collection | None |
| otlp_endpoint | str | OTLP exporter endpoint | Configured endpoint |

Sources: sdk/python/src/openlit/_config.py:20-35

OpenlitConfig Singleton

The OpenlitConfig class manages centralized configuration:

class OpenlitConfig:
    """Singleton configuration class for OpenLIT."""
    
    _instance = None
    
    # Class-level attributes
    environment = "default"
    application_name = "default"
    pricing_info = {}
    metrics_dict = {}
    otlp_endpoint = None
    otlp_headers = None
    disable_batch = False
    capture_message_content = True

Sources: sdk/python/src/openlit/_config.py:18-42
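
The class body above omits how the single instance is enforced. A common pattern, shown here as an assumption rather than the SDK's verbatim code, is to override __new__ so every caller shares the same configuration object:

class _SingletonSketch:
    """Illustrative singleton enforcement, mirroring OpenlitConfig's _instance attribute."""

    _instance = None

    def __new__(cls):
        # Create the shared instance once; afterwards, always return it.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

config_a = _SingletonSketch()
config_b = _SingletonSketch()
assert config_a is config_b  # both names point at the same object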

LlamaIndex Integration

Operation Type Mapping

The LlamaIndex instrumentation uses a semantic convention-based operation mapping system:

graph LR
    A[Document Operations] --> B[RETRIEVE]
    A --> C[FRAMEWORK]
    D[Index Operations] --> C
    E[Query Operations] --> B
    F[Retriever Operations] --> B

Supported Operations

| Operation | Semantic Convention | Category |
|---|---|---|
| document_load | RETRIEVE | Document Loading |
| document_transform | FRAMEWORK | Document Processing |
| document_split | FRAMEWORK | Document Processing |
| index_construct | FRAMEWORK | Index Management |
| index_insert | FRAMEWORK | Index Management |
| query_engine_query | RETRIEVE | Query Engine |
| retriever_retrieve | RETRIEVE | Retrieval |

Sources: sdk/python/src/openlit/instrumentation/llamaindex/utils.py:1-30
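
A sketch of how such a mapping can be expressed; the dictionary name and the lowercase convention values below are assumptions for illustration, not the module's actual identifiers:

# Hypothetical mapping from LlamaIndex operation names to semantic-convention
# operation types, mirroring the table above.
OPERATION_MAP = {
    "document_load": "retrieve",
    "document_transform": "framework",
    "document_split": "framework",
    "index_construct": "framework",
    "index_insert": "framework",
    "query_engine_query": "retrieve",
    "retriever_retrieve": "retrieve",
}

def operation_type(operation: str) -> str:
    # Fall back to the generic framework category for unmapped operations.
    return OPERATION_MAP.get(operation, "framework")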

Helper Functions

Building Tool Definitions

The __helpers.py module provides utilities for extracting tool definitions from chat requests:

def build_tool_definitions(tools):
    """
    Extract tool/function definitions from a chat request's ``tools`` parameter.
    
    Supports both OpenAI-style schema and flat schema formats.
    """

Supported formats:

| Format | Structure |
|---|---|
| OpenAI-style | {"type": "function", "function": {...}} |
| Flat (dict) | {"name": ..., "description": ..., "parameters": ...} |
| Flat (object) | Object with name, description, input_schema attributes |

Sources: sdk/python/src/openlit/__helpers.py:1-40
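
A minimal sketch of the normalization the table implies, assuming a unified (name, description, parameters) output shape; the SDK's actual field handling may differ:

def build_tool_definitions(tools):
    """Normalize tool definitions into (name, description, parameters) dicts (sketch)."""
    definitions = []
    for tool in tools or []:
        if isinstance(tool, dict) and tool.get("type") == "function":
            # OpenAI-style: the real definition is nested under "function".
            fn = tool.get("function", {})
            definitions.append({
                "name": fn.get("name"),
                "description": fn.get("description"),
                "parameters": fn.get("parameters"),
            })
        elif isinstance(tool, dict):
            # Flat dict schema.
            definitions.append({
                "name": tool.get("name"),
                "description": tool.get("description"),
                "parameters": tool.get("parameters"),
            })
        else:
            # Flat object with name, description, input_schema attributes.
            definitions.append({
                "name": getattr(tool, "name", None),
                "description": getattr(tool, "description", None),
                "parameters": getattr(tool, "input_schema", None),
            })
    return definitions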

System Instructions Builder

Extracts and formats system instructions from various input formats:

def build_system_instructions(instructions, **kwargs):
    """Builds system instructions from various input formats."""

Guardrails Integration

OpenLIT includes a production-grade guardrails system:

Available Guards

| Guard Class | Purpose |
|---|---|
| PII | Detect and redact Personally Identifiable Information |
| PromptInjection | Detect prompt injection attacks |
| SensitiveTopic | Filter sensitive topics |
| TopicRestriction | Restrict to allowed topics |
| Moderation | Content moderation |
| Schema | Output schema validation |
| Custom | Custom guard implementation |

Sources: sdk/python/src/openlit/guard/__init__.py:1-30

Guard Architecture

graph TD
    A[User Input] --> B[Pipeline]
    B --> C[Guard 1: PII]
    C --> D[Guard 2: PromptInjection]
    D --> E[Guard N: Custom]
    E --> F[GuardResult]
    C -.->|Denied| G[GuardDeniedError]
    D -.->|Timeout| H[GuardTimeoutError]

Usage Example

import openlit

# Initialize with guards
openlit.init(guards=[openlit.PII(action="redact")])

# Or with direct imports
from openlit import PII, PromptInjection, Moderation

guards = [PII(), PromptInjection(), Moderation()]
openlit.init(guards=guards)
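
The architecture diagram above shows guards short-circuiting the pipeline with errors. A hedged sketch of handling those outcomes, assuming GuardDeniedError and GuardTimeoutError are importable from openlit and that a guard exposes a check method (the error names come from the diagram; the method name is a placeholder):

from openlit import PII, GuardDeniedError, GuardTimeoutError  # error imports assumed

guard = PII(action="redact")

try:
    safe_text = guard.check("My card number is 4111 1111 1111 1111")  # hypothetical method name
except GuardDeniedError:
    safe_text = "[input rejected by guard]"
except GuardTimeoutError:
    safe_text = "[guard timed out; failing closed]"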

TypeScript SDK Instrumentation

Wrapper Pattern

The TypeScript SDK uses a similar wrapping pattern:

// Wrapped in wrapper.ts for each integration
export function wrapOpenAI() {
  // Wrap OpenAI SDK methods
}

Sources: sdk/typescript/src/instrumentation/openai/wrapper.ts

Initialization

import openlit from 'openlit';

openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318"
});

Configuration Reference

Environment Variables

| Variable | Description | Example |
|---|---|---|
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint URL | http://127.0.0.1:4318 |
| OTEL_EXPORTER_OTLP_HEADERS | Authentication headers | Authorization=Bearer token |

SDK Configuration Options

import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    otlp_headers={"Authorization": "Bearer token"},
    environment="production",
    application_name="my-llm-app",
    pricing_info={
        "gpt-4": {"input_cost_per_token": 0.00003, "output_cost_per_token": 0.00006}
    },
    capture_message_content=True
)

Best Practices

1. Instrument Before Usage

Always initialize OpenLIT before importing instrumented SDKs:

# Correct order
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

from openai import OpenAI  # Now automatically instrumented

2. Custom Pricing

Define custom pricing for accurate cost tracking:

openlit.init(
    pricing_info={
        "custom-model": {
            "input_cost_per_token": 0.00001,
            "output_cost_per_token": 0.00002
        }
    }
)

3. Selective Content Capture

Disable content capture for sensitive data:

openlit.init(
    capture_message_content=False  # Won't trace message content
)

Sources: [sdk/python/README.md](https://github.com/openlit/openlit/blob/main/sdk/python/README.md)

OpenLIT Controller

Related topics: GPU Collector

Section Related Pages

Continue reading this section for the full explanation and source context.

Section Core Components

Continue reading this section for the full explanation and source context.

Section Linux (systemd)

Continue reading this section for the full explanation and source context.

Section Docker

Continue reading this section for the full explanation and source context.

Related topics: GPU Collector

OpenLIT Controller

The OpenLIT Controller is a standalone, lightweight binary agent designed to automatically instrument Python-based LLM applications with OpenLIT's observability SDK. It operates as a background service that runs alongside your application, providing seamless OpenTelemetry-native tracing and metrics collection without requiring code modifications.

Overview

The Controller serves as an autonomous agent that:

  • Discovers Python applications running in various environments (bare metal, containers, Kubernetes)
  • Injects the OpenLIT Python SDK into target applications at runtime
  • Manages the lifecycle of instrumentation (enable, disable, status monitoring)
  • Reports service metadata back to the OpenLIT platform

Sources: src/client/src/lib/platform/controller/features/agent.ts:1-60

Architecture

graph TD
    A[OpenLIT Platform] -->|Manage & Monitor| B[OpenLIT Controller]
    B -->|Discover Services| C[Scanner Module]
    B -->|Instrument Apps| D[Engine Module]
    D -->|Python SDK Injection| E[Python Runtime]
    E -->|Traces & Metrics| F[OpenTelemetry Collector]
    
    G[Kubernetes Pod] -->|Contains| H[Python Application]
    H -->|Auto-instrumented by| D
    
    I[Linux Host] -->|Systemd Service| B

Core Components

| Component | Location | Responsibility |
|---|---|---|
| cmd/controller | cmd/controller/main.go | Entry point, configuration, signal handling |
| Server | internal/server/handlers.go | HTTP API for platform communication |
| Engine | internal/engine/engine.go | Orchestrates instrumentation operations |
| Lifecycle | internal/engine/lifecycle.go | Manages enable/disable transitions |
| Python SDK Runtime | internal/engine/python_sdk_runtime.go | Runtime injection of Python SDK |
| Scanner | internal/scanner/scanner.go | Discovers Python applications |

Sources: src/client/src/lib/platform/controller/features/agent.ts:1-25

Supported Environments

The Controller supports multiple deployment scenarios:

| Environment | Installation Method | Status |
|---|---|---|
| Linux (systemd) | Direct binary download + systemd service | ✅ Primary |
| Docker | Privileged container with PID host mode | ✅ Supported |
| Kubernetes | DaemonSet or sidecar pattern | ✅ Supported |

Sources: src/client/src/app/(playground)/agents/no-controller.tsx:1-50

Installation

Linux (systemd)

Download the latest binary and configure as a systemd service:

curl -fsSL https://github.com/openlit/openlit/releases/latest/download/openlit-controller-linux-amd64 \
  -o /usr/local/bin/openlit-controller
chmod +x /usr/local/bin/openlit-controller

# Create systemd service
cat > /etc/systemd/system/openlit-controller.service << 'EOF'
[Unit]
Description=OpenLIT Controller
After=network.target

[Service]
Environment="OPENLIT_URL=${openlitUrl}"
Environment="OTEL_EXPORTER_OTLP_ENDPOINT=${openlitUrl.replace(/:\d+$/, ":4318")}"
Environment="OPENLIT_API_KEY=${apiKey}"
ExecStart=/usr/local/bin/openlit-controller
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now openlit-controller

Sources: src/client/src/app/(playground)/agents/no-controller.tsx:10-35

Docker

docker run -d --privileged --pid=host \
  -e OPENLIT_URL=http://openlit:3000 \
  -e OTEL_EXPORTER_OTLP_ENDPOINT=http://openlit:4318 \
  openlit-controller

Configuration

The Controller is configured via environment variables:

| Environment Variable | Description | Required |
|---|---|---|
| OPENLIT_URL | URL of the OpenLIT platform | Yes |
| OPENLIT_API_KEY | API key for authentication | No |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint for telemetry | Yes |

Sources: src/client/src/app/(playground)/agents/no-controller.tsx:15-25

Agent Operations

The Controller exposes three primary operations:

Enable Instrumentation

Activates OpenLIT SDK injection for target Python applications.

{
  "operation": "enable",
  "serviceId": "string"
}

Disable Instrumentation

Deactivates SDK injection and removes runtime hooks.

{
  "operation": "disable",
  "serviceId": "string"
}

Status Check

Retrieves current instrumentation state for a service.

{
  "operation": "status",
  "serviceId": "string"
}

Sources: src/client/src/lib/platform/controller/features/agent.ts:25-45
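
These payloads are sent to the Controller's HTTP server (internal/server/handlers.go). A sketch of issuing one from Python; the address and route below are hypothetical placeholders, not a documented endpoint:

import json
import urllib.request

payload = {"operation": "status", "serviceId": "checkout-api"}
req = urllib.request.Request(
    "http://127.0.0.1:8080/agent",  # hypothetical controller address and route
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))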

Service State Model

stateDiagram-v2
    [*] --> disabled: Initial State
    disabled --> enabled: enable operation
    enabled --> disabled: disable operation
    enabled --> manual: explicit override
    manual --> enabled: resume auto
    disabled --> manual: partial config
    manual --> disabled: full removal

State Definitions

| State | Description |
|---|---|
| enabled | SDK actively injecting traces |
| disabled | No instrumentation active |
| manual | User-controlled state (not auto-managed) |
| automatable | Service eligible for auto-instrumentation |

Sources: src/client/src/lib/platform/controller/features/agent.ts:15-30
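
A compact way to read the diagram is as a transition table. The sketch below encodes the allowed moves between the three managed states, using the names from the definitions above:

from enum import Enum

class ServiceState(str, Enum):
    ENABLED = "enabled"
    DISABLED = "disabled"
    MANUAL = "manual"

# Allowed transitions, mirroring the state diagram above.
TRANSITIONS = {
    ServiceState.DISABLED: {ServiceState.ENABLED, ServiceState.MANUAL},
    ServiceState.ENABLED: {ServiceState.DISABLED, ServiceState.MANUAL},
    ServiceState.MANUAL: {ServiceState.ENABLED, ServiceState.DISABLED},
}

def can_transition(current: ServiceState, target: ServiceState) -> bool:
    return target in TRANSITIONS[current]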

Python SDK Runtime Integration

The Controller's Python SDK Runtime module handles the actual SDK injection:

  1. Process Discovery: Identifies Python processes running user applications
  2. Runtime Injection: Injects OpenLIT SDK using Python's import hooks
  3. Configuration Propagation: Sets OTLP endpoint and API keys via environment
  4. Health Monitoring: Ensures instrumentation remains active

The runtime is specifically optimized for Python-only services:

supported: service.language_runtime === "python"

Sources: src/client/src/lib/platform/controller/features/agent.ts:20
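
The Controller itself is a Go binary, but the process-discovery step is easy to illustrate in Python with psutil; the heuristic below is an assumption for illustration, not the Scanner module's actual logic:

import psutil

def discover_python_services():
    # Flag processes whose interpreter looks like CPython; the real scanner
    # also inspects container and Kubernetes metadata.
    services = []
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        name = (proc.info["name"] or "").lower()
        if name.startswith("python"):
            services.append({
                "pid": proc.info["pid"],
                "cmdline": proc.info["cmdline"],
                "language_runtime": "python",
            })
    return services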

Kubernetes Integration

When running in Kubernetes, the Controller respects workload metadata:

| Attribute | Description |
|---|---|
| k8s.workload.kind | Workload type (Deployment, StatefulSet, etc.) |
| service.service_name | Name of the service |
| service.namespace | Kubernetes namespace |

Naked Pod Handling

The Controller automatically detects and handles "naked pods" (pods without a workload controller):

const isNakedPod = mode === "kubernetes" && (!workloadKind || workloadKind === "Pod");

Sources: src/client/src/lib/platform/controller/features/agent.ts:8-12

Validation

Operations are validated before execution:

validatePayload(operation: string, _payload: Record<string, unknown>) {
  if (
    operation !== "enable" &&
    operation !== "disable" &&
    operation !== "status"
  ) {
    return `Unknown operation "${operation}" for feature "${FEATURE}". Expected "enable", "disable", or "status".`;
  }
  return null;
}

Sources: src/client/src/lib/platform/controller/features/agent.ts:28-40

Summary

The OpenLIT Controller is a critical component for zero-code instrumentation of Python LLM applications. It provides:

  • Automated Discovery: Scans and identifies Python services automatically
  • Runtime Injection: Injects observability SDK without application restarts
  • Multi-Platform Support: Works on Linux, Docker, and Kubernetes
  • Platform Integration: Connects to OpenLIT platform for centralized management
  • Lifecycle Management: Full control over enable/disable operations

Sources: src/client/src/lib/platform/controller/features/agent.ts:1-60

GPU Collector

Related topics: OpenLIT Controller, System Architecture

Section Related Pages

Continue reading this section for the full explanation and source context.

Related topics: OpenLIT Controller, System Architecture

GPU Collector

The OpenTelemetry GPU Collector (also referred to as opentelemetry-gpu-collector) is a specialized telemetry agent built and maintained by OpenLIT. It provides real-time GPU hardware telemetry collection for NVIDIA, AMD, and Intel GPUs, emitting metrics in compliance with the OpenTelemetry semantic conventions under the hw.gpu.* namespace.

Overview

The GPU Collector serves as a standalone service that monitors GPU hardware metrics and exports them via the OTLP protocol to any OpenTelemetry-compatible backend, including the OpenLIT observability platform.

Key Responsibilities:

  • Collect GPU hardware telemetry from NVIDIA GPUs via NVML (the NVIDIA Management Library); see the sketch after this list
  • Collect GPU hardware telemetry from AMD and Intel GPUs via sysfs/hwmon interfaces
  • Perform eBPF-based CUDA kernel tracing for detailed operation insights
  • Emit metrics following OpenTelemetry semantic conventions (hw.gpu.*)
  • Export metrics over OTLP for integration with observability platforms
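
The collector is a standalone service, but the NVML read-out it builds on is easy to show in Python via pynvml; this is a minimal sketch of the kind of data exported under the hw.gpu.* namespace, with the printed field names chosen for illustration:

import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU on the host
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    print(f"utilization={util.gpu}% memory_used={mem.used} temperature={temp}C")
finally:
    pynvml.nvmlShutdown()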

License: Apache-2.0

Sources: opentelemetry-gpu-collector/README.md

Sources: [opentelemetry-gpu-collector/README.md](https://github.com/openlit/openlit/blob/main/opentelemetry-gpu-collector/README.md)

Doramagic Pitfall Log

Source-linked risks stay visible on the manual page so the preview does not read like a recommendation.

  • [medium] Integration: Governance and compliance signals for LLM observability. First-time setup may fail or require extra isolation and rollback planning.
  • [medium] Proposal: gen_ai.agent.threat_detected span event helper for OTel-shaped detection observability. First-time setup may fail or require extra isolation and rollback planning.
  • [medium] [Bug]: Docker Image doesn't run on windows 64bit. First-time setup may fail or require extra isolation and rollback planning.
  • [medium] openlit-1.19.0. First-time setup may fail or require extra isolation and rollback planning.

Doramagic Pitfall Log

Doramagic extracted 15 source-linked risk signals. Review them before installing or handing real data to the project.

1. Installation risk: Integration: Governance and compliance signals for LLM observability

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: Integration: Governance and compliance signals for LLM observability. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/1106

2. Installation risk: Proposal: gen_ai.agent.threat_detected span event helper for OTel-shaped detection observability

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: Proposal: gen_ai.agent.threat_detected span event helper for OTel-shaped detection observability. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/1186

3. Installation risk: [Bug]: Docker Image doesn't run on windows 64bit

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: [Bug]: Docker Image doesn't run on windows 64bit. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/786

4. Installation risk: openlit-1.19.0

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: openlit-1.19.0. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/openlit-1.19.0

5. Configuration risk: controller-0.2.0

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: controller-0.2.0. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/controller-0.2.0

6. Configuration risk: openlit-1.20.0

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: openlit-1.20.0. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/openlit-1.20.0

7. Capability assumption: README/documentation is current enough for a first validation pass.

  • Severity: medium
  • Finding: README/documentation is current enough for a first validation pass.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: capability.assumptions | github_repo:747319327 | https://github.com/openlit/openlit | README/documentation is current enough for a first validation pass.

8. Maintenance risk: Maintainer activity is unknown

  • Severity: medium
  • Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: evidence.maintainer_signals | github_repo:747319327 | https://github.com/openlit/openlit | last_activity_observed missing

9. Security or permission risk: no_demo

  • Severity: medium
  • Finding: no_demo
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: downstream_validation.risk_items | github_repo:747319327 | https://github.com/openlit/openlit | no_demo; severity=medium

10. Security or permission risk: no_demo

  • Severity: medium
  • Finding: no_demo
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: risks.scoring_risks | github_repo:747319327 | https://github.com/openlit/openlit | no_demo; severity=medium

11. Security or permission risk: Bug: OpenAI API key in operator example test-application is not using OPENAI_API_KEY env var

  • Severity: medium
  • Finding: Security or permission risk is backed by a source signal: Bug: OpenAI API key in operator example test-application is not using OPENAI_API_KEY env var. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/1135

12. Security or permission risk: openlit-1.19.1

  • Severity: medium
  • Finding: Security or permission risk is backed by a source signal: openlit-1.19.1. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/openlit-1.19.1

Source: Doramagic discovery, validation, and Project Pack records

Community Discussion Evidence

These external discussion links are review inputs, not standalone proof that the project is production-ready.

Sources: 11 project-level external discussion links are exposed on this manual page.

Use: Review before install. Open the linked issues or discussions before treating the pack as ready for your environment.

Community Discussion Evidence

Doramagic exposes project-level community discussion separately from official documentation. Review these links before using openlit with real data or production workflows.

Source: Project Pack community evidence and pitfall evidence