Doramagic Project Pack · Human Manual
openlit
Related topics: Quick Start Guide, System Architecture
OpenLIT Overview
What is OpenLIT?
OpenLIT is an OpenTelemetry-native GenAI and LLM Application Observability tool designed to simplify the integration process for sending OpenTelemetry traces and metrics from your LLM applications. It provides comprehensive monitoring capabilities for both GenAI and LLM applications.
Sources: src/client/src/app/(playground)/getting-started/page.tsx:127
Key Features
OpenLIT offers several core capabilities for observability:
| Feature Category | Description |
|---|---|
| Tracing | Capture detailed traces of LLM application requests |
| Metrics | Collect and analyze performance metrics |
| Evaluations | Assess response quality and model performance |
| Context Management | Manage evaluation contexts and prompts |
| Secrets Management | Securely store and manage API keys and credentials |
Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx, src/client/src/components/(playground)/getting-started/secrets/index.tsx, src/client/src/components/(playground)/getting-started/prompts/index.tsx
Architecture Overview
graph TD
A[LLM Application] --> B[OpenLIT SDK]
B --> C[OTLP Endpoint<br/>127.0.0.1:4318]
C --> D[OpenLIT Backend]
D --> E[OpenLIT UI<br/>127.0.0.1:3000]
F[Database] <--> D
SDK Support
OpenLIT provides official SDKs for multiple programming languages:
Python SDK
The Python SDK enables Python-based LLM applications to send telemetry data to OpenLIT.
import openlit
openlit.init()
Sources: src/client/src/app/(playground)/getting-started/page.tsx
TypeScript/JavaScript SDK
The TypeScript SDK provides similar functionality for Node.js and browser-based applications.
import openlit from 'openlit';
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
Example Usage with OpenAI:
import OpenAI from 'openai';
import openlit from 'openlit';
openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" });
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
const chatCompletion = await client.chat.completions.create({
messages: [{ role: 'user', content: 'What is LLM Observability?' }],
model: 'gpt-3.5-turbo',
});
Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx
Configuration Options
OTLP Endpoint Configuration
You can configure the OTLP endpoint in two ways:
| Method | Configuration |
|---|---|
| Code | openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" }) |
| Environment Variable | OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318" |
Sources: src/client/src/app/(playground)/getting-started/page.tsx
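The precedence between the two methods can be sketched in plain Python. This is an illustrative sketch, not the SDK's actual resolution code: an endpoint passed in code wins, then the environment variable, then the default.

```python
import os

# Illustrative sketch (not the SDK's actual resolution logic): a code-level
# argument wins, then the environment variable, then the default.
DEFAULT_OTLP_ENDPOINT = "http://127.0.0.1:4318"

def resolve_otlp_endpoint(otlp_endpoint=None):
    """Resolve the OTLP endpoint: code argument > env var > default."""
    if otlp_endpoint:
        return otlp_endpoint
    return os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", DEFAULT_OTLP_ENDPOINT)
```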
Environment Variables
| Variable | Purpose | Default Value |
|---|---|---|
OTEL_EXPORTER_OTLP_ENDPOINT | OTLP collector endpoint | http://127.0.0.1:4318 |
Deployment
Docker Compose Deployment
OpenLIT can be deployed using Docker Compose from the root directory:
git clone git@github.com:openlit/openlit.git
cd openlit
docker compose up -d
Sources: src/client/src/app/(playground)/getting-started/page.tsx
Default Ports
| Service | Default Address |
|---|---|
| OpenLIT UI | http://127.0.0.1:3000 |
| OTLP Endpoint | http://127.0.0.1:4318 |
Default Credentials
After deployment, access the OpenLIT UI using the following default credentials:
| Field | Default Value |
|---|---|
| Email | user@openlit.io |
| Password | openlituser |
Sources: src/client/src/app/(playground)/getting-started/page.tsx
SDK Repository Locations
| SDK | Repository Path |
|---|---|
| Python SDK | sdk/python |
| TypeScript SDK | sdk/typescript |
Sources: src/client/src/app/(playground)/getting-started/page.tsx
Community and Support
OpenLIT maintains active community channels for support and discussions:
| Platform | Link |
|---|---|
| GitHub | https://github.com/openlit/openlit |
| Documentation | https://docs.openlit.io |
| Slack | Join via invitation link |
| X (Twitter) | @openlit_io |
Sources: src/client/README.md
Evaluation Features
OpenLIT supports custom evaluation types with configurable prompts and context:
// Evaluation prompt format example
[Domain Accuracy evaluation context]
Consider: whether the response aligns with domain-specific knowledge and terminology.
Look for incorrect use of domain terms, inaccurate domain-specific claims, and deviations from established domain practices.
Evaluations provide the following metrics:
- Score: Numerical rating
- Classification: Categorical classification
- Explanation: Detailed reasoning
- Verdict: Pass/fail determination
Sources: src/client/src/app/(playground)/evaluations/types/new/page.tsx, src/client/src/components/(playground)/request/components/evaluations.tsx
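The four metrics above can be modeled as a simple result container. This is a hypothetical sketch; the field names are illustrative, not openlit's actual result schema.

```python
from dataclasses import dataclass

# Hypothetical container for the four evaluation outputs listed above;
# field names are illustrative, not openlit's actual schema.
@dataclass
class EvaluationResult:
    score: float          # numerical rating
    classification: str   # categorical classification
    explanation: str      # detailed reasoning
    verdict: str          # "pass" or "fail" determination

    def passed(self) -> bool:
        return self.verdict == "pass"

result = EvaluationResult(
    score=0.92,
    classification="domain_accurate",
    explanation="Terminology matches established domain usage.",
    verdict="pass",
)
```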
Pricing Integration
OpenLIT can calculate costs for LLM usage based on token consumption:
cost = (input_tokens / 1M) × input_price + (output_tokens / 1M) × output_price
This includes:
- Input token pricing per million tokens
- Output token pricing per million tokens
- Context window size tracking
Sources: src/client/src/components/(playground)/chat/chat-settings-form.tsx
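As a worked example of the formula above, a minimal Python helper (illustrative only; the actual cost calculation lives in the OpenLIT backend):

```python
# Illustrative implementation of the pricing formula above; OpenLIT's actual
# cost calculation lives in the backend and may differ in detail.
def calculate_cost(input_tokens, output_tokens,
                   input_price_per_m, output_price_per_m):
    """cost = (input_tokens / 1M) * input_price + (output_tokens / 1M) * output_price"""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: 1,500 input tokens at $0.50/M plus 500 output tokens at $1.50/M
cost = calculate_cost(1500, 500, 0.50, 1.50)
```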
Sources: src/client/src/app/(playground)/getting-started/page.tsx:127
Quick Start Guide
Related topics: Python SDK Architecture
OpenLIT is an OpenTelemetry-native GenAI and LLM Application Observability tool designed to simplify the integration of tracing and metrics collection for AI applications. This guide provides comprehensive instructions for deploying OpenLIT and instrumenting your applications using the Python and TypeScript SDKs.
Prerequisites
Before beginning, ensure you have the following installed:
| Requirement | Version | Purpose |
|---|---|---|
| Docker | Latest | Container runtime for OpenLIT deployment |
| Docker Compose | Latest | Orchestration tool |
| Node.js | 18+ | Required for TypeScript SDK |
| Python | 3.8+ | Required for Python SDK |
| npm/pip | Latest | Package managers |
Deployment Options
OpenLIT can be deployed using multiple methods depending on your infrastructure requirements.
Docker Compose Deployment
The recommended approach for local development and testing is Docker Compose.
git clone git@github.com:openlit/openlit.git
cd openlit
docker compose up -d
Once deployed, access the OpenLIT UI at http://127.0.0.1:3000 using the default credentials:
- Email: user@openlit.io
- Password: openlituser
Sources: src/client/src/app/(playground)/getting-started/page.tsx:50-55
Controller Deployment
For infrastructure-level observability, the OpenLIT Controller can be deployed as a system service or containerized application.
#### Linux System Service
sudo tee /etc/systemd/system/openlit-controller.service <<EOF
[Unit]
Description=OpenLIT Controller
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/opt/openlit
ExecStart=/opt/openlit/openlit-controller
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now openlit-controller
Sources: src/client/src/app/(playground)/agents/no-controller.tsx:12-25
#### Docker Deployment
docker run -d --privileged --pid=host \
-e OPENLIT_URL="<openlit-url>" \
-e OTEL_EXPORTER_OTLP_ENDPOINT="<openlit-url>:4318" \
-v /proc:/host/proc:ro \
-v /sys/kernel/debug:/sys/kernel/debug:ro \
-v /sys/fs/bpf:/sys/fs/bpf:rw \
-v /var/run/docker.sock:/var/run/docker.sock \
-e OPENLIT_PROC_ROOT="/host/proc" \
ghcr.io/openlit/controller:latest
#### Kubernetes Deployment
helm repo add openlit https://openlit.github.io/helm
helm repo update
helm upgrade --install openlit openlit/openlit \
--set openlit-controller.enabled=true
Sources: src/client/src/app/(playground)/agents/no-controller.tsx:27-45
SDK Integration
OpenLIT provides SDKs for both Python and TypeScript environments to enable application-level observability.
Python SDK
#### Installation
Install the Python SDK using pip:
pip install openlit
Sources: src/client/src/app/(playground)/getting-started/page.tsx:85-92
#### Initialization
Add the following initialization code to your application:
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
Alternatively, set the endpoint using the environment variable:
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
#### Complete Example with OpenAI
import os
import openlit
from openai import OpenAI
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "What is LLM Observability?"}]
)
Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:45-65
TypeScript SDK
#### Installation
Install the TypeScript SDK using npm:
npm install openlit
#### Initialization
Add the following initialization code to your application:
import openlit from 'openlit';
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
Alternatively, set the endpoint using the environment variable OTEL_EXPORTER_OTLP_ENDPOINT.
#### Complete Example with OpenAI
import OpenAI from 'openai';
import openlit from 'openlit';
openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" });
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
const chatCompletion = await client.chat.completions.create({
messages: [{ role: 'user', content: 'What is LLM Observability?' }],
model: 'gpt-3.5-turbo',
});
Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:95-120
Configuration Reference
SDK Configuration Options
| Parameter | Type | Environment Variable | Description |
|---|---|---|---|
otlp_endpoint | string | OTEL_EXPORTER_OTLP_ENDPOINT | OTLP exporter endpoint URL |
api_key | string | OPENLIT_API_KEY | API key for authenticated endpoints |
Controller Environment Variables
| Variable | Description |
|---|---|
OPENLIT_URL | Base URL for the OpenLIT instance |
OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint for trace export |
OPENLIT_API_KEY | API key for OpenLIT authentication |
OPENLIT_PROC_ROOT | Root path for process information (default: /host/proc) |
Application Workflow
graph TD
A[Deploy OpenLIT with Docker Compose] --> B[Access OpenLIT UI]
B --> C{Choose Deployment Mode}
C -->|Local Development| D[Install SDK in Application]
C -->|System-wide| E[Deploy Controller]
D --> F[Initialize SDK]
F --> G[Instrument LLM Calls]
G --> H[View Traces & Metrics in UI]
E --> I[Auto-discover Services]
I --> J[View Infrastructure Metrics]
Additional Resources
For more advanced configurations and use cases, refer to the SDK repositories listed under SDK Repository Locations (sdk/python and sdk/typescript).
Sources: src/client/src/app/(playground)/getting-started/page.tsx:100-115, src/client/src/app/not-found.tsx:20-35
Sources: src/client/src/app/(playground)/getting-started/page.tsx:50-55
System Architecture
Related topics: Data Flow and Management, Python SDK Architecture
Overview
OpenLIT is an OpenTelemetry-native GenAI and LLM Application Observability tool designed to simplify the integration of observability into AI applications. The system enables developers to send OpenTelemetry traces and metrics from their LLM applications with minimal configuration changes.
The architecture follows a distributed microservices pattern with clear separation between data collection (SDK instrumentation), data transmission (OTLP protocol), and data visualization (frontend dashboard).
High-Level Architecture
graph TB
subgraph "Client Applications"
PythonApp["Python Application"]
TypeScriptApp["TypeScript/JS Application"]
end
subgraph "OpenLIT SDKs"
PythonSDK["Python SDK<br/>pip install openlit"]
TSSDK["TypeScript SDK<br/>npm install openlit"]
end
subgraph "Data Transport"
OTLP["OTLP Endpoint<br/>:4318"]
end
subgraph "OpenLIT Backend"
Frontend["Web Dashboard<br/>Port 3000"]
API["API Services"]
DB[("ClickHouse<br/>Database")]
end
PythonApp --> PythonSDK
TypeScriptApp --> TSSDK
PythonSDK --> OTLP
TSSDK --> OTLP
OTLP --> API
API --> DB
Frontend --> API
Core Components
SDK Layer
OpenLIT provides language-specific SDKs for instrumenting AI applications:
| SDK | Package Manager | Installation | Repository |
|---|---|---|---|
| Python | pip | pip install openlit | sdk/python |
| TypeScript | npm | npm install openlit | sdk/typescript |
Python SDK Initialization
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
Sources: src/client/src/app/(playground)/getting-started/page.tsx:73-74
TypeScript SDK Initialization
import openlit from 'openlit';
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
Sources: src/client/src/app/(playground)/getting-started/page.tsx:115-118
Data Transport Layer
The system uses the OpenTelemetry Protocol (OTLP) for transmitting telemetry data:
| Parameter | Default Value | Description |
|---|---|---|
| OTLP Endpoint | http://127.0.0.1:4318 | HTTP endpoint for traces and metrics (port 4318 is the OTLP/HTTP default) |
| Environment Variable | OTEL_EXPORTER_OTLP_ENDPOINT | Alternative endpoint configuration |
The OTLP endpoint can be configured either programmatically via SDK initialization or through environment variables.
Backend Services
#### Web Dashboard (Frontend)
The frontend is a Next.js application providing the user interface for:
- Tracing View - Visualize request traces and spans
- Agents Management - Configure and monitor AI agents
- Model Management - Configure AI model providers and pricing
- Getting Started - Onboarding documentation
- Chat Interface - Interactive testing environment
The application runs on port 3000 by default and provides a login interface with default credentials:
- Email: user@openlit.io
- Password: openlituser
Sources: src/client/src/app/(playground)/getting-started/page.tsx:40-44
#### Agent Lifecycle Management
OpenLIT supports managing AI agents with lifecycle operations:
stateDiagram-v2
[*] --> Starting
Starting --> Running
Running --> Restarting
Restarting --> Running
Running --> Stopping
Stopping --> [*]
Lifecycle actions include:
- Start - Initialize the agent service
- Stop - Terminate with confirmation dialog
- Restart - Restart the agent process
Sources: src/client/src/app/(playground)/agents/lifecycle-actions.tsx:1-60
Controller Services
The OpenLIT Controller provides infrastructure-level observability for containerized and orchestrated environments:
| Deployment Method | Command/Configuration |
|---|---|
| Docker | docker run -d --privileged --pid=host ... ghcr.io/openlit/controller:latest |
| Kubernetes | helm upgrade --install openlit openlit/openlit --set openlit-controller.enabled=true |
| Systemd | Service unit file with systemctl enable |
Sources: src/client/src/app/(playground)/agents/no-controller.tsx:45-60
#### Controller Environment Variables
| Variable | Purpose |
|---|---|
OPENLIT_URL | Main OpenLIT instance URL |
OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint for telemetry |
OPENLIT_API_KEY | Authentication key (optional) |
OPENLIT_PROC_ROOT | Process root for host monitoring |
Deployment Architecture
Docker Compose Deployment
For development and testing, OpenLIT can be deployed using Docker Compose:
git clone git@github.com:openlit/openlit.git
cd openlit
docker compose up -d
Sources: src/client/src/app/(playground)/getting-started/page.tsx:50-55
Multi-Platform Support
graph LR
subgraph "Deployment Platforms"
Docker["Docker"]
K8s["Kubernetes"]
SystemD["Systemd"]
end
subgraph "Monitoring Targets"
Containers["Containers"]
Processes["Host Processes"]
Services["System Services"]
end
Docker --> Containers
K8s --> Containers
K8s --> Services
SystemD --> Services
SystemD --> Processes
Feature Architecture
Tracing Integration
OpenLIT's tracing feature provides comprehensive observability:
| Feature | Description |
|---|---|
| Auto-Instrumentation | Automatic capture of LLM calls |
| Span Attributes | Model, provider, token usage, latency |
| Context Propagation | Request tracing across services |
| Error Tracking | Exception and failure monitoring |
Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:1-100
Agent Schema Capture
The system captures tool schemas from agents for documentation and analysis:
interface ToolSchema {
name: string;
description?: string;
schema: object;
}
Schemas are displayed in an expandable accordion format with JSON visualization.
Sources: src/client/src/components/(playground)/agents/tools-card.tsx:35-55
Model Configuration
OpenLIT supports custom model configurations with pricing information:
| Field | Type | Description |
|---|---|---|
providerName | string | AI provider name |
modelId | string | Model identifier |
modelName | string | Display name |
inputPricePerMToken | number | Input cost per million tokens |
outputPricePerMToken | number | Output cost per million tokens |
contextWindow | number | Maximum context length |
Sources: src/client/src/components/(playground)/chat/message-input.tsx:25-45
Data Flow
sequenceDiagram
participant App as Application
participant SDK as OpenLIT SDK
participant OTLP as OTLP Endpoint
participant API as OpenLIT API
participant CH as ClickHouse
participant UI as Web Dashboard
App->>SDK: Initialize with config
App->>SDK: LLM API Call
SDK->>SDK: Capture trace/metrics
SDK->>OTLP: Export telemetry
OTLP->>API: Process spans
API->>CH: Store data
UI->>API: Query traces
API->>UI: Return results
UI->>UI: Render dashboard
Configuration Reference
SDK Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
otlp_endpoint | string | http://127.0.0.1:4318 | OTLP collector endpoint |
service_name | string | auto-detect | Service identifier |
api_key | string | none | Authentication for hosted services |
Environment Variables
| Variable | SDK Support | Description |
|---|---|---|
OTEL_EXPORTER_OTLP_ENDPOINT | Python, TS | Global OTLP endpoint override |
OPENLIT_API_KEY | All | API authentication key |
OPENLIT_SERVICE_NAME | All | Override service name |
Security Considerations
Authentication
The system supports multiple authentication providers:
- Email/Password - Local authentication with default credentials
- OAuth Providers - Google and GitHub SSO integration
Sources: src/client/src/components/(auth)/auth-form.tsx:1-50
API Security
API endpoints are protected and require valid session tokens. The controller service supports optional API key authentication:
-e OPENLIT_API_KEY="your-api-key"
Technology Stack
| Layer | Technology |
|---|---|
| Frontend | Next.js, React, TypeScript, TailwindCSS |
| SDKs | Python, TypeScript |
| Telemetry | OpenTelemetry Protocol (OTLP) |
| Database | ClickHouse |
| Containerization | Docker, Kubernetes |
| Service Management | Systemd |
External Resources
| Resource | URL |
|---|---|
| Documentation | https://docs.openlit.io |
| GitHub Repository | https://github.com/openlit/openlit |
| TypeScript SDK | https://github.com/openlit/openlit/tree/main/sdk/typescript |
| Python SDK | https://github.com/openlit/openlit/tree/main/sdk/python |
Sources: src/client/src/app/(playground)/getting-started/page.tsx:73-74
Data Flow and Management
Related topics: System Architecture, Python SDK Architecture
Overview
OpenLIT is an OpenTelemetry-native observability platform designed for GenAI and LLM applications. The data flow architecture encompasses the entire lifecycle of telemetry data—from instrumentation at the application level through processing, storage, and visualization in the frontend UI.
The system follows a standard OpenTelemetry Collector pattern with platform-specific optimizations for handling GenAI-specific semantic conventions and metrics. Data flows through multiple layers: SDK instrumentation, OTLP export, backend processing, ClickHouse storage, and client-side data management for the playground UI.
Architecture Overview
graph TD
subgraph Application_Layer["Application Layer"]
PySDK["Python SDK"]
TsSDK["TypeScript SDK"]
end
subgraph Instrumentation["Instrumentation"]
LangGraph["LangGraph"]
ClaudeAgent["Claude Agent SDK"]
LlamaIndex["LlamaIndex"]
OpenAI["OpenAI"]
end
subgraph Export["OTLP Export"]
OTLP["OTLP Endpoint<br/>:4318"]
end
subgraph Backend["OpenLIT Backend"]
Processor["Data Processor"]
Storage["ClickHouse"]
end
subgraph Frontend["Frontend Client"]
Client["Playground UI"]
APIClient["API Client"]
end
PySDK -->|HTTP/gRPC| OTLP
TsSDK -->|HTTP/gRPC| OTLP
LangGraph --> PySDK
ClaudeAgent --> PySDK
OpenAI --> PySDK
LlamaIndex --> TsSDK
OTLP --> Processor
Processor --> Storage
Storage --> APIClient
APIClient --> Client
Tracing Data Flow
Python SDK Tracing Architecture
The Python SDK provides comprehensive tracing capabilities through the OpenTelemetry SDK integration. The tracing module (tracing.py) establishes the foundation for all trace collection and export operations.
Core Tracing Components:
| Component | Purpose | Location |
|---|---|---|
TracerProvider | Manages trace creation and propagation | sdk/python/src/openlit/otel/tracing.py |
SpanProcessor | Processes individual spans before export | sdk/python/src/openlit/otel/tracing.py |
OTLPExporter | Exports spans to OTLP endpoint | sdk/python/src/openlit/otel/tracing.py |
ContextPropagation | Maintains trace context across async operations | sdk/python/src/openlit/otel/tracing.py |
The tracing initialization follows a standard pattern:
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
This initialization configures the tracer provider with the specified OTLP endpoint, enabling automatic span collection from all instrumented LLM frameworks.
Sources: sdk/python/src/openlit/otel/tracing.py
Span Lifecycle
Spans are created and managed through a structured lifecycle that ensures complete telemetry capture:
sequenceDiagram
participant App as Application Code
participant SDK as OpenLIT SDK
participant Inst as Instrumentation
participant Exporter as OTLP Exporter
participant Backend as OpenLIT Backend
App->>Inst: LLM/Framework Call
Inst->>SDK: Create Span
SDK->>SDK: Set Attributes
SDK->>SDK: Record Metrics
App->>SDK: Response Received
SDK->>SDK: Complete Span
SDK->>Exporter: Export Span
Exporter->>Backend: OTLP Stream
The span lifecycle includes:
- Creation: Span is initialized with parent context
- Attribute Setting: GenAI-specific attributes (model, tokens, cost) are attached
- Timing: Start and end times are recorded for duration calculation
- Status: Span status is set based on success/failure
- Export: Spans are batched and exported to OTLP endpoint
Sources: sdk/python/src/openlit/instrumentation/langgraph/__init__.py
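The five lifecycle stages above can be illustrated with a minimal, self-contained span class. This is a conceptual sketch only, not openlit's or OpenTelemetry's actual classes:

```python
import time

# Minimal illustrative span showing the lifecycle stages listed above;
# a conceptual sketch, not openlit's or OpenTelemetry's actual classes.
class Span:
    def __init__(self, name, parent_id=None):
        self.name = name
        self.parent_id = parent_id        # 1. creation with parent context
        self.attributes = {}
        self.status = "UNSET"
        self.start_time = time.time()     # 3. timing: record start
        self.end_time = None

    def set_attribute(self, key, value):  # 2. attach GenAI-specific attributes
        self.attributes[key] = value

    def end(self, ok=True):               # 4. set status; span is now exportable
        self.end_time = time.time()
        self.status = "OK" if ok else "ERROR"

    @property
    def duration(self):
        end = self.end_time if self.end_time is not None else time.time()
        return end - self.start_time

span = Span("openai.chat.completions")
span.set_attribute("gen_ai.request.model", "gpt-3.5-turbo")
span.set_attribute("gen_ai.usage.input_tokens", 42)
span.end(ok=True)                         # 5. batched and exported in real SDKs
```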
Instrumentation Framework Integration
OpenLIT provides instrumentation for multiple LLM frameworks, each with framework-specific span attributes:
Supported Instrumentations:
| Framework | Operations Traced | Semantic Convention |
|---|---|---|
| OpenAI | chat completions, embeddings | gen_ai.operation.type |
| LangGraph | execution, checkpointing, construction | framework + gen_ai |
| Claude Agent SDK | invoke_agent, execute_tool | gen_ai.operation.type |
| LlamaIndex | query_engine, retriever, document | retrieve + framework |
LangGraph Instrumentation Pattern:
The LangGraph instrumentation wraps execution operations with both sync and async variants:
# From langgraph/__init__.py
def _wrap_execution_operations(self, operations, ...):
for module, method, operation_type, sync_type in operations:
if sync_type == "async":
wrapper = async_general_wrap(operation_type, ...)
else:
wrapper = general_wrap(operation_type, ...)
This pattern ensures consistent telemetry regardless of whether the underlying framework uses synchronous or asynchronous execution models.
Sources: sdk/python/src/openlit/instrumentation/langgraph/__init__.py
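The sync/async dispatch can be sketched with plain decorators. The names mirror the excerpt above, but the bodies here are simplified placeholders, not the actual openlit wrappers:

```python
import asyncio
import functools

# Simplified sketch of the sync/async wrapping pattern described above; the
# names mirror the excerpt, but the bodies are placeholders, not openlit's code.
def general_wrap(operation_type):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # a real instrumentation would open a span for operation_type here
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def async_general_wrap(operation_type):
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            # same idea, but awaiting the wrapped coroutine function
            return await fn(*args, **kwargs)
        return wrapper
    return decorator

def wrap_method(fn, operation_type, sync_type):
    """Pick the wrapper variant, mirroring the dispatch in the excerpt."""
    if sync_type == "async":
        return async_general_wrap(operation_type)(fn)
    return general_wrap(operation_type)(fn)
```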
Metrics Data Flow
Metrics Collection Architecture
The metrics module handles quantitative measurements that complement trace data. Metrics provide aggregated views of system performance, cost, and usage patterns.
Metrics Data Points:
| Metric Type | Description | Aggregation |
|---|---|---|
| Request Count | Total number of LLM requests | Count |
| Token Usage | Input/output tokens consumed | Sum |
| Cost | Calculated cost based on pricing | Sum |
| Latency | Request duration in milliseconds | Histogram |
| Error Rate | Failed requests percentage | Ratio |
Sources: sdk/python/src/openlit/otel/metrics.py
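A toy aggregator illustrates the aggregation types in the table above (count, sum, histogram, ratio). Real metrics are recorded through the OpenTelemetry Metrics API rather than a class like this:

```python
from collections import defaultdict

# Toy aggregator illustrating the aggregation types above (count, sum,
# histogram, ratio); real metrics use the OpenTelemetry Metrics API.
class MetricsAggregator:
    def __init__(self):
        self.request_count = 0                   # Count
        self.token_sum = 0                       # Sum
        self.cost_sum = 0.0                      # Sum
        self.latency_buckets = defaultdict(int)  # Histogram (bucket upper bounds)
        self.error_count = 0

    def record(self, tokens, cost, latency_ms, ok=True):
        self.request_count += 1
        self.token_sum += tokens
        self.cost_sum += cost
        if not ok:
            self.error_count += 1
        for bound in (100, 500, 1000, float("inf")):
            if latency_ms <= bound:
                self.latency_buckets[bound] += 1
                break

    @property
    def error_rate(self):                        # Ratio
        return self.error_count / self.request_count if self.request_count else 0.0

agg = MetricsAggregator()
agg.record(tokens=120, cost=0.0003, latency_ms=250)
agg.record(tokens=80, cost=0.0002, latency_ms=900)
```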
Metric Recording Flow
Metrics are recorded during span processing using the OpenTelemetry Metrics API:
graph LR
A[LLM Request] --> B[Create Span]
B --> C[Extract Request Data]
C --> D[Calculate Pricing]
D --> E[Record Metrics]
E --> F[Complete Span]
G[Pricing Info] --> D
H[Model Config] --> D
The metric recording includes:
- start_time and end_time for duration calculation
- request_model for token and pricing lookup
- environment and application_name for filtering
- pricing_info dictionary for cost calculation
Sources: sdk/python/src/openlit/instrumentation/openai/async_openai.py
Client-Side Data Management
Frontend API Client Architecture
The frontend client manages data fetching and state management for the playground UI. The API client layer provides a typed interface to the backend services.
API Client Structure:
// Simplified from request/index.ts
export class RequestClient {
async fetchTraces(params: TraceParams): Promise<Trace[]>;
async fetchMetrics(params: MetricParams): Promise<Metrics>;
async fetchSpans(traceId: string): Promise<Span[]>;
}
Key Data Operations:
| Operation | Endpoint | Purpose |
|---|---|---|
| Fetch Traces | /api/traces | List traces with filtering |
| Fetch Spans | /api/traces/:id/spans | Get detailed span data |
| Fetch Metrics | /api/metrics | Aggregated metrics data |
| Export Data | /api/openground/models/export | Export pricing data |
Sources: src/client/src/lib/platform/request/index.ts
ClickHouse Data Access
The client uses ClickHouse as the primary data store and accesses it through helper functions that construct and execute queries.
Query Helper Functions:
| Function | Purpose |
|---|---|
buildTraceQuery() | Construct trace listing query |
buildSpanQuery() | Construct span detail query |
applyFilters() | Apply time range and attribute filters |
parseResponse() | Parse ClickHouse response format |
Sources: src/client/src/lib/platform/clickhouse/helpers.ts
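How such a helper might assemble a filtered trace query can be sketched as follows. This is hypothetical: the table name, column names, and function signature are illustrative, and production code should use parameterized queries rather than string interpolation:

```python
# Hypothetical sketch of assembling a filtered trace query; the table and
# column names are illustrative, and production code should use parameterized
# queries instead of string interpolation.
def build_trace_query(start_ts, end_ts, attribute_filters=None, limit=50):
    where = [f"Timestamp >= '{start_ts}'", f"Timestamp <= '{end_ts}'"]
    for key, value in (attribute_filters or {}).items():
        where.append(f"SpanAttributes['{key}'] = '{value}'")
    return (
        "SELECT TraceId, SpanName, Duration FROM otel_traces "
        f"WHERE {' AND '.join(where)} ORDER BY Timestamp DESC LIMIT {limit}"
    )
```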
State Management Pattern
The frontend uses React Query or similar state management for data fetching:
graph TD
A[Component Mount] --> B[Trigger Query]
B --> C[Show Loading State]
C --> D{Request Complete?}
D -->|Yes| E[Update Cache]
E --> F[Render Data]
D -->|No| G[Show Error]
G --> H[Retry Option]
The state management includes:
- Loading states: Visual feedback during data fetch
- Error handling: Graceful degradation on failures
- Cache invalidation: Automatic refresh on mutations
- Pagination: Support for large result sets with "Load More" patterns
Sources: src/client/src/components/(playground)/agents/version-drawer.tsx
Timeline View Data Structure
Span Timeline Rendering
The timeline view component renders trace data as a visual timeline, parsing span data into a hierarchical structure.
Span Data Model:
interface SpanData {
spanId: string;
parentSpanId?: string;
startTime: number;
endTime: number;
name: string;
kind: 'client' | 'server' | 'producer' | 'consumer';
status: 'ok' | 'error';
attributes: Record<string, any>;
duration: number;
cost?: number;
}
Timeline Calculation:
| Column | Width | Content |
|---|---|---|
| Name Column | 30% | Span name and kind indicator |
| Timeline Column | 60% | Visual timeline bar |
| Stats Column | 10% | Duration and cost |
The timeline calculates relative positions using traceWindowMs to determine the overall trace window, then positions each span proportionally within that window.
Sources: src/client/src/components/(playground)/request/components/timeline-view.tsx
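The proportional positioning can be expressed directly as a small calculation (a sketch of the idea, not the component's actual code):

```python
# Sketch of the proportional positioning described above: each span's left
# offset and width become percentages of the overall trace window.
def timeline_position(span_start_ms, span_end_ms, trace_start_ms, trace_window_ms):
    """Return (left_pct, width_pct) for rendering a span within the window."""
    left = (span_start_ms - trace_start_ms) / trace_window_ms * 100
    width = (span_end_ms - span_start_ms) / trace_window_ms * 100
    return round(left, 2), round(width, 2)

# A 200 ms span starting 100 ms into a 1000 ms trace spans 10%-30% of the bar.
```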
TypeScript SDK Data Flow
LlamaIndex Instrumentation
The TypeScript SDK provides similar capabilities for JavaScript/TypeScript applications, particularly for LlamaIndex integration.
LlamaIndex Traced Operations:
| Operation | Semantic Convention | Description |
|---|---|---|
document_load | retrieve | Document loading operations |
document_split | framework | Text splitting operations |
retriever_retrieve | retrieve | Retrieval operations |
query_engine_query | retrieve | Query execution |
response_synthesize | chat | Response generation |
Sources: sdk/typescript/src/instrumentation/llamaindex/index.ts
TypeScript Initialization Pattern
import openlit from 'openlit';
// Initialize with OTLP endpoint
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
// Or use environment variable
// OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
Environment Configuration
Data Flow Configuration Options
| Environment Variable | Default | Purpose |
|---|---|---|
OTEL_EXPORTER_OTLP_ENDPOINT | http://127.0.0.1:4318 | OTLP endpoint (port 4318 is the OTLP/HTTP default) |
OTEL_EXPORTER_OTLP_PROTOCOL | grpc | Export protocol (grpc, http/protobuf, or http/json) |
OTEL_SERVICE_NAME | default | Service identification |
OTEL_EXPORTER_OTLP_HEADERS | - | Authentication headers |
Sources: src/client/src/app/(playground)/getting-started/page.tsx
Data Management Best Practices
Efficient Data Handling
- Batching: Spans are batched before export to reduce network overhead
- Sampling: Configure appropriate sampling rates for high-volume applications
- Filtering: Apply attribute filters at the query layer to reduce data transfer
- Pagination: Use paginated queries for large result sets
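The batching practice above can be illustrated with a minimal batcher. This is a conceptual sketch; the SDK relies on OpenTelemetry's batch span processor rather than custom code like this:

```python
# Conceptual batcher illustrating the batching practice above; the SDK itself
# relies on OpenTelemetry's batch span processor rather than custom code.
class SpanBatcher:
    def __init__(self, export_fn, max_batch_size=512):
        self.export_fn = export_fn
        self.max_batch_size = max_batch_size
        self._buffer = []

    def add(self, span):
        self._buffer.append(span)
        if len(self._buffer) >= self.max_batch_size:
            self.flush()

    def flush(self):
        if self._buffer:
            self.export_fn(list(self._buffer))
            self._buffer.clear()
```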
Error Handling Flow
graph TD
A[Span Error] --> B[Record Exception]
B --> C[Set Span Status ERROR]
C --> D[Record Error Metrics]
D --> E[Export Span]
E --> F{Backend Available?}
F -->|Yes| G[Store Data]
F -->|No| H[Retry Queue]
H -->|Retry| G
The error handling ensures that even when backend connectivity fails, error information is preserved for debugging.
Summary
The data flow in OpenLIT follows a well-structured pipeline from SDK instrumentation through to frontend visualization. Key aspects include:
- Unified Telemetry: Both traces and metrics are collected through OpenTelemetry SDKs
- Framework Integration: Multiple LLM frameworks are automatically instrumented
- Efficient Export: OTLP protocol ensures standardized data transfer
- Flexible Storage: ClickHouse provides scalable storage and querying
- Responsive UI: The playground client efficiently fetches and displays telemetry data
This architecture enables comprehensive observability for GenAI applications while maintaining performance and scalability through batching, caching, and pagination strategies.
Source: https://github.com/openlit/openlit / Human Manual
Python SDK Architecture
Related topics: TypeScript SDK Architecture, Go SDK Architecture, LLM and Framework Integrations
Python SDK Architecture
Overview
The OpenLIT Python SDK is an OpenTelemetry-native observability tool for GenAI and LLM applications. It integrates with AI applications through an auto-instrumentation framework, capturing OpenTelemetry traces and metrics automatically, with no manual instrumentation required.
Core responsibilities include:
- Auto-instrumenting mainstream AI SDKs (OpenAI, Anthropic, LangChain, CrewAI, and others)
- Following the OTel GenAI Semantic Conventions
- Providing OpenTelemetry-based tracing and metrics collection
- Implementing production-grade guardrails (content safety, auditing)
Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:1-15
Core Architecture Components
graph TD
subgraph "OpenLIT Python SDK"
A["openlit.init()"]
B["Instrumentors<br/>BaseInstrumentor"]
C["Guard System"]
D["OTel Layer"]
end
subgraph "Instrumented Frameworks"
E["OpenAI"]
F["Anthropic"]
G["Claude Agent SDK"]
H["LangChain / CrewAI"]
I["Google ADK"]
J["Agent Framework"]
end
subgraph "OpenTelemetry Backend"
K["OTLP Exporter"]
L["Traces"]
M["Metrics"]
end
A --> B
A --> C
B --> D
C --> D
D --> K
K --> L
K --> M
B --> E
B --> F
B --> G
B --> H
B --> I
B --> J
Component Descriptions
| Component | Location | Responsibility |
|---|---|---|
| Instrumentors | openlit.instrumentation.* | Auto-instrumentation implementations for each AI framework |
| Guard System | openlit.guard.* | Content safety, auditing, and compliance checks |
| OTel Layer | openlit.otel.* | Core implementation of OpenTelemetry traces and metrics |
| Config | openlit._config | Global configuration management and the metrics dictionary |
| Semcov | openlit.semcov | GenAI semantic convention constant definitions |
Initialization Flow
Python SDK Initialization
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
During initialization, the SDK:
- Configures the OpenTelemetry tracer provider
- Loads global configuration (environment, application name, metric toggles)
- Registers instrumentors for all installed dependencies
- Initializes the guard pipeline (if configured)
Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:30-42
Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
otlp_endpoint | str | "http://127.0.0.1:4318" | OTLP endpoint (OTLP/HTTP port 4318)
environment | str | "default" | Deployment environment identifier
application_name | str | "default" | Application name
pricing_info | dict | {} | Model pricing information
capture_message_content | bool | False | Whether to capture message content
metrics | dict | None | Metrics configuration dictionary
disable_metrics | bool | None | Disable metrics collection
guards | list | None | List of guard configurations
Instrumentation System Architecture
BaseInstrumentor Pattern
All framework instrumentors inherit from BaseInstrumentor and follow a uniform pattern:
class ClaudeAgentSDKInstrumentor(BaseInstrumentor):
    def instrumentation_dependencies(self) -> Collection[str]:
        return _instruments  # e.g., ("claude-agent-sdk >= 0.1.0",)

    def _instrument(self, **kwargs):
        # 1. Obtain the tracer and configuration
        tracer = trace.get_tracer(__name__)
        # 2. Wrap the target function with wrapt
        wrap_function_wrapper(
            "module.path",
            "function_name",
            wrap_query
        )
Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:27-45
Instrumentation Coverage
| Framework | Supported Versions | Traced Operations |
|---|---|---|
| Claude Agent SDK | >= 0.1.0 | invoke_agent, execute_tool |
| Google ADK | - | execute_tool |
| Agent Framework | - | agent_init, agent_run, tool_execute, workflow_run |
| CrewAI | - | Agent and Tool invocations |
| LangGraph | - | Graph node execution |
Span Naming Conventions
Normalized span names are generated according to the OTel GenAI semantic conventions:
| Operation Type | Span Name Format | Example |
|---|---|---|
| Agent creation | create_agent {name} | create_agent my_agent |
| Agent invocation | invoke_agent {name} | invoke_agent my_agent |
| Tool execution | execute_tool {name} | execute_tool calculator |
| Workflow | invoke_workflow {name} | invoke_workflow pipeline |
Sources: sdk/python/src/openlit/instrumentation/agent_framework/utils.py:1-60
Semantic Convention Attributes
All spans follow the gen_ai.* semantic conventions:
| Attribute Key | Description | Example Value |
|---|---|---|
gen_ai.operation.name | Operation type | invoke_agent, execute_tool
gen_ai.operation.type | Operation category | agent, tool
gen_ai.system | AI system | openai, anthropic, google.adk
gen_ai.provider.name | Provider name | google
gen_ai.tool.name | Tool name | calculator
gen_ai.tool.type | Tool type | function
gen_ai.tool.description | Tool description | Truncated description text
gen_ai.tool.call.arguments | Tool call arguments | JSON string
Sources: sdk/python/src/openlit/instrumentation/google_adk/utils.py:1-50
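To make the attribute table concrete, here is a sketch of the attributes a tool-execution span might carry. A plain dict stands in for a span; the attribute keys come from the table above, while the helper function itself is ours:

```python
import json

def tool_span_attributes(tool_name, tool_type, arguments):
    # Illustrative only: gen_ai.* keys as listed above, for one execute_tool span.
    return {
        "gen_ai.operation.name": "execute_tool",
        "gen_ai.operation.type": "tool",
        "gen_ai.tool.name": tool_name,
        "gen_ai.tool.type": tool_type,
        "gen_ai.tool.call.arguments": json.dumps(arguments),  # serialized as a JSON string
    }
```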
Guard System
OpenLIT provides production-grade guardrails for LLM application safety:
import openlit
openlit.init(guards=[openlit.PII(action="redact")])
Available Guard Types
| Guard Class | Location | Function |
|---|---|---|
PII | openlit.guard.pii | Detection and redaction of personally identifiable information
PromptInjection | openlit.guard.prompt_injection | Prompt injection attack detection
SensitiveTopic | openlit.guard.sensitive_topic | Sensitive topic detection
TopicRestriction | openlit.guard.topic_restriction | Topic restriction
Moderation | openlit.guard.moderation | Content moderation
Schema | openlit.guard.schema | Output structure validation
Custom | openlit.guard.custom | Custom guard logic
Core Guard Types
from openlit.guard import (
Guard,
GuardAction,
GuardConfigError,
GuardDeniedError,
GuardPhase,
GuardResult,
GuardTimeoutError,
PipelineResult,
)
| Type | Description |
|---|---|
Guard | Base guard class
GuardAction | Guard execution action
GuardPhase | Execution phase (pre/post)
GuardResult | Result of a single guard
PipelineResult | Aggregated pipeline result
Sources: sdk/python/src/openlit/guard/__init__.py:1-60
Pipeline Mechanism
Guards are executed in sequence using the Pipeline pattern:
from openlit.guard import Pipeline
from openlit import PII, PromptInjection, Moderation
pipeline = Pipeline([
PII(action="redact"),
PromptInjection(threshold=0.8),
Moderation()
])
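The sequential, short-circuiting behaviour of such a pipeline can be sketched without the real SDK. All names here are illustrative stand-ins, not OpenLIT internals:

```python
class MiniGuard:
    """Toy guard: `check` returns True to allow, False to deny."""
    def __init__(self, name, check):
        self.name = name
        self.check = check

def run_pipeline(guards, text):
    # Guards run in order; the first denial short-circuits the pipeline.
    for guard in guards:
        if not guard.check(text):
            return ("deny", guard.name)
    return ("allow", None)
```

In the real SDK a denial raises GuardDeniedError rather than returning a tuple, but the ordering and short-circuit semantics are the same idea.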
Claude Agent SDK Instrumentation in Detail
Architecture Design
sequenceDiagram
participant User as User Code
participant SDK as Claude Agent SDK
participant Wrap as wrap_query
participant Hook as _ToolSpanTracker
participant Span as OTel Span
User->>SDK: query(...)
SDK->>Wrap: invoke wrapper
Wrap->>Span: create invoke_agent span
Wrap->>SDK: proceed with query
SDK->>Hook: PreToolUse event
Hook->>Span: create execute_tool span
SDK->>Hook: PostToolUse event
Hook->>Span: finalize tool span
SDK-->>Wrap: response
Wrap->>Span: finalize agent span
Wrap-->>User: return response
Tool Span Tracking
_ToolSpanTracker manages in-flight tool spans:
class _ToolSpanTracker:
    """Manages in-flight tool spans created by SDK hooks."""

    def __init__(
        self,
        tracer,
        parent_span,
        version,
        environment,
        application_name,
        capture_message_content,
    ):
        # Initialize tracker state
        ...
Fallback Mechanism
When SDK hooks cannot be injected, the instrumentation falls back to tracking the message stream:
# Check whether hooks have been injected
if hasattr(client, _HOOKS_INJECTED_ATTR):
    ...  # trace via hooks
else:
    ...  # trace via the message stream
Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/claude_agent_sdk.py:1-80
OpenTelemetry Integration
Tracing Implementation
The SDK uses the OpenTelemetry Python API to create spans:
from opentelemetry import trace
from opentelemetry.trace import SpanKind, Status, StatusCode

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span(
    name="invoke_agent",
    kind=SpanKind.CLIENT
) as span:
    span.set_attribute(...)
    # perform the operation
Metrics Implementation
The following metric types are supported:
| Metric Type | Metric Name | Description |
|---|---|---|
| Counter | gen_ai.*.token_usage | Token usage counts |
| Histogram | gen_ai.*.duration | Request duration distribution |
| Gauge | - | Currently active requests |
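For intuition about the duration histogram, here is a stdlib-only sketch of how latencies fall into buckets. The bucket boundaries are illustrative, not the SDK's actual configuration:

```python
from bisect import bisect_left

BOUNDS = [0.1, 0.5, 1.0, 5.0]  # illustrative bucket boundaries, in seconds

def bucket_counts(durations):
    # counts[i] counts durations landing in bucket i;
    # the final slot is the overflow bucket (> 5.0 s).
    counts = [0] * (len(BOUNDS) + 1)
    for d in durations:
        counts[bisect_left(BOUNDS, d)] += 1
    return counts
```

An OpenTelemetry Histogram instrument performs this aggregation internally when you call `histogram.record(duration)`; the sketch just makes the bucketing visible.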
Semantic Convention Constants
All semantic convention constants are defined centrally in the openlit.semcov module:
class SemanticConvention:
GEN_AI_OPERATION = "gen_ai.operation.name"
GEN_AI_SYSTEM = "gen_ai.system"
GEN_AI_TOOL_NAME = "gen_ai.tool.name"
GEN_AI_TOOL_TYPE = "gen_ai.tool.type"
GEN_AI_SYSTEM_VALUE = "gen_ai.system.openai"
Error Handling
Exception Propagation
The SDK uses a unified exception-handling mechanism:
from openlit.__helpers import handle_exception

def some_wrapper(func, *args, **kwargs):
    # `span` is the active span created by the enclosing instrumentation
    try:
        return func(*args, **kwargs)
    except Exception as e:
        handle_exception(span, e)
        raise
Guard-Specific Errors
| Error Type | Description |
|---|---|
GuardError | Base guard error
GuardDeniedError | The guard rejected the request
GuardTimeoutError | Guard execution timed out
GuardConfigError | Guard configuration error
Usage Examples
Basic Integration
from openai import OpenAI
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
client = OpenAI(api_key="YOUR_OPENAI_KEY")
chat_completion = client.chat.completions.create(
messages=[{"role": "user", "content": "Hello!"}],
model="gpt-3.5-turbo"
)
Integration with Guards
import openlit
from openlit.guard import PII, PromptInjection
openlit.init(
otlp_endpoint="http://127.0.0.1:4318",
guards=[
PII(action="redact"),
PromptInjection(threshold=0.7)
]
)
Environment Variable Configuration
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
import openlit
openlit.init()  # automatically reads environment variables
Extension Development
Custom Instrumentor
from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from wrapt import wrap_function_wrapper
class CustomSDKInstrumentor(BaseInstrumentor):
def instrumentation_dependencies(self):
return ("custom-sdk >= 1.0.0",)
def _instrument(self, **kwargs):
tracer = kwargs.get("tracer")
wrap_function_wrapper(
"custom_sdk",
"Client.query",
wrap_custom_query
)
Custom Guard
from openlit.guard import Guard, GuardAction, GuardResult

class CustomGuard(Guard):
    def _evaluate(self, text: str) -> GuardResult:
        # Custom detection logic
        if "forbidden" in text.lower():
            return GuardResult(
                action=GuardAction.DENY,
                reason="Forbidden content detected"
            )
        return GuardResult(action=GuardAction.ALLOW)
Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:1-15
TypeScript SDK Architecture
Related topics: Python SDK Architecture, Go SDK Architecture, LLM and Framework Integrations
TypeScript SDK Architecture
Overview
The OpenLIT TypeScript SDK provides an OpenTelemetry-native observability solution for GenAI and LLM applications. It enables developers to instrument their TypeScript/JavaScript applications with automatic tracing and metrics collection, forwarding telemetry data to OpenLIT or any OTLP-compatible backend.
Key Characteristics:
| Attribute | Value |
|---|---|
| Package Name | openlit |
| Installation | npm install openlit |
| Entry Point | sdk/typescript/src/index.ts |
| Primary Dependency | OpenTelemetry SDK |
| Transport Protocol | OTLP (OpenTelemetry Protocol) |
Sources: sdk/typescript/package.json
Core Architecture
The SDK follows a modular architecture with clear separation of concerns:
graph TD
A[Application Code] --> B[openlit.init]
B --> C[Config Module]
C --> D[Instrumentation Module]
D --> E[Guard Module]
E --> F[OTLP Exporter]
F --> G[OpenLIT Backend / OTEL Collector]
C --> C1[OTLP Endpoint]
C --> C2[Custom Attributes]
C --> C3[Service Name]
D --> D1[LLM Instrumentation]
D --> D2[Vector DB Instrumentation]
D --> D3[Framework Hooks]
Entry Point Module
The main entry point (index.ts) exposes a simple initialization API:
import openlit from 'openlit';
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
Sources: sdk/typescript/src/index.ts
Configuration Module
The config module (config.ts) handles SDK configuration including:
| Parameter | Type | Default | Description |
|---|---|---|---|
otlpEndpoint | string | Environment variable OTEL_EXPORTER_OTLP_ENDPOINT | OTLP-compatible endpoint URL |
serviceName | string | Application-defined | Name of the instrumented service |
resourceAttributes | Record<string, string> | {} | Custom resource attributes |
Sources: sdk/typescript/src/config.ts
Instrumentation Subsystem
The instrumentation module (instrumentation/index.ts) provides automatic observability for AI workloads:
Supported Integrations
| Category | Instrumented Components |
|---|---|
| LLM Providers | OpenAI, Anthropic, Azure OpenAI, Google AI, AWS Bedrock, Cohere, Ollama |
| Vector Databases | ChromaDB, Pinecone, Weaviate, Qdrant, Milvus, PGVector |
| Frameworks | LangChain, LlamaIndex, LangFlow, AutoGen |
Sources: sdk/typescript/src/instrumentation/index.ts
Tracing Capabilities
The SDK automatically captures:
- LLM Request/Response traces with prompt and completion data
- Token usage metrics (prompt tokens, completion tokens, total tokens)
- Latency measurements for API calls
- Embeddings generation traces with vector dimensions
- Tool/function calling traces with parameters and results
Guard Module
The guard module (guard/index.ts) provides safety and compliance features:
import openlit from 'openlit';
// Initialize with guardrails
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
Guard capabilities include:
- Input/output validation for LLM interactions
- Content filtering hooks
- Rate limiting enforcement
- Custom rule application
Sources: sdk/typescript/src/guard/index.ts
Initialization Flow
sequenceDiagram
participant App as Application
participant SDK as OpenLIT SDK
participant Config as Config Module
participant Inst as Instrumentation
participant OTEL as OTEL SDK
App->>SDK: openlit.init(options)
SDK->>Config: Validate & merge config
Config->>Config: Check env vars
Config-->>SDK: Resolved config
SDK->>OTEL: Initialize OTEL SDK
SDK->>Inst: Register instrumentations
Inst->>OTEL: Add span processors
OTEL-->>SDK: Ready
SDK-->>App: Initialization complete
Environment Variable Support
The SDK supports configuration via environment variables as an alternative to programmatic configuration:
| Environment Variable | Description |
|---|---|
OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint URL |
OTEL_SERVICE_NAME | Service name for traces |
Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:42
Usage Patterns
Basic Initialization
import openlit from 'openlit';
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
OpenAI Integration Example
import OpenAI from 'openai';
import openlit from 'openlit';
openlit.init({ otlpEndpoint: "http://127.0.0.1:4318" });
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
const chatCompletion = await client.chat.completions.create({
messages: [{ role: 'user', content: 'What is LLM Observability?' }],
model: 'gpt-3.5-turbo',
});
Sources: src/client/src/components/(playground)/getting-started/tracing/index.tsx:28-39
Package Dependencies
Key dependencies in package.json:
{
"dependencies": {
"@opentelemetry/sdk-node": "^0.50.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.50.0",
"@opentelemetry/resources": "^1.22.0",
"@opentelemetry/semantic-conventions": "^1.22.0"
}
}
Sources: sdk/typescript/package.json
Design Principles
- Zero-Configuration Defaults: The SDK works out-of-the-box with sensible defaults
- OpenTelemetry Native: Built on OTEL SDK for vendor-agnostic telemetry export
- Automatic Instrumentation: No code changes required for supported libraries
- Environment Variable Fallback: Configuration can be entirely environment-based
- Minimal Footprint: Instrumentation adds minimal latency overhead
Summary
The OpenLIT TypeScript SDK architecture provides a developer-friendly interface for adding observability to GenAI applications. By abstracting OpenTelemetry complexity and providing automatic instrumentation for popular LLM providers and vector databases, it enables comprehensive telemetry collection with minimal configuration. The SDK exports all data via OTLP, ensuring compatibility with OpenLIT's backend as well as any other OTEL-compatible observability platform.
Sources: [sdk/typescript/package.json](https://github.com/openlit/openlit/blob/main/sdk/typescript/package.json)
Go SDK Architecture
Related topics: Python SDK Architecture, TypeScript SDK Architecture
Go SDK Architecture
Overview
The OpenLIT Go SDK is a lightweight instrumentation library that enables observability for GenAI applications built with Go. It provides automatic tracing and metrics collection for LLM calls, supporting OpenAI and Anthropic providers out of the box. The SDK follows OpenTelemetry-native principles, allowing seamless integration with the OpenLIT observability platform.
Core Components
The Go SDK is organized into several key packages:
| Component | Purpose |
|---|---|
openlit | Core initialization, configuration, and shutdown |
openlit.Config | Central configuration struct for SDK settings |
openlit.EvaluateRule() | Standalone rule engine evaluation function |
instrumentation/openai | OpenAI client instrumentation |
instrumentation/anthropic | Anthropic client instrumentation |
Initialization Flow
The SDK must be initialized before instrumenting any LLM clients. The initialization process configures the OTLP endpoint and establishes the connection to the OpenLIT backend.
err := openlit.Init(openlit.Config{
OtlpEndpoint: "http://127.0.0.1:4318",
Environment: "production",
ApplicationName: "my-go-app",
})
if err != nil {
log.Fatalf("Failed to initialize OpenLIT: %v", err)
}
defer openlit.Shutdown(context.Background())
Sources: sdk/go/README.md
Configuration Options
The openlit.Config struct provides the following configuration parameters:
| Parameter | Type | Description |
|---|---|---|
OtlpEndpoint | string | OTLP collector endpoint (default: http://127.0.0.1:4318) |
Environment | string | Deployment environment name |
ApplicationName | string | Application identifier for grouping traces |
PricingInfo | map[string]ModelPricing | Custom pricing configuration per model |
OtlpHeaders | map[string]string | Custom headers for OTLP exports |
Custom Pricing Configuration
The SDK supports custom pricing information for models that require non-default cost calculations:
config := openlit.Config{
PricingInfo: map[string]openlit.ModelPricing{
"gpt-4-custom": {
InputCostPerToken: 0.00003,
OutputCostPerToken: 0.00006,
},
},
}
Sources: sdk/go/README.md
Custom Headers for OTLP Exports
Authentication and custom headers can be added to OTLP exports:
config := openlit.Config{
OtlpHeaders: map[string]string{
"Authorization": "Bearer token",
"X-Custom-Header": "value",
},
}
Sources: sdk/go/README.md
Instrumentation Architecture
The SDK uses a decorator/wrapper pattern for instrumenting LLM clients. This approach allows automatic tracing without modifying the original client interface.
graph TD
A[User Application] --> B[Instrumented Client]
B --> C[Original SDK Client]
B --> D[OpenLIT Tracer]
D --> E[OTLP Exporter]
E --> F[OpenLIT Backend]
C --> G[LLM Provider API]
G --> C
OpenAI Instrumentation
The OpenAI instrumentation wraps the sashabaranov/go-openai client:
import (
"github.com/openlit/openlit/sdk/go/instrumentation/openai"
openai_sdk "github.com/sashabaranov/go-openai"
)
// Create and instrument OpenAI client
client := openai_sdk.NewClient("your-api-key")
instrumentedClient := openai.Instrument(client)
// Use as normal - automatically traced!
resp, err := instrumentedClient.CreateChatCompletion(ctx, openai_sdk.ChatCompletionRequest{
Model: openai_sdk.GPT4,
Messages: []openai_sdk.ChatCompletionMessage{
{
Role: openai_sdk.ChatMessageRoleUser,
Content: "Hello!",
},
},
})
Sources: sdk/go/README.md
Anthropic Instrumentation
The Anthropic instrumentation follows the same pattern:
import (
"github.com/openlit/openlit/sdk/go/instrumentation/anthropic"
)
// Create and instrument Anthropic client
client := anthropic.NewClient("your-api-key")
instrumentedClient := anthropic.Instrument(client)
Rule Engine Integration
The SDK provides a standalone rule evaluation function that does not require initialization:
// EvaluateRule does NOT require openlit.Init()
rules, err := openlit.EvaluateRule(ctx, &openlit.EvaluateRuleRequest{
TraceAttributes: attributes,
})
This function evaluates trace attributes against the OpenLIT Rule Engine to retrieve matching rules and associated entities including contexts, prompts, and evaluation configurations.
Sources: sdk/go/README.md
Integration with OpenLIT Dashboard
The complete observability workflow involves:
- Start the OpenLIT stack (deploy using Docker Compose):
docker compose up -d
- Configure the SDK (initialize the Go SDK with the OTLP endpoint):
openlit.Init(openlit.Config{
    OtlpEndpoint: "http://localhost:4318",
})
- View traces: access the dashboard at http://localhost:3000
Sources: sdk/go/README.md
Example Projects
The SDK includes complete working examples in the examples/ directory:
| Example | Path |
|---|---|
| OpenAI Chat Completion | examples/openai/chat/ |
| OpenAI Streaming | examples/openai/streaming/ |
| Anthropic Messages | examples/anthropic/messages/ |
| Anthropic Streaming | examples/anthropic/streaming/ |
Module Dependencies
The Go SDK depends on core OpenTelemetry packages for trace export and propagation:
- OpenTelemetry OTLP exporter
- OpenTelemetry trace propagation
- Context propagation utilities
Sources: sdk/go/go.mod
Sources: [sdk/go/README.md](https://github.com/openlit/openlit/blob/main/sdk/go/README.md)
LLM and Framework Integrations
Related topics: Python SDK Architecture, TypeScript SDK Architecture
LLM and Framework Integrations
OpenLIT provides comprehensive instrumentation for a wide range of LLMs and AI frameworks, enabling automatic OpenTelemetry-native observability for GenAI applications. This page documents the architecture, supported integrations, and implementation patterns.
Overview
OpenLIT's instrumentation layer wraps SDK calls from various LLM providers and AI frameworks to automatically capture traces and metrics without requiring manual instrumentation code.
Supported Integrations
| Category | Integration | Python SDK | TypeScript SDK | Go SDK |
|---|---|---|---|---|
| LLM Providers | OpenAI | ✅ | ✅ | ✅ |
| LLM Providers | Anthropic | ✅ | ✅ | ✅ |
| LLM Providers | Azure OpenAI | ✅ | ✅ | ✅ |
| LLM Providers | Vertex AI | ✅ | ✅ | ✅ |
| LLM Providers | Mistral AI | ✅ | ✅ | ✅ |
| LLM Providers | Cohere | ✅ | ✅ | ✅ |
| LLM Providers | HuggingFace | ✅ | ✅ | ✅ |
| AI Frameworks | LangChain | ✅ | ✅ | - |
| AI Frameworks | LlamaIndex | ✅ | - | - |
| AI Frameworks | CrewAI | ✅ | - | - |
| AI Frameworks | LangGraph | ✅ | - | - |
| AI Frameworks | Claude Agent SDK | ✅ | - | - |
| Vector Stores | Pinecone | ✅ | - | - |
| Vector Stores | Chroma | ✅ | - | - |
| Vector Stores | Qdrant | ✅ | - | - |
| Vector Stores | Weaviate | ✅ | - | - |
Sources: sdk/python/README.md
Architecture
Instrumentation Pattern
All instrumentations follow a consistent pattern based on OpenTelemetry's BaseInstrumentor class:
graph TD
A[Application Code] --> B[Instrumented SDK]
B --> C[Wrapper Function]
C --> D[OpenTelemetry Tracer]
C --> E[Metrics Recorder]
D --> F[OTLP Exporter]
E --> F
F --> G[OpenLIT Backend]
Core Components
| Component | Purpose | Location |
|---|---|---|
BaseInstrumentor | Base class for all instrumentors | opentelemetry.instrumentation.instrumentor |
wrap_function_wrapper | Wraps SDK functions dynamically | wrapt library |
OpenlitConfig | Singleton configuration management | sdk/python/src/openlit/_config.py |
| Semantic Conventions | Standardized attribute naming | openlit.semcov module |
Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:17-21
Python SDK Instrumentation
Instrumentor Base Class
All Python SDK instrumentors extend BaseInstrumentor and implement two required methods:
class ClaudeAgentSDKInstrumentor(BaseInstrumentor):
"""OTel GenAI semantic convention compliant instrumentor for Claude Agent SDK."""
def instrumentation_dependencies(self) -> Collection[str]:
return _instruments # e.g., ("claude-agent-sdk >= 0.1.0",)
def _instrument(self, **kwargs):
# Initialize tracer, config, and wrap functions
Sources: sdk/python/src/openlit/instrumentation/claude_agent_sdk/__init__.py:26-35
Initialization Parameters
When calling openlit.init(), the following parameters are passed to all instrumentors:
| Parameter | Type | Description | Default |
|---|---|---|---|
environment | str | Deployment environment name | "default" |
application_name | str | Application identifier | "default" |
pricing_info | Dict[str, ModelPricing] | Custom model pricing | {} |
capture_message_content | bool | Enable/disable content tracing | True |
disable_metrics | bool | Disable metrics collection | None |
otlp_endpoint | str | OTLP exporter endpoint | Configured endpoint |
Sources: sdk/python/src/openlit/_config.py:20-35
OpenlitConfig Singleton
The OpenlitConfig class manages centralized configuration:
class OpenlitConfig:
"""Singleton configuration class for OpenLIT."""
_instance = None
# Class-level attributes
environment = "default"
application_name = "default"
pricing_info = {}
metrics_dict = {}
otlp_endpoint = None
otlp_headers = None
disable_batch = False
capture_message_content = True
Sources: sdk/python/src/openlit/_config.py:18-42
LlamaIndex Integration
Operation Type Mapping
The LlamaIndex instrumentation uses a semantic convention-based operation mapping system:
graph LR
A[Document Operations] --> B[RETRIEVE]
A --> C[FRAMEWORK]
D[Index Operations] --> C
E[Query Operations] --> B
F[Retriever Operations] --> B
Supported Operations
| Operation | Semantic Convention | Category |
|---|---|---|
document_load | RETRIEVE | Document Loading |
document_transform | FRAMEWORK | Document Processing |
document_split | FRAMEWORK | Document Processing |
index_construct | FRAMEWORK | Index Management |
index_insert | FRAMEWORK | Index Management |
query_engine_query | RETRIEVE | Query Engine |
retriever_retrieve | RETRIEVE | Retrieval |
Sources: sdk/python/src/openlit/instrumentation/llamaindex/utils.py:1-30
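The mapping in the table above can be encoded as a simple lookup table. The dict and helper below are illustrative, not the instrumentation's actual data structure:

```python
# Operation -> semantic-convention category, mirroring the table above.
OPERATION_MAP = {
    "document_load": "RETRIEVE",
    "document_transform": "FRAMEWORK",
    "document_split": "FRAMEWORK",
    "index_construct": "FRAMEWORK",
    "index_insert": "FRAMEWORK",
    "query_engine_query": "RETRIEVE",
    "retriever_retrieve": "RETRIEVE",
}

def operation_type(operation):
    # Unknown operations default to FRAMEWORK in this sketch.
    return OPERATION_MAP.get(operation, "FRAMEWORK")
```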
Helper Functions
Building Tool Definitions
The __helpers.py module provides utilities for extracting tool definitions from chat requests:
def build_tool_definitions(tools):
"""
Extract tool/function definitions from a chat request's ``tools`` parameter.
Supports both OpenAI-style schema and flat schema formats.
"""
Supported formats:
| Format | Structure |
|---|---|
| OpenAI-style | {"type": "function", "function": {...}} |
| Flat (dict) | {"name": ..., "description": ..., "parameters": ...} |
| Flat (object) | Object with name, description, input_schema attributes |
Sources: sdk/python/src/openlit/__helpers.py:1-40
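A hedged sketch of the normalization this helper performs, accepting the formats from the table and returning uniform dicts. Field handling beyond what the table states is an assumption:

```python
def normalize_tools(tools):
    # Illustrative normalization of tool definitions into a uniform shape.
    defs = []
    for tool in tools:
        if isinstance(tool, dict) and tool.get("type") == "function":
            # OpenAI-style: {"type": "function", "function": {...}}
            fn = tool.get("function", {})
            defs.append({
                "name": fn.get("name"),
                "description": fn.get("description"),
                "parameters": fn.get("parameters"),
            })
        elif isinstance(tool, dict):
            # Flat dict: {"name": ..., "description": ..., "parameters": ...}
            defs.append({
                "name": tool.get("name"),
                "description": tool.get("description"),
                "parameters": tool.get("parameters"),
            })
        else:
            # Flat object with name, description, input_schema attributes
            defs.append({
                "name": getattr(tool, "name", None),
                "description": getattr(tool, "description", None),
                "parameters": getattr(tool, "input_schema", None),
            })
    return defs
```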
System Instructions Builder
Extracts and formats system instructions from various input formats:
def build_system_instructions(instructions, **kwargs):
"""Builds system instructions from various input formats."""
Guardrails Integration
OpenLIT includes a production-grade guardrails system:
Available Guards
| Guard Class | Purpose |
|---|---|
PII | Detect and redact Personally Identifiable Information |
PromptInjection | Detect prompt injection attacks |
SensitiveTopic | Filter sensitive topics |
TopicRestriction | Restrict to allowed topics |
Moderation | Content moderation |
Schema | Output schema validation |
Custom | Custom guard implementation |
Sources: sdk/python/src/openlit/guard/__init__.py:1-30
Guard Architecture
graph TD
A[User Input] --> B[Pipeline]
B --> C[Guard 1: PII]
C --> D[Guard 2: PromptInjection]
D --> E[Guard N: Custom]
E --> F[GuardResult]
C -.->|Denied| G[GuardDeniedError]
D -.->|Timeout| H[GuardTimeoutError]
Usage Example
import openlit
# Initialize with guards
openlit.init(guards=[openlit.PII(action="redact")])
# Or with direct imports
from openlit import PII, PromptInjection, Moderation
guards = [PII(), PromptInjection(), Moderation()]
openlit.init(guards=guards)
TypeScript SDK Instrumentation
Wrapper Pattern
The TypeScript SDK uses a similar wrapping pattern:
// Wrapped in wrapper.ts for each integration
export function wrapOpenAI() {
// Wrap OpenAI SDK methods
}
Sources: sdk/typescript/src/instrumentation/openai/wrapper.ts
Initialization
import openlit from 'openlit';
openlit.init({
otlpEndpoint: "http://127.0.0.1:4318"
});
Configuration Reference
Environment Variables
| Variable | Description | Example |
|---|---|---|
OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint URL | http://127.0.0.1:4318 |
OTEL_EXPORTER_OTLP_HEADERS | Authentication headers | Authorization=Bearer token |
SDK Configuration Options
import openlit
openlit.init(
otlp_endpoint="http://127.0.0.1:4318",
otlp_headers={"Authorization": "Bearer token"},
environment="production",
application_name="my-llm-app",
pricing_info={
"gpt-4": {"input_cost_per_token": 0.00003, "output_cost_per_token": 0.00006}
},
capture_message_content=True
)
Best Practices
1. Instrument Before Usage
Always initialize OpenLIT before importing instrumented SDKs:
# Correct order
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
from openai import OpenAI # Now automatically instrumented
2. Custom Pricing
Define custom pricing for accurate cost tracking:
openlit.init(
pricing_info={
"custom-model": {
"input_cost_per_token": 0.00001,
"output_cost_per_token": 0.00002
}
}
)
3. Selective Content Capture
Disable content capture for sensitive data:
openlit.init(
capture_message_content=False # Won't trace message content
)
See Also
Sources: [sdk/python/README.md](https://github.com/openlit/openlit/blob/main/sdk/python/README.md)
OpenLIT Controller
Related topics: GPU Collector
OpenLIT Controller
The OpenLIT Controller is a standalone, lightweight binary agent designed to automatically instrument Python-based LLM applications with OpenLIT's observability SDK. It operates as a background service that runs alongside your application, providing seamless OpenTelemetry-native tracing and metrics collection without requiring code modifications.
Overview
The Controller serves as an autonomous agent that:
- Discovers Python applications running in various environments (bare metal, containers, Kubernetes)
- Injects the OpenLIT Python SDK into target applications at runtime
- Manages the lifecycle of instrumentation (enable, disable, status monitoring)
- Reports service metadata back to the OpenLIT platform
Sources: src/client/src/lib/platform/controller/features/agent.ts:1-60
Architecture
graph TD
A[OpenLIT Platform] -->|Manage & Monitor| B[OpenLIT Controller]
B -->|Discover Services| C[Scanner Module]
B -->|Instrument Apps| D[Engine Module]
D -->|Python SDK Injection| E[Python Runtime]
E -->|Traces & Metrics| F[OpenTelemetry Collector]
G[Kubernetes Pod] -->|Contains| H[Python Application]
H -->|Auto-instrumented by| D
I[Linux Host] -->|Systemd Service| B
Core Components
| Component | Location | Responsibility |
|---|---|---|
| cmd/controller | cmd/controller/main.go | Entry point, configuration, signal handling |
| Server | internal/server/handlers.go | HTTP API for platform communication |
| Engine | internal/engine/engine.go | Orchestrates instrumentation operations |
| Lifecycle | internal/engine/lifecycle.go | Manages enable/disable transitions |
| Python SDK Runtime | internal/engine/python_sdk_runtime.go | Runtime injection of Python SDK |
| Scanner | internal/scanner/scanner.go | Discovers Python applications |
Sources: src/client/src/lib/platform/controller/features/agent.ts:1-25
Supported Environments
The Controller supports multiple deployment scenarios:
| Environment | Installation Method | Status |
|---|---|---|
| Linux (systemd) | Direct binary download + systemd service | ✅ Primary |
| Docker | Privileged container with PID host mode | ✅ Supported |
| Kubernetes | DaemonSet or sidecar pattern | ✅ Supported |
Sources: src/client/src/app/(playground)/agents/no-controller.tsx:1-50
Installation
Linux (systemd)
Download the latest binary and configure as a systemd service:
curl -fsSL https://github.com/openlit/openlit/releases/latest/download/openlit-controller-linux-amd64 \
-o /usr/local/bin/openlit-controller
chmod +x /usr/local/bin/openlit-controller
# Create systemd service
cat > /etc/systemd/system/openlit-controller.service << 'EOF'
[Unit]
Description=OpenLIT Controller
After=network.target
[Service]
Environment="OPENLIT_URL=http://<openlit-host>:3000"
Environment="OTEL_EXPORTER_OTLP_ENDPOINT=http://<openlit-host>:4318"
Environment="OPENLIT_API_KEY=<your-api-key>"
ExecStart=/usr/local/bin/openlit-controller
Restart=always
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now openlit-controller
Sources: src/client/src/app/(playground)/agents/no-controller.tsx:10-35
Docker
docker run -d --privileged --pid=host \
-e OPENLIT_URL=http://openlit:3000 \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://openlit:4318 \
openlit-controller
Configuration
The Controller is configured via environment variables:
| Environment Variable | Description | Required |
|---|---|---|
| OPENLIT_URL | URL of the OpenLIT platform | Yes |
| OPENLIT_API_KEY | API key for authentication | No |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint for telemetry | Yes |
Sources: src/client/src/app/(playground)/agents/no-controller.tsx:15-25
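The required/optional split above can be enforced with a small config loader. This is a sketch under stated assumptions: the `ControllerConfig` shape and the fail-fast behavior are illustrative, not taken from the Controller's Go source.

```typescript
// Minimal sketch of reading the Controller's environment-based configuration.
// Variable names come from the configuration table; everything else is assumed.
interface ControllerConfig {
  openlitUrl: string;
  otlpEndpoint: string;
  apiKey?: string;
}

function loadConfig(env: Record<string, string | undefined>): ControllerConfig {
  const openlitUrl = env.OPENLIT_URL;
  const otlpEndpoint = env.OTEL_EXPORTER_OTLP_ENDPOINT;
  // Both of these are marked "Required: Yes" in the table, so fail fast.
  if (!openlitUrl || !otlpEndpoint) {
    throw new Error("OPENLIT_URL and OTEL_EXPORTER_OTLP_ENDPOINT are required");
  }
  // OPENLIT_API_KEY is optional per the configuration table.
  return { openlitUrl, otlpEndpoint, apiKey: env.OPENLIT_API_KEY };
}
```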
Agent Operations
The Controller exposes three primary operations:
Enable Instrumentation
Activates OpenLIT SDK injection for target Python applications.
{
"operation": "enable",
"serviceId": "string"
}
Disable Instrumentation
Deactivates SDK injection and removes runtime hooks.
{
"operation": "disable",
"serviceId": "string"
}
Status Check
Retrieves current instrumentation state for a service.
{
"operation": "status",
"serviceId": "string"
}
Sources: src/client/src/lib/platform/controller/features/agent.ts:25-45
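The three operation payloads above share one shape, so they can be built with a single typed helper. A minimal sketch, assuming only the fields shown in the JSON examples; the transport and route the platform uses to deliver the payload are not shown in the source.

```typescript
// Union mirroring the three documented operations.
type AgentOperation = "enable" | "disable" | "status";

interface AgentPayload {
  operation: AgentOperation;
  serviceId: string;
}

function buildPayload(operation: AgentOperation, serviceId: string): AgentPayload {
  // Every documented operation targets a specific service.
  if (!serviceId) {
    throw new Error("serviceId is required for every agent operation");
  }
  return { operation, serviceId };
}
```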
Service State Model
stateDiagram-v2
[*] --> disabled: Initial State
disabled --> enabled: enable operation
enabled --> disabled: disable operation
enabled --> manual: explicit override
manual --> enabled: resume auto
disabled --> manual: partial config
manual --> disabled: full removal
State Definitions
| State | Description |
|---|---|
| enabled | SDK actively injecting traces |
| disabled | No instrumentation active |
| manual | User-controlled state (not auto-managed) |
| automatable | Service eligible for auto-instrumentation |
Sources: src/client/src/lib/platform/controller/features/agent.ts:15-30
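The state diagram above can be sketched as a transition table. The three states come from the diagram; the event names paraphrase the transition labels ("explicit override", "resume auto", etc.) and are not literal API values.

```typescript
type ServiceState = "disabled" | "enabled" | "manual";
type StateEvent = "enable" | "disable" | "override" | "resumeAuto" | "partialConfig" | "fullRemoval";

// One row per state; only the transitions drawn in the diagram are legal.
const transitions: Record<ServiceState, Partial<Record<StateEvent, ServiceState>>> = {
  disabled: { enable: "enabled", partialConfig: "manual" },
  enabled: { disable: "disabled", override: "manual" },
  manual: { resumeAuto: "enabled", fullRemoval: "disabled" },
};

function next(state: ServiceState, event: StateEvent): ServiceState {
  const target = transitions[state][event];
  if (!target) {
    throw new Error(`Invalid transition: ${event} from ${state}`);
  }
  return target;
}
```

Rejecting undeclared transitions keeps a `manual` service from being silently re-enabled by an automated pass, which matches the "not auto-managed" semantics in the table.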
Python SDK Runtime Integration
The Controller's Python SDK Runtime module handles the actual SDK injection:
- Process Discovery: Identifies Python processes running user applications
- Runtime Injection: Injects OpenLIT SDK using Python's import hooks
- Configuration Propagation: Sets OTLP endpoint and API keys via environment
- Health Monitoring: Ensures instrumentation remains active
The runtime is specifically optimized for Python-only services:
supported: service.language_runtime === "python"
Sources: src/client/src/lib/platform/controller/features/agent.ts:20
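The eligibility check above reads naturally as a predicate over the discovered service record. The `Service` shape here is an assumption reduced to the one field the check uses.

```typescript
interface Service {
  language_runtime: string;
}

function isInstrumentable(service: Service): boolean {
  // The runtime injects only the Python SDK, so only Python services qualify.
  return service.language_runtime === "python";
}
```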
Kubernetes Integration
When running in Kubernetes, the Controller respects workload metadata:
| Attribute | Description |
|---|---|
k8s.workload.kind | Workload type (Deployment, StatefulSet, etc.) |
service.service_name | Name of the service |
service.namespace | Kubernetes namespace |
Naked Pod Handling
The Controller automatically detects and handles "naked pods" (pods without a workload controller):
const isNakedPod = mode === "kubernetes" && (!workloadKind || workloadKind === "Pod");
Sources: src/client/src/lib/platform/controller/features/agent.ts:8-12
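Wrapped as a function, the naked-pod check from the snippet can be applied to scanner results. The field names mirror the snippet; treating a missing `workloadKind` the same as an explicit `"Pod"` kind is exactly the disjunction in the original condition.

```typescript
function isNakedPod(mode: string, workloadKind?: string): boolean {
  // Only meaningful in Kubernetes mode. A pod with no owning workload
  // (or kind "Pod") has no Deployment/StatefulSet controller managing it.
  return mode === "kubernetes" && (!workloadKind || workloadKind === "Pod");
}
```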
Validation
Operations are validated before execution:
validatePayload(operation: string, _payload: Record<string, unknown>) {
if (
operation !== "enable" &&
operation !== "disable" &&
operation !== "status"
) {
return `Unknown operation "${operation}" for feature "${FEATURE}".
Expected "enable", "disable", or "status".`;
}
return null;
}
Sources: src/client/src/lib/platform/controller/features/agent.ts:28-40
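A self-contained version of the validator, with example calls, shows the contract: `null` means the operation is accepted, a string is the error to surface. `FEATURE` is defined elsewhere in the source file; `"agent"` here is an assumption.

```typescript
const FEATURE = "agent"; // assumed value; defined in agent.ts in the source

function validatePayload(operation: string, _payload: Record<string, unknown>): string | null {
  if (operation !== "enable" && operation !== "disable" && operation !== "status") {
    // Returning a message (rather than throwing) lets the caller decide
    // how to report the rejected operation.
    return `Unknown operation "${operation}" for feature "${FEATURE}". Expected "enable", "disable", or "status".`;
  }
  return null; // null signals a valid operation
}
```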
Summary
The OpenLIT Controller is a critical component for zero-code instrumentation of Python LLM applications. It provides:
- Automated Discovery: Scans and identifies Python services automatically
- Runtime Injection: Injects observability SDK without application restarts
- Multi-Platform Support: Works on Linux, Docker, and Kubernetes
- Platform Integration: Connects to OpenLIT platform for centralized management
- Lifecycle Management: Full control over enable/disable operations
Sources: src/client/src/lib/platform/controller/features/agent.ts:1-60
GPU Collector
Related topics: OpenLIT Controller, System Architecture
The OpenTelemetry GPU Collector (also referred to as opentelemetry-gpu-collector) is a specialized telemetry agent built and maintained by OpenLIT. It provides real-time GPU hardware telemetry collection for NVIDIA, AMD, and Intel GPUs, emitting metrics in compliance with the OpenTelemetry semantic conventions under the hw.gpu.* namespace.
Overview
The GPU Collector serves as a standalone service that monitors GPU hardware metrics and exports them via the OTLP protocol to any OpenTelemetry-compatible backend, including the OpenLIT observability platform.
Key Responsibilities:
- Collect GPU hardware telemetry from NVIDIA GPUs via NVML (NVIDIA Management Library)
- Collect GPU hardware telemetry from AMD and Intel GPUs via sysfs/hwmon interfaces
- Perform eBPF-based CUDA kernel tracing for detailed operation insights
- Emit metrics following OpenTelemetry semantic conventions (hw.gpu.*)
- Export metrics over OTLP for integration with observability platforms
License: Apache-2.0
Sources: [opentelemetry-gpu-collector/README.md](https://github.com/openlit/openlit/blob/main/opentelemetry-gpu-collector/README.md)
Doramagic Pitfall Log
Source-linked risks stay visible on the manual page so the preview does not read like a recommendation.
First-time setup may fail or require extra isolation and rollback planning.
Doramagic extracted 15 source-linked risk signals. Review them before installing or handing real data to the project.
1. Installation risk: Integration: Governance and compliance signals for LLM observability
- Severity: medium
- Finding: Installation risk is backed by a source signal: Integration: Governance and compliance signals for LLM observability. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/1106
2. Installation risk: Proposal: gen_ai.agent.threat_detected span event helper for OTel-shaped detection observability
- Severity: medium
- Finding: Installation risk is backed by a source signal: Proposal: gen_ai.agent.threat_detected span event helper for OTel-shaped detection observability. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/1186
3. Installation risk: [Bug]: Docker Image doesn't run on windows 64bit
- Severity: medium
- Finding: Installation risk is backed by a source signal: [Bug]: Docker Image doesn't run on windows 64bit. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/786
4. Installation risk: openlit-1.19.0
- Severity: medium
- Finding: Installation risk is backed by a source signal: openlit-1.19.0. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/openlit-1.19.0
5. Configuration risk: controller-0.2.0
- Severity: medium
- Finding: Configuration risk is backed by a source signal: controller-0.2.0. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/controller-0.2.0
6. Configuration risk: openlit-1.20.0
- Severity: medium
- Finding: Configuration risk is backed by a source signal: openlit-1.20.0. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/openlit-1.20.0
7. Capability assumption: README/documentation is current enough for a first validation pass.
- Severity: medium
- Finding: README/documentation is current enough for a first validation pass.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: capability.assumptions | github_repo:747319327 | https://github.com/openlit/openlit | README/documentation is current enough for a first validation pass.
8. Maintenance risk: Maintainer activity is unknown
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: evidence.maintainer_signals | github_repo:747319327 | https://github.com/openlit/openlit | last_activity_observed missing
9. Security or permission risk: no_demo
- Severity: medium
- Finding: no_demo
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: downstream_validation.risk_items | github_repo:747319327 | https://github.com/openlit/openlit | no_demo; severity=medium
10. Security or permission risk: no_demo
- Severity: medium
- Finding: no_demo
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: risks.scoring_risks | github_repo:747319327 | https://github.com/openlit/openlit | no_demo; severity=medium
11. Security or permission risk: Bug: OpenAI API key in operator example test-application is not using OPENAI_API_KEY env var
- Severity: medium
- Finding: Security or permission risk is backed by a source signal: Bug: OpenAI API key in operator example test-application is not using OPENAI_API_KEY env var. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/issues/1135
12. Security or permission risk: openlit-1.19.1
- Severity: medium
- Finding: Security or permission risk is backed by a source signal: openlit-1.19.1. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/openlit/openlit/releases/tag/openlit-1.19.1
Source: Doramagic discovery, validation, and Project Pack records
Community Discussion Evidence
These external discussion links are review inputs, not standalone proof that the project is production-ready.
Open the linked issues or discussions before treating the pack as ready for your environment.
Doramagic exposes project-level community discussion separately from official documentation. Review these links before using openlit with real data or production workflows.
- [Proposal: gen_ai.agent.threat_detected span event helper for OTel-shaped detection observability](https://github.com/openlit/openlit/issues/1186) - github / github_issue
- [[Bug]: Docker Image doesn't run on windows 64bit](https://github.com/openlit/openlit/issues/786) - github / github_issue
- [Bug: OpenAI API key in operator example test-application is not using OPENAI_API_KEY env var](https://github.com/openlit/openlit/issues/1135) - github / github_issue
- [Integration: Governance and compliance signals for LLM observability](https://github.com/openlit/openlit/issues/1106) - github / github_issue
- openlit-1.20.0 - github / github_release
- controller-0.2.0 - github / github_release
- openlit-1.19.1 - github / github_release
- controller-0.1.0 - github / github_release
- openlit-1.19.0 - github / github_release
- py-1.41.2 - github / github_release
- README/documentation is current enough for a first validation pass. - GitHub / issue
Source: Project Pack community evidence and pitfall evidence