Doramagic Project Pack · Human Manual
Project Introduction
Related topics: Technology Stack, Architecture Overview
Sim is an AI-powered workflow automation platform that enables users to build, deploy, and manage intelligent automation pipelines. The platform combines visual workflow design with AI capabilities, allowing teams to create sophisticated automation workflows without extensive coding knowledge.
Overview
Sim Studio provides a modern approach to workflow automation by integrating large language models (LLMs) directly into the automation pipeline. The platform supports both cloud-hosted and self-hosted deployment options, giving organizations flexibility in how they manage their automation infrastructure.
The project is structured as a monorepo containing multiple packages:
| Package | Purpose |
|---|---|
| apps/sim | Main web application |
| packages/python-sdk | Python SDK for programmatic access |
| packages/ts-sdk | TypeScript SDK for programmatic access |
| scripts | Automation and utility scripts |
Sources: README.md:1-20
Key Features
AI-Native Automation
Sim leverages AI capabilities throughout the platform, enabling intelligent decision-making within workflows. The system supports integration with various AI providers including Ollama and vLLM for local model deployment.
Sources: README.md:45-48
Multiple SDK Support
The platform provides official SDKs for both Python and TypeScript ecosystems, enabling developers to:
- Execute workflows programmatically
- Manage workflow deployments
- Monitor execution status and results
- Handle async job execution with polling
Sources: packages/python-sdk/README.md:1-50 Sources: packages/ts-sdk/README.md:1-40
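The last bullet, async job execution with polling, can be sketched as a simple poll loop. This is an illustrative sketch only: the client object and its get_job_status method are assumed names, not the SDK's documented API.

```python
import time

def poll_until_complete(client, job_id, interval=1.0, timeout=60.0):
    """Poll a job until it leaves the 'running' state or the timeout expires.

    `client.get_job_status` is a hypothetical method standing in for
    whatever status call the SDK actually exposes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_job_status(job_id)
        if status["state"] != "running":
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```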
Extensible Architecture
Sim includes support for various webhook providers and integrations:
| Provider | Integration Type |
|---|---|
| Webflow | CMS Webhook |
| Typeform | Form Response |
| Gong | Call Recording |
| Vercel | Deployment Events |
| Ashby | ATS Events |
| Grain | Meeting Recording |
| Salesforce | CRM Events |
Sources: apps/sim/lib/webhooks/providers/webflow.ts:1-50 Sources: apps/sim/lib/webhooks/providers/typeform.ts:1-40
Architecture Overview
graph TD
A[Client Application] --> B[Next.js Web App]
B --> C[Workflow Engine]
C --> D[Sandbox Executor]
C --> E[Webhook System]
D --> F[AI Providers]
E --> G[External Services]
F --> H[Ollama / vLLM]
F --> I[Cloud LLM APIs]
Core Components
#### Web Application (apps/sim)
The main React-based web application built with Next.js that provides:
- Visual workflow editor
- Block-based workflow construction
- Trigger configuration
- Execution monitoring
- Workspace management
The application uses TypeScript with strict type checking enabled via tsc --noEmit.
Sources: apps/sim/package.json:1-30
#### Workflow Engine
The workflow engine handles:
- Workflow parsing and validation
- Execution scheduling
- State management
- Error handling and retries
#### Sandbox Executor
Sandboxed execution environment for running workflow blocks safely with resource isolation.
Sources: apps/sim/package.json:8-12
Deployment Options
Sim supports three primary self-hosted deployment methods.
Comparison Matrix
| Method | Docker Required | Setup Effort | Use Case |
|---|---|---|---|
| NPM Package | Yes | Minimal | Quick local testing |
| Docker Compose | Yes | Moderate | Production deployments |
| Manual Setup | No | Extensive | Custom infrastructure |
Sources: README.md:25-50
Option 1: NPM Package (Quick Start)
The fastest way to get started locally:
npx simstudio
This command pulls the latest Docker images and starts Sim at http://localhost:3000.
Options:
| Flag | Description | Default |
|---|---|---|
| -p, --port <port> | Port to run Sim on | 3000 |
| --no-pull | Skip pulling latest Docker images | false |
Sources: README.md:25-32
Option 2: Docker Compose
For production-ready deployments with persistent storage:
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
Sources: README.md:34-38
Option 3: Manual Setup
For custom infrastructure configurations. Requires manual installation of all dependencies.
Sources: README.md:40-45
System Requirements
Hardware Requirements
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 2 cores | 4+ cores |
| RAM | 4 GB | 8+ GB |
| Disk | 10 GB | 20+ GB |
Software Requirements
| Software | Version | Notes |
|---|---|---|
| Docker | Latest | Required for NPM and Docker Compose methods |
| Bun | Latest | Required for manual setup |
| Node.js | v20+ | Required for manual setup |
| PostgreSQL | 12+ | Must include pgvector extension |
Sources: README.md:40-45
Database Configuration
PostgreSQL with pgvector is required for vector storage capabilities:
docker run --name simstudio-db \
-e POSTGRES_PASSWORD=your_password \
-e POSTGRES_DB=simstudio \
-p 5432:5432 -d \
pgvector/pgvector:pg16
Sources: README.md:50-55
SDK Data Structures
WorkflowExecutionResult
@dataclass
class WorkflowExecutionResult:
success: bool
output: Optional[Any] = None
error: Optional[str] = None
logs: Optional[list] = None
metadata: Optional[Dict[str, Any]] = None
trace_spans: Optional[list] = None
total_duration: Optional[float] = None
Sources: packages/python-sdk/README.md:80-90
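To make the dataclass above runnable, it needs the standard dataclasses and typing imports. The sketch below adds them and shows one way a caller might branch on the result; summarize is a hypothetical helper, not part of the SDK.

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class WorkflowExecutionResult:
    success: bool
    output: Optional[Any] = None
    error: Optional[str] = None
    logs: Optional[list] = None
    metadata: Optional[Dict[str, Any]] = None
    trace_spans: Optional[list] = None
    total_duration: Optional[float] = None

def summarize(result: WorkflowExecutionResult) -> str:
    # Hypothetical helper: illustrates branching on success/error.
    if result.success:
        return f"ok in {result.total_duration or 0.0:.2f}s"
    return f"failed: {result.error or 'unknown error'}"
```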
WorkflowStatus
@dataclass
class WorkflowStatus:
is_deployed: bool
deployed_at: Optional[str] = None
needs_redeployment: bool = False
Sources: packages/python-sdk/README.md:100-105
RateLimitInfo
@dataclass
class RateLimitInfo:
limit: int
remaining: int
reset: int
retry_after: Optional[int] = None
Sources: packages/python-sdk/README.md:125-130
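A client can use these fields to decide how long to back off before retrying. The sketch below assumes reset is a Unix timestamp and retry_after is a number of seconds; the README excerpt does not state the units, so treat both as assumptions.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class RateLimitInfo:
    limit: int
    remaining: int
    reset: int
    retry_after: Optional[int] = None

def backoff_seconds(info: RateLimitInfo, now: float) -> float:
    """Seconds to wait before retrying (assumed field semantics)."""
    if info.remaining > 0:
        return 0.0  # budget left, no need to wait
    if info.retry_after is not None:
        return float(info.retry_after)  # server-specified delay
    return max(0.0, info.reset - now)  # wait until the window resets
```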
Development Workflow
Code Quality Tools
The project enforces code quality through Biome:
| Command | Purpose |
|---|---|
| bun run lint | Format and lint with auto-fix |
| bun run lint:check | Check linting without auto-fix |
| bun run format | Format code files |
| bun run format:check | Check formatting without changes |
| bun run type-check | Run TypeScript type checking |
Sources: apps/sim/package.json:15-22
Testing
Tests are run using Vitest:
| Command | Purpose |
|---|---|
| bun run test | Run tests once |
| bun run test:watch | Run tests in watch mode |
| bun run test:coverage | Generate coverage report |
Sources: apps/sim/package.json:14-17
Local Model Support
Sim supports self-hosted AI models through two providers:
Ollama
Integration with Ollama for running local LLMs including Llama 2, Mistral, and other open-source models.
vLLM
Integration with vLLM for high-performance inference serving.
Sources: README.md:45-48
Documentation Generation
The project includes automated documentation generation:
bun run generate-docs
This script preserves manual content markers within the codebase for custom documentation sections.
Sources: scripts/README.md:1-30
Community and Support
| Resource | Link |
|---|---|
| Documentation | https://docs.sim.ai |
| Discord Community | Discord |
| X (Twitter) | @simdotai |
| DeepWiki | DeepWiki |
Sources: README.md:1-15
License
The project is licensed under Apache-2.0.
Sources: packages/python-sdk/README.md:70 Sources: packages/ts-sdk/README.md:45
Sources: README.md:1-20
Technology Stack
Related topics: Project Introduction, Deployment Guide
Overview
The Sim platform is built on a modern, polyglot technology stack designed to support both frontend and backend development with a focus on developer productivity, type safety, and scalable deployment options. The system leverages TypeScript as the primary language for the core application, Python for the SDK ecosystem, and Docker for containerization and self-hosted deployments.
Core Runtime Environment
Bun
Bun serves as the primary package manager and runtime for the project. All build scripts, dependency installations, and development workflows are configured to use Bun workspaces for efficient monorepo management.
bun install
bun run build
bun run test
Sources: apps/sim/package.json:11-30
Node.js Requirements
The main application requires Node.js v20+ for runtime compatibility. The TypeScript SDK specifies Node.js 18+ as the minimum requirement.
| Component | Minimum Version | Recommended Version |
|---|---|---|
| Main App (apps/sim) | Node.js v20+ | Latest LTS |
| TypeScript SDK | Node.js 18+ | Node.js 20+ |
| Python SDK | Python 3.8+ | Python 3.11+ |
Sources: README.md:35-45, packages/ts-sdk/README.md:42
Frontend Architecture
Next.js Framework
The main Sim application is built on Next.js, providing server-side rendering, API routes, and static generation capabilities.
Build Configuration:
{
"build": "bun run build:sandbox-bundles && NODE_OPTIONS='--max-old-space-size=8192' next build",
"start": "next start"
}
Sources: apps/sim/package.json:15-16
Testing Framework
| Tool | Purpose | Command |
|---|---|---|
| Vitest | Unit and integration testing | bun run test |
| Vitest (watch mode) | Development testing | bun run test:watch |
| Vitest (coverage) | Coverage reports | bun run test:coverage |
Sources: apps/sim/package.json:19-21
Code Quality Tools
| Tool | Purpose | Commands |
|---|---|---|
| Biome | Linting and formatting | lint, lint:check, format, format:check |
| TypeScript Compiler | Type checking | type-check |
Biome is configured for both linting (with unsafe auto-fixes) and code formatting:
bun run lint # Apply lint fixes
bun run lint:check # Check only
bun run format # Apply formatting
bun run format:check  # Check only
Sources: apps/sim/package.json:22-25
Backend and Database
PostgreSQL with pgvector
The platform requires PostgreSQL 12+ with the pgvector extension for vector similarity search capabilities. This enables knowledge base and document embedding features.
Docker Setup:
docker run --name simstudio-db \
-e POSTGRES_PASSWORD=your_password \
-e POSTGRES_DB=simstudio \
-p 5432:5432 \
-d pgvector/pgvector:pg16
Sources: README.md:45-50
Drizzle ORM
Database operations are managed through Drizzle ORM, configured via drizzle.config.ts. This provides type-safe database queries and migrations.
import { defineConfig } from 'drizzle-kit'
Sources: packages/db/drizzle.config.ts
Document Processing
OCR Integration
The document processor integrates with multiple OCR providers for extracting content from PDFs and images:
| Provider | Configuration | Timeout |
|---|---|---|
| Mistral OCR API | API Key + Endpoint | 30 seconds |
| Azure Mistral OCR | API Key + Endpoint + Model | 30 seconds |
Sources: apps/sim/lib/knowledge/documents/document-processor.ts:1-80
The OCR system uses:
- Native fetch API for HTTP requests
- AbortController for timeout management
- Base64 encoding for file uploads
SDK Ecosystem
TypeScript SDK
The TypeScript SDK (packages/ts-sdk) provides programmatic access to Sim features:
| Requirement | Version |
|---|---|
| Node.js | 18+ |
| TypeScript | 5.0+ |
Development Commands:
bun run test # Run tests
bun run build # Compile to dist/
bun run dev # Development mode with auto-rebuild
Sources: packages/ts-sdk/README.md:1-45
Python SDK
The Python SDK (packages/python-sdk) offers Python integration:
| Requirement | Version |
|---|---|
| Python | 3.8+ |
| requests | >= 2.25.0 |
Code Quality Tools:
black simstudio/ # Code formatting
flake8 simstudio/ # Linting
mypy simstudio/ # Type checking
isort simstudio/ # Import sorting
Sources: packages/python-sdk/README.md:1-50
Security Infrastructure
Input Validation
The platform implements comprehensive input validation for security:
- Enum Validation: Validates values against allowed lists
- Hostname Validation: Prevents SSRF attacks by checking for private IPs, localhost, and reserved addresses
- Proxy URL Validation: Secure proxy configuration validation
Sources: apps/sim/lib/core/security/input-validation.ts:1-100
Deployment Options
Docker Containerization
Sim supports multiple deployment scenarios:
graph TD
A[Sim Deployment Options] --> B["Docker (NPM Package)"]
A --> C[Docker Compose]
A --> D[Manual Setup]
B --> B1[npx simstudio]
C --> C1[docker compose up]
D --> D1[Bun + PostgreSQL]
Sources: README.md:25-50
Local Model Support
The platform supports self-hosted AI models through:
| Runtime | Description |
|---|---|
| Ollama | Local model inference |
| vLLM | High-performance LLM serving |
Realtime Application
The apps/realtime package provides WebSocket-based communication features with its own independent package.json configuration.
Sources: apps/realtime/package.json
Load Testing Infrastructure
The project includes Artillery-based load testing for workflow performance validation:
| Script | Purpose |
|---|---|
| load:workflow:waves | Wave-based load testing |
| load:workflow:isolation | Workspace isolation testing |
Configuration Options:
| Environment Variable | Default | Description |
|---|---|---|
| WAVE_ONE_DURATION | 60 | Wave 1 duration in seconds |
| WAVE_ONE_RATE | 10 | Wave 1 request rate |
| WORKSPACE_A_WEIGHT | 8 | Workspace A load weight |
| WORKSPACE_B_WEIGHT | 1 | Workspace B load weight |
Sources: apps/sim/package.json:8-14
Webhook Integrations
The platform provides webhook providers for third-party integrations:
| Provider | Purpose |
|---|---|
| Gong | Meeting/call automation |
| Vercel | Deployment events |
| Typeform | Form responses |
| Webflow | CMS events |
| WhatsApp | Messaging events |
Each provider implements signature verification for security:
verifyAuth: createHmacVerifier({
configKey: 'secret',
headerName: 'Provider-Signature',
validateFn: validateProviderSignature,
providerLabel: 'ProviderName',
})
Sources: apps/sim/lib/webhooks/providers/gong.ts, apps/sim/lib/webhooks/providers/vercel.ts, apps/sim/lib/webhooks/providers/typeform.ts, apps/sim/lib/webhooks/providers/webflow.ts, apps/sim/lib/webhooks/providers/whatsapp.ts
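The HMAC verification step above can be illustrated with a generic check using Python's standard library. This is a sketch of the general technique, not the platform's actual verifier: real providers differ in header names, hash functions, and signature encodings.

```python
import hashlib
import hmac

def verify_hmac_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Generic HMAC-SHA256 webhook signature check (illustrative).

    Recomputes the signature over the raw request body and compares it
    to the value sent in the provider's signature header.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, avoiding timing side channels.
    return hmac.compare_digest(expected, signature_header)
```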
Architecture Diagram
graph TB
subgraph "Client Layer"
WebApp[Web Application<br/>Next.js]
TS_SDK[TypeScript SDK<br/>Node.js 18+]
end
subgraph "Runtime"
Bun[Bun Runtime<br/>Workspaces]
Node[Node.js v20+]
end
subgraph "Backend Services"
API[API Routes]
Webhooks[Webhook Providers]
Security[Input Validation]
end
subgraph "Data Layer"
Postgres[PostgreSQL + pgvector<br/>Drizzle ORM]
Knowledge[Document Processor<br/>OCR Integration]
end
subgraph "Deployment"
Docker[Docker Container]
Ollama[Ollama]
VLLM[vLLM]
end
WebApp --> API
TS_SDK --> API
API --> Postgres
API --> Knowledge
WebApp --> Webhooks
Security --> API
Docker --> Postgres
Ollama --> API
VLLM --> API
Summary Table
| Category | Technology | Version/Notes |
|---|---|---|
| Runtime | Bun | Workspaces for monorepo |
| Runtime | Node.js | v20+ for main app |
| Runtime | Python | 3.8+ for Python SDK |
| Framework | Next.js | Full-stack React framework |
| Database | PostgreSQL | 12+ with pgvector |
| ORM | Drizzle | Type-safe queries |
| Testing | Vitest | Unit and integration tests |
| Linting | Biome | Fast JS/TS linter |
| OCR | Mistral/Azure | Document processing |
| SDKs | TypeScript/Python | Multi-language support |
| Deployment | Docker | Self-hosted option |
| AI Runtime | Ollama/vLLM | Local model support |
Sources: apps/sim/package.json:11-30
Architecture Overview
Related topics: Workflow Executor Engine, Workflow Blocks System
Sim is an open-source platform for building AI agents and orchestrating agentic workflows. It connects over 1,000 integrations and LLMs to enable sophisticated automation scenarios. The platform is built with a modular architecture centered around blocks, workflows, triggers, and an execution engine.
High-Level Architecture
The Sim platform follows a layered architecture:
graph TD
subgraph "Presentation Layer"
UI[Next.js Application]
end
subgraph "API Layer"
API[API Routes]
Contracts[Contract Types]
end
subgraph "Workflow Engine"
WE[Workflow Engine]
Diff[Diff Engine]
Registry[Block Registry]
end
subgraph "Execution Layer"
Executor[Executor]
Sandbox[Sandbox Runner]
end
subgraph "Integration Layer"
Webhooks[Webhook Providers]
Triggers[Trigger System]
Tools[Tool System]
end
UI --> API
API --> Contracts
Contracts --> WE
WE --> Registry
WE --> Diff
Registry --> Executor
Executor --> Sandbox
Webhooks --> Triggers
Triggers --> WE
Core Components
Block Registry System
The Block Registry is the central component that manages all available blocks in the system. Blocks are the fundamental building units of workflows, representing discrete operations like data transformation, API calls, or AI interactions.
Key Files:
- apps/sim/blocks/registry.ts - Block registration and retrieval
- apps/sim/blocks/index.ts - Block exports and public API
Registry Functions:
| Function | Purpose |
|---|---|
| getBlock(type) | Retrieve a specific block by type identifier |
| getAllBlocks() | Get all registered blocks |
| getAllBlockTypes() | Get list of all block type identifiers |
| getBlockByToolName(name) | Find block by associated tool name |
| getBlocksByCategory(category) | Filter blocks by category |
| isValidBlockType(type) | Validate if a block type exists |
| registry | The underlying registry data structure |
Sources: apps/sim/blocks/index.ts
Registry Mocking in Tests:
In tests, the registry is mocked to return null or empty values, deferring block resolution to runtime:
vi.mock('@/blocks', () => ({
getBlock: () => null,
getAllBlocks: () => ({}),
getAllBlockTypes: () => [],
getBlockByToolName: () => null,
getBlocksByCategory: () => [],
isValidBlockType: () => false,
registry: {},
}))
Sources: apps/sim/lib/workflows/diff/diff-engine.test.ts:1-20
Trigger System
The Trigger System manages how workflows are initiated. Triggers can be manual, API-based, chat-based, or event-driven.
Trigger Types:
| Type | Identifier | Description |
|---|---|---|
| Manual | TRIGGER_TYPES.START | User-initiated workflow start |
| API | TRIGGER_TYPES.API | Programmatic workflow invocation |
| Chat | TRIGGER_TYPES.CHAT | Chat-triggered workflows |
| Starter | TRIGGER_TYPES.STARTER | Legacy starter block support |
Sources: apps/sim/lib/workflows/triggers/triggers.ts
Trigger Reference Alias Map:
The system maps reference aliases to concrete trigger block types:
export const TRIGGER_REFERENCE_ALIAS_MAP = {
start: TRIGGER_TYPES.START,
api: TRIGGER_TYPES.API,
chat: TRIGGER_TYPES.CHAT,
manual: TRIGGER_TYPES.START,
} as const
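The alias lookup can be mirrored in Python as a plain dictionary. The concrete trigger-type strings below are assumptions, since only the constant names (not their values) appear in the source.

```python
# Assumed values for the trigger type constants (illustrative only).
TRIGGER_TYPES = {"START": "start_trigger", "API": "api_trigger", "CHAT": "chat_trigger"}

TRIGGER_REFERENCE_ALIAS_MAP = {
    "start": TRIGGER_TYPES["START"],
    "api": TRIGGER_TYPES["API"],
    "chat": TRIGGER_TYPES["CHAT"],
    "manual": TRIGGER_TYPES["START"],  # "manual" is an alias for start
}

def resolve_trigger_alias(alias: str) -> str:
    """Map a reference alias to its concrete trigger block type."""
    try:
        return TRIGGER_REFERENCE_ALIAS_MAP[alias.lower()]
    except KeyError:
        raise ValueError(f"unknown trigger alias: {alias!r}")
```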
TriggerUtils Class:
The TriggerUtils class provides static methods for trigger identification:
export class TriggerUtils {
static isTriggerBlock(block: { type: string; triggerMode?: boolean }): boolean {
const blockConfig = getBlock(block.type)
return (
blockConfig?.category === 'triggers' ||
block.triggerMode === true ||
block.type === TRIGGER_TYPES.STARTER
)
}
static isTriggerType(block: { type: string }, triggerType: TriggerType): boolean {
return block.type === triggerType
}
}
Sources: apps/sim/lib/workflows/triggers/triggers.ts
Workflow Engine
The Workflow Engine orchestrates the execution of blocks within a workflow context.
Workflow Components:
| Component | File | Purpose |
|---|---|---|
| Diff Engine | lib/workflows/diff/diff-engine.test.ts | Computes differences between workflow versions |
| Block Outputs | lib/workflows/blocks/block-outputs | Manages output data flow between blocks |
| Visibility | lib/workflows/subblocks/visibility | Controls block visibility and canonical modes |
| Triggers | lib/workflows/triggers/triggers.ts | Workflow initiation logic |
Sources: apps/sim/lib/workflows/diff/diff-engine.test.ts
Workflow Registry Store:
The engine integrates with a workflow registry store for state management:
vi.mock('@/stores/workflows/registry/store', () => ({
useWorkflowRegistry: {
getState: () => ({
activeWorkflowId: null,
}),
},
}))
Execution Engine
The Execution Engine is responsible for running workflows and blocks in a sandboxed environment.
Execution Constants:
| Constant | Purpose |
|---|---|
| BLOCK_DIMENSIONS | Defines minimum block height and dimensions |
| HANDLE_POSITIONS | Manages connection handle placement |
| isAnnotationOnlyBlock() | Determines if a block is annotation-only |
vi.mock('@/executor/constants', () => ({
isAnnotationOnlyBlock: () => false,
BLOCK_DIMENSIONS: { MIN_HEIGHT: 100 },
HANDLE_POSITIONS: {},
}))
Sources: apps/sim/lib/workflows/diff/diff-engine.test.ts:1-20
API Contract System
The API Contract System provides type-safe API route definitions and type inference utilities.
Contract Type Generics:
| Type | Description |
|---|---|
| ContractParams<C> | Extracts URL parameters from contract |
| ContractQuery<C> | Extracts query parameters from contract |
| ContractBody<C> | Extracts request body from contract |
| ContractHeaders<C> | Extracts headers from contract |
export type ContractParams<C extends AnyApiRouteContract> = C extends ApiRouteContract<
infer TParams,
ApiSchema | undefined,
ApiSchema | undefined,
ApiSchema | undefined,
ResponseMode,
ApiSchema | undefined
>
? EmptySchemaOutput<TParams>
: undefined
Sources: apps/sim/lib/api/contracts/types.ts
Webhook Integration
Sim supports multiple webhook providers for event-driven workflow triggering.
Supported Providers:
| Provider | File | Key Features |
|---|---|---|
| Gong | lib/webhooks/providers/gong.ts | Automation rules, call data |
| Webflow | lib/webhooks/providers/webflow.ts | Collection filtering, CMS events |
| Typeform | lib/webhooks/providers/typeform.ts | Form responses, HMAC verification |
| Vercel | lib/webhooks/providers/vercel.ts | Deployment events |
Gong Provider Structure:
{
eventType: 'gong.automation_rule',
callId,
metaData,
parties: (callData?.parties as unknown[]) || [],
context: (callData?.context as unknown[]) || [],
trackers: (content?.trackers as unknown[]) || [],
topics: (content?.topics as unknown[]) || [],
highlights: (content?.highlights as unknown[]) || [],
}
Sources: apps/sim/lib/webhooks/providers/gong.ts
Webhook Event Filtering:
Providers implement event filtering logic:
shouldSkipEvent({ webhook, body, requestId, providerConfig }: EventFilterContext) {
const configuredCollectionId = providerConfig.collectionId as string | undefined
if (configuredCollectionId) {
const obj = body as Record<string, unknown>
const payload = obj.payload as Record<string, unknown> | undefined
const payloadCollectionId = (payload?.collectionId ?? obj.collectionId) as string | undefined
if (payloadCollectionId && payloadCollectionId !== configuredCollectionId) {
return true
}
}
return false
}
Sources: apps/sim/lib/webhooks/providers/webflow.ts
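The Webflow filter above translates directly into Python. This is a sketch of the same decision logic: skip the event when a collection filter is configured and the event's collectionId does not match.

```python
def should_skip_event(body: dict, provider_config: dict) -> bool:
    """Skip events whose collectionId doesn't match the configured filter.

    Mirrors the TypeScript filter: the id may live on body.payload or on
    the body itself; with no configured filter or no event id, don't skip.
    """
    configured = provider_config.get("collectionId")
    if configured:
        payload = body.get("payload") or {}
        event_collection = payload.get("collectionId") or body.get("collectionId")
        if event_collection and event_collection != configured:
            return True
    return False
```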
Pending Verification System:
Webhook verification is handled through a pending verification mechanism:
| Provider | Verification Method |
|---|---|
| Ashby | Always valid |
| Grain | GET/HEAD or POST without body |
| Generic | GET/HEAD or POST without body |
| Salesforce | GET/HEAD or POST without body |
const pendingWebhookVerificationProbeMatchers: Record<
string,
PendingWebhookVerificationProbeMatcher
> = {
ashby: ({ method, body }) => method === 'POST' && body?.action === 'ping',
grain: ({ method, body }) =>
method === 'GET' ||
method === 'HEAD' ||
(method === 'POST' && (!body || Object.keys(body).length === 0 || !body.type)),
generic: ({ method, body }) =>
method === 'GET' ||
method === 'HEAD' ||
(method === 'POST' && (!body || Object.keys(body).length === 0)),
salesforce: ({ method, body }) =>
method === 'GET' ||
method === 'HEAD' ||
(method === 'POST' && (!body || Object.keys(body).length === 0)),
}
Sources: apps/sim/lib/webhooks/pending-verification.ts
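The matcher table maps each provider to a predicate over the request method and body. A Python sketch of the same dispatch (function names are illustrative, not the platform's API):

```python
from typing import Optional

def _empty(body: Optional[dict]) -> bool:
    return not body

# Each predicate answers: is this request a verification probe rather
# than a real event? (Subset of the providers listed above.)
PROBE_MATCHERS = {
    "ashby": lambda method, body: method == "POST" and (body or {}).get("action") == "ping",
    "grain": lambda method, body: method in ("GET", "HEAD")
    or (method == "POST" and (_empty(body) or not body.get("type"))),
    "generic": lambda method, body: method in ("GET", "HEAD")
    or (method == "POST" and _empty(body)),
}

def is_verification_probe(provider: str, method: str, body: Optional[dict]) -> bool:
    matcher = PROBE_MATCHERS.get(provider, PROBE_MATCHERS["generic"])
    return matcher(method, body)
```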
Data Flow
sequenceDiagram
participant User
participant API
participant Registry
participant Workflow
participant Executor
participant Sandbox
User->>API: Trigger Workflow
API->>Registry: Validate Block Types
Registry-->>API: Block Configs
API->>Workflow: Initialize Workflow
Workflow->>Registry: Get Block Implementations
Registry-->>Workflow: Blocks
Workflow->>Executor: Execute Blocks
Executor->>Sandbox: Run in Sandbox
Sandbox-->>Executor: Results
Executor-->>Workflow: Block Outputs
Workflow-->>API: Workflow Complete
API-->>User: Response
Block Execution Flow
graph TD
Start[Workflow Start] --> Trigger{Trigger Type}
Trigger -->|Manual| Manual[Manual Trigger]
Trigger -->|API| API[API Trigger]
Trigger -->|Chat| Chat[Chat Trigger]
Manual --> Validate{Validate Block Types}
API --> Validate
Chat --> Validate
Validate -->|Valid| GetBlocks[Get Blocks from Registry]
Validate -->|Invalid| Error[Error Handling]
GetBlocks --> Execute[Execute Block]
Execute --> Sandbox{Run in Sandbox?}
Sandbox -->|Yes| Sandboxed[Sandbox Execution]
Sandbox -->|No| Direct[Direct Execution]
Sandboxed --> Output[Block Output]
Direct --> Output
Output --> NextBlock{Next Block?}
NextBlock -->|Yes| Execute
NextBlock -->|No| Complete[Workflow Complete]
Type Safety
Sim leverages TypeScript's type system extensively for compile-time safety:
- API Contracts: Type-safe route definitions with generic type parameters
- Block Registry: Type-checked block retrieval and validation
- Trigger Classification: Type-safe trigger type checking
- Webhook Payloads: Typed webhook event data structures
Documentation Generation
The platform includes an automated documentation generator:
graph LR
A[Block Files] --> B[Scan Directory]
B --> C[Extract Metadata]
C --> D[Generate Markdown]
D --> E[Update meta.json]
E --> F[Commit to Repo]
The generator is integrated into CI/CD and preserves manual content during regeneration.
Sources: scripts/README.md
Sources: [apps/sim/blocks/index.ts](https://github.com/simstudioai/sim/blob/main/apps/sim/blocks/index.ts)
Workflow Executor Engine
Related topics: Architecture Overview, Workflow Blocks System
The Workflow Executor Engine is the core runtime system responsible for executing workflows built in the Sim platform. It transforms serialized workflow definitions into executable execution plans, manages block-level execution with proper dependency resolution, and orchestrates complex control flow patterns including parallel execution and looping constructs.
Architecture Overview
The executor engine follows a layered architecture that separates concerns between DAG construction, execution planning, and runtime orchestration.
graph TD
A[Workflow Definition] --> B[DAG Builder]
B --> C[Execution Plan]
C --> D[Execution Engine]
D --> E[Block Executor]
E --> F[Parallel Orchestrator]
E --> G[Loop Orchestrator]
D --> H[Trigger System]
H --> I[Manual Triggers]
H --> J[API Triggers]
H --> K[Scheduled Triggers]
Core Components
| Component | File | Responsibility |
|---|---|---|
| DAG Builder | executor/dag/builder.ts | Converts workflow blocks into a directed acyclic graph |
| Execution Engine | executor/execution/engine.ts | Coordinates overall execution flow and state management |
| Block Executor | executor/execution/executor.ts | Executes individual blocks and manages block states |
| Parallel Orchestrator | executor/orchestrators/parallel.ts | Manages concurrent block execution |
| Loop Orchestrator | executor/orchestrators/loop.ts | Handles iterative block execution |
Executor Context
The executor maintains a comprehensive context object that tracks the state of the entire workflow execution.
interface ExecutorContext {
workflow: SerializedWorkflow
blocks: SerializedBlock[]
connections: SerializedConnection[]
blockStates: Map<string, ExecutorBlockState>
executedBlocks: Set<string>
abortSignal?: AbortSignal
workspaceId: string
executionId: string
}
Block State Management
Each block maintains its execution state through the ExecutorBlockState interface:
interface ExecutorBlockState {
output: Record<string, any>
executed: boolean
executionTime: number
}
Sources: packages/testing/src/factories/executor-context.factory.ts:1-80
The testing factory provides utilities for creating executor contexts with pre-configured blocks:
export function createExecutorContextWithBlocks(
blockOutputs: Record<string, Record<string, any>>,
options?: ExecutorContextFactoryOptions
): ExecutorContext
Sources: packages/testing/src/factories/executor-context.factory.ts:44-70
DAG Builder
The DAG (Directed Acyclic Graph) builder transforms the linear block definitions into a dependency graph that the execution engine can traverse.
Responsibilities
- Parse workflow block definitions and connection specifications
- Build adjacency lists representing block dependencies
- Validate graph structure to ensure no cycles
- Resolve input/output mappings between connected blocks
- Generate execution order using topological sorting
Key Functions
| Function | Purpose |
|---|---|
buildDAG(blocks, connections) | Constructs the dependency graph |
topologicalSort() | Determines safe execution order |
getDependencies(blockId) | Retrieves all blocks that must execute first |
getDependents(blockId) | Finds blocks that depend on this block |
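The buildDAG and topologicalSort steps can be sketched with Kahn's algorithm, which also detects cycles as a by-product. This is a minimal illustration, not the engine's actual implementation: blocks are plain ids and connections are (source, target) pairs.

```python
from collections import deque

def build_dag(blocks, connections):
    """Adjacency list + in-degree counts for a list of block ids."""
    adjacency = {b: [] for b in blocks}
    in_degree = {b: 0 for b in blocks}
    for source, target in connections:
        adjacency[source].append(target)
        in_degree[target] += 1
    return adjacency, in_degree

def topological_sort(blocks, connections):
    """Kahn's algorithm; raises on cycles, mirroring the validation step."""
    adjacency, in_degree = build_dag(blocks, connections)
    ready = deque(b for b in blocks if in_degree[b] == 0)
    order = []
    while ready:
        block = ready.popleft()
        order.append(block)
        for nxt in adjacency[block]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(blocks):
        raise ValueError("workflow graph contains a cycle")
    return order
```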
Execution Engine
The execution engine is the central coordinator that manages the lifecycle of workflow execution from start to completion.
Execution Flow
sequenceDiagram
participant Client
participant Engine
participant Executor
participant Orchestrator
participant Block
Client->>Engine: execute(workflow, context)
Engine->>Executor: prepare(workflow)
Executor->>Engine: DAG Ready
Engine->>Engine: determineStartBlocks()
Engine->>Orchestrator: executeNextBatch()
Orchestrator->>Block: execute(block)
Block-->>Orchestrator: result
Orchestrator->>Engine: blockComplete()
Engine->>Engine: updateContext()
Engine->>Orchestrator: executeNextBatch()
Orchestrator-->>Engine: batchComplete
Engine-->>Client: executionResult
Trigger Classification
The engine classifies workflow start conditions to determine execution entry points:
class TriggerClassifier {
static isManualTrigger(block: { type: string; subBlocks?: any }): boolean
static isApiTrigger(block: { type: string; subBlocks?: any }, isChildWorkflow?: boolean): boolean
}
Sources: apps/sim/lib/workflows/triggers/triggers.ts:1-60
Supported trigger types:
| Trigger Type | Description | Entry Mode |
|---|---|---|
| INPUT | Form or manual input trigger | Manual |
| MANUAL | Explicit manual execution | Manual |
| START | New unified start block | Manual/API |
| API | API endpoint trigger | API |
| STARTER | Legacy starter block | Manual/API based on startWorkflow value |
Sources: apps/sim/lib/workflows/triggers/triggers.ts:1-75
Block Executor
The block executor handles the actual execution of individual workflow blocks, managing their lifecycle from initialization through completion.
Execution Pipeline
- Block Identification - Resolve block type and configuration
- Input Resolution - Collect outputs from dependent blocks
- Sandbox Preparation - Set up isolated execution environment
- Execution - Run the block's logic
- Output Capture - Collect and store block results
- State Update - Update executor context with results
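The six stages above can be sketched as a single async function. This is an illustrative sketch only; `PipelineBlock`, `BlockResult`, and `executeBlock` are stand-in names, not the actual executor API.

```typescript
// Hypothetical sketch of the block execution pipeline; BlockResult and
// PipelineBlock are illustrative stand-ins, not Sim's real types.
type BlockResult = { blockId: string; output: unknown }

interface PipelineBlock {
  id: string
  run: (inputs: Record<string, unknown>) => Promise<unknown>
}

async function executeBlock(
  block: PipelineBlock,
  dependencyOutputs: Record<string, unknown>,
  context: Map<string, BlockResult>
): Promise<BlockResult> {
  // 1-2. Block identification and input resolution: collect dependency outputs
  const inputs = dependencyOutputs
  // 3-4. Sandbox preparation and execution (sandboxing elided in this sketch)
  const output = await block.run(inputs)
  // 5. Output capture
  const result: BlockResult = { blockId: block.id, output }
  // 6. State update: record the result in the executor context
  context.set(block.id, result)
  return result
}
```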
Block States
stateDiagram-v2
[*] --> Pending
Pending --> Running: executionStart
Running --> Completed: success
Running --> Failed: error
Running --> Cancelled: abortSignal
Completed --> [*]
Failed --> [*]
Cancelled --> [*]
Async Tool Execution
For client-executable tools (running in the browser), the executor uses an async confirmation pattern:
async function reportCompletion(
toolCallId: string,
status: AsyncConfirmationStatus,
message?: string,
data?: AsyncCompletionData
): Promise<void>
Sources: apps/sim/lib/copilot/tools/client/run-tool-execution.ts:1-50
The executor reports completion via the /api/copilot/confirm endpoint, which persists the durable async-tool row and wakes server-side waiters.
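A minimal sketch of this reporting step, with the HTTP call abstracted behind a poster function so the flow is testable; the payload shape is an assumption inferred from the `reportCompletion` signature above, not the actual wire format.

```typescript
// Sketch only: the real implementation lives in
// apps/sim/lib/copilot/tools/client/run-tool-execution.ts. The payload
// fields here are assumptions inferred from the signature above.
type Poster = (url: string, body: unknown) => Promise<void>

async function reportToolCompletion(
  post: Poster,
  toolCallId: string,
  status: 'success' | 'error' | 'cancelled',
  message?: string
): Promise<void> {
  // The confirm endpoint persists the durable async-tool row and
  // wakes server-side waiters.
  await post('/api/copilot/confirm', { toolCallId, status, message })
}
```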
Parallel Orchestrator
The parallel orchestrator manages concurrent execution of independent blocks, maximizing throughput while respecting dependency constraints.
Concurrency Model
graph LR
A[Block A] --> C[Block C]
B[Block B] --> C
A --> D[Block D]
B --> D
C --> E[Block E]
D --> E
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
maxConcurrency | number | 10 | Maximum parallel block executions |
timeout | number | 300000 | Per-block execution timeout (ms) |
failFast | boolean | true | Stop on first failure |
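The maxConcurrency option can be illustrated with a simple worker-pool runner. This is a generic sketch of the concurrency model, not the orchestrator's actual implementation.

```typescript
// Illustrative bounded-concurrency runner: at most `maxConcurrency` tasks
// are in flight at once, and results are returned in task order.
async function runWithConcurrency<T>(
  tasks: (() => Promise<T>)[],
  maxConcurrency = 10
): Promise<T[]> {
  const results: T[] = new Array(tasks.length)
  let next = 0
  async function worker(): Promise<void> {
    // Each worker pulls the next unclaimed task index until none remain.
    while (next < tasks.length) {
      const i = next++
      results[i] = await tasks[i]()
    }
  }
  const workers = Array.from(
    { length: Math.min(maxConcurrency, tasks.length) },
    () => worker()
  )
  await Promise.all(workers)
  return results
}
```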
Loop Orchestrator
The loop orchestrator handles iterative execution patterns, supporting standard loops and parallel-for constructs.
Loop Types
| Loop Type | Description |
|---|---|
for | Standard iteration over items |
while | Conditional iteration |
parallel-for | Concurrent iteration with result aggregation |
Loop Configuration
interface LoopConfig {
loopType: 'for' | 'while' | 'parallel-for'
iterations?: number
items?: any[]
condition?: string
maxConcurrency?: number
}
Sources: apps/realtime/src/database/operations.ts:1-50
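For illustration, here is one plausible configuration per loop type; the interface is reproduced from above so the example is self-contained, and the specific values are invented.

```typescript
// LoopConfig reproduced from the section above for a self-contained example.
interface LoopConfig {
  loopType: 'for' | 'while' | 'parallel-for'
  iterations?: number
  items?: any[]
  condition?: string
  maxConcurrency?: number
}

// Standard iteration: run the loop body a fixed number of times.
const forLoop: LoopConfig = { loopType: 'for', iterations: 10 }

// Conditional iteration: repeat while the condition expression holds.
const whileLoop: LoopConfig = { loopType: 'while', condition: 'count < 5' }

// Concurrent iteration over items with a concurrency cap.
const parallelFor: LoopConfig = {
  loopType: 'parallel-for',
  items: ['a', 'b', 'c'],
  maxConcurrency: 2,
}
```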
Loop Block Structure
interface LoopBlock {
id: string
type: 'loop'
config: LoopConfig
nodes: SerializedBlock[] // Blocks inside the loop
}
Default loop configuration:
const DEFAULT_LOOP_ITERATIONS = 10
Sources: apps/realtime/src/database/operations.ts:1-50
Execution Context Factory
The testing framework provides factory functions for creating executor contexts with predefined states:
Core Factory Functions
| Function | Purpose |
|---|---|
createExecutorContext() | Creates a base executor context |
createExecutorContextWithBlocks() | Creates context with pre-executed blocks |
addBlockState() | Adds block state to existing context (chainable) |
createMinimalWorkflow() | Creates a minimal workflow for testing |
Usage Example
const ctx = createExecutorContextWithBlocks({
'source-block': { value: 10, text: 'hello' },
'other-block': { result: true }
})
Sources: packages/testing/src/factories/executor-context.factory.ts:44-70
Error Handling & Resilience
Cancellation Guards
The executor implements SQL-level guards to prevent race conditions during state updates:
const cancellationGuard = bypassStaleWorker ? undefined : { groupId, executionId }
Sources: apps/sim/lib/table/cell-write.ts:1-40
Abort Signal Support
All executor operations respect AbortSignal for graceful cancellation:
interface ExecutorContext {
abortSignal?: AbortSignal
}
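A sketch of how an orchestrator loop might honor the optional abortSignal, checking it between blocks so cancellation is graceful rather than abrupt; this is illustrative, not the actual executor code.

```typescript
// Illustrative batch runner that stops between blocks once the signal fires.
async function runBatch(
  blocks: (() => Promise<void>)[],
  ctx: { abortSignal?: AbortSignal }
): Promise<number> {
  let executed = 0
  for (const block of blocks) {
    // Graceful cancellation: stop before starting the next block.
    if (ctx.abortSignal?.aborted) break
    await block()
    executed++
  }
  return executed
}
```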
Skip Conditions
The executor skips writes under specific conditions to maintain consistency:
| Condition | Action |
|---|---|
| Same execution already running | Skip queued stamp |
| Cancelled state with newer execution | Skip group write |
| SQL guard conflict | Skip with logging |
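The first two rows of the table can be read as a guard predicate. The shape below is a loose sketch under assumed field names, not the actual cell-write logic in apps/sim/lib/table/cell-write.ts.

```typescript
// Assumed shape for a write attempt; field names are illustrative.
interface WriteAttempt {
  executionId: string
  state: 'running' | 'cancelled'
}

function shouldSkipWrite(
  current: WriteAttempt | undefined,
  incoming: WriteAttempt
): boolean {
  if (!current) return false
  // Same execution already running: skip the queued stamp.
  if (current.executionId === incoming.executionId && current.state === 'running') {
    return true
  }
  // Cancelled state while a newer execution is in flight: skip the group write.
  if (incoming.state === 'cancelled' && current.executionId !== incoming.executionId) {
    return true
  }
  return false
}
```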
Configuration Constants
| Constant | Value | Description |
|---|---|---|
BLOCK_DIMENSIONS.MIN_HEIGHT | 100 | Minimum block visual height |
DEFAULT_LOOP_ITERATIONS | 10 | Default loop iteration count |
DEFAULT_TIMEOUT | 300000 | Default block execution timeout |
Sources: apps/sim/executor/constants
Extension Points
Custom Orchestrators
The orchestrator system is designed for extensibility. New orchestrators can be registered by implementing the Orchestrator interface:
interface Orchestrator {
execute(blocks: SerializedBlock[], context: ExecutorContext): Promise<void>
cancel(): void
}
Block Output Handlers
Block output handling can be customized through the getEffectiveBlockOutputs extension point.
Related Documentation
- Workflow Triggers - Trigger system integration
- DAG Builder - Graph construction details
- Execution API - Runtime API reference
- Testing Utilities - Test factory documentation
Sources: packages/testing/src/factories/executor-context.factory.ts:1-80
Workflow Blocks System
Related topics: Integrations and Connectors, Workflow Executor Engine
Overview
The Workflow Blocks System is the foundational architecture for building and executing automation workflows in the Sim platform. Blocks are the atomic units of execution that represent discrete operations, triggers, or control flow structures within a workflow. Each block encapsulates its own configuration, state, inputs, and outputs, allowing complex business logic to be constructed through visual composition or programmatically.
The system provides a declarative model where workflows are composed of interconnected blocks, with edges defining the data flow and execution order between them. This architecture enables both visual workflow design in the Sim editor and programmatic workflow manipulation through APIs.
Sources: apps/sim/lib/copilot/tools/server/workflow/edit-workflow/builders.ts:1-50
Block Architecture
Core Components
A block in the system consists of several key components:
| Component | Description |
|---|---|
id | Unique identifier for the block within a workflow |
type | The block type identifier (e.g., 'agent', 'trigger', 'loop') |
name | Display name shown in the UI |
position | Coordinates for visual placement (x, y) |
enabled | Boolean flag controlling whether the block executes |
subBlocks | Nested configuration objects with mode-based visibility |
outputs | Execution results produced by the block |
data | Arbitrary data associated with the block |
metadata | Additional metadata including block type references |
Sources: packages/workflow-persistence/src/load.ts:20-45
Block State Model
The block state represents the complete runtime and configuration state of a block:
interface BlockState {
id: string
type: string
name: string
position: { x: number; y: number }
enabled: boolean
horizontalHandles: boolean
advancedMode: boolean
triggerMode: boolean
height: number
subBlocks: Record<string, SubBlockState>
outputs: Record<string, any>
data: Record<string, any>
locked: boolean
}
Sources: packages/workflow-persistence/src/load.ts:18-35
Block Types
Trigger Blocks
Trigger blocks initiate workflow execution and define how workflows can be started. The system supports multiple trigger types:
| Trigger Type | Constant | Description |
|---|---|---|
| Start | TRIGGER_TYPES.START | Primary entry point for workflows |
| API | TRIGGER_TYPES.API | HTTP API triggered execution |
| Chat | TRIGGER_TYPES.CHAT | Conversational trigger |
| Manual | TRIGGER_TYPES.MANUAL | Manual invocation |
| Input | TRIGGER_TYPES.INPUT | Input parameter trigger |
| Webhook | TRIGGER_TYPES.WEBHOOK | Webhook-based triggers |
| Schedule | TRIGGER_TYPES.SCHEDULE | Time-based triggers |
| Generic Webhook | TRIGGER_TYPES.GENERIC_WEBHOOK | Universal webhook receiver |
Sources: apps/sim/lib/workflows/triggers/triggers.ts:1-30
Control Flow Blocks
Control flow blocks manage execution logic and flow:
| Block Type | Purpose |
|---|---|
loop | Iteration control (for loops) |
parallel | Parallel execution branches |
The loop block stores its configuration in a separate workflowSubflows table with structure:
{
id: string
workflowId: string
type: 'loop'
config: {
loopType: 'for'
iterations: number
nodes: string[]
}
}
Sources: apps/realtime/src/database/operations.ts:1-40
SubBlock Modes
SubBlocks support different visibility modes that control their appearance in the UI:
| Mode | Behavior |
|---|---|
basic | Shown in basic mode, hidden in advanced mode |
advanced | Shown in advanced mode, hidden in basic mode |
trigger | Visible only when trigger mode is enabled |
trigger-advanced | Visible in trigger mode with advanced options |
The visibility logic is implemented by helper functions such as isTriggerModeSubBlock:
export function isTriggerModeSubBlock(subBlock: Pick<SubBlockConfig, 'mode'>): boolean {
return subBlock.mode === 'trigger' || subBlock.mode === 'trigger-advanced'
}
export function isTriggerConfigSubBlock(subBlock: Pick<SubBlockConfig, 'type'>): boolean {
return String(subBlock.type) === 'trigger-config'
}
Sources: apps/sim/lib/workflows/subblocks/visibility.ts:1-35
Trigger System
Trigger Classification
The trigger system classifies blocks based on their execution context:
graph TD
A[Block Type] --> B{is Trigger Block?}
B -->|Yes| C[Explicit Trigger]
B -->|No| D{has triggerMode?}
D -->|Yes| E[Tool with Trigger]
D -->|No| F[Regular Block]
The TriggerUtils class provides static methods for trigger identification:
export class TriggerUtils {
static isTriggerBlock(block: { type: string; triggerMode?: boolean }): boolean {
const blockConfig = getBlock(block.type)
return (
blockConfig?.category === 'triggers' ||
block.triggerMode === true ||
block.type === TRIGGER_TYPES.STARTER
)
}
static isTriggerType(block: { type: string }, triggerType: TriggerType): boolean {
return block.type === triggerType
}
}
Sources: apps/sim/lib/workflows/triggers/triggers.ts:80-100
Start Block Resolution
The system uses a priority-based approach to resolve start candidates for workflow execution:
graph TD
A[Blocks Collection] --> B[Filter Disabled Blocks]
B --> C[Classify Start Block Path]
C --> D{Is Child Workflow?}
D -->|Yes| E[Apply CHILD_PRIORITIES]
D -->|No| F[Apply EXECUTION_PRIORITIES]
E --> G[Return Sorted Candidates]
F --> G
The resolveStartCandidates function implements this logic:
export function resolveStartCandidates<T extends MinimalBlock>(
blocks: Record<string, T> | T[],
options: ResolveStartOptions
): StartBlockCandidate<T>[] {
const entries = toEntries(blocks)
if (entries.length === 0) return []
const priorities = options.isChildWorkflow
? CHILD_PRIORITIES
: EXECUTION_PRIORITIES[options.execution]
// ... filtering and candidate creation logic
}
Sources: apps/sim/lib/workflows/triggers/triggers.ts:150-180
Trigger Reference Aliases
The system maps reference aliases to concrete trigger types:
export const TRIGGER_REFERENCE_ALIAS_MAP = {
start: TRIGGER_TYPES.START,
api: TRIGGER_TYPES.API,
chat: TRIGGER_TYPES.CHAT,
manual: TRIGGER_TYPES.START,
} as const
These aliases are used in inline references like <api.*>, <chat.*>, enabling flexible trigger referencing within workflows.
Sources: apps/sim/lib/workflows/triggers/triggers.ts:60-68
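For illustration, an inline reference such as `<api.result>` can be resolved through the alias map like this. The TRIGGER_TYPES values below are placeholders, not the real constants, and resolveReference is a hypothetical helper.

```typescript
// Placeholder trigger-type constants; the real values live in
// apps/sim/lib/workflows/triggers/triggers.ts.
const TRIGGER_TYPES = {
  START: 'start_trigger',
  API: 'api_trigger',
  CHAT: 'chat_trigger',
} as const

const TRIGGER_REFERENCE_ALIAS_MAP = {
  start: TRIGGER_TYPES.START,
  api: TRIGGER_TYPES.API,
  chat: TRIGGER_TYPES.CHAT,
  manual: TRIGGER_TYPES.START,
} as const

// Hypothetical resolver: `<api.result>` -> alias "api" -> concrete type.
function resolveReference(ref: string): string | undefined {
  const match = /^<(\w+)\./.exec(ref)
  if (!match) return undefined
  const alias = match[1] as keyof typeof TRIGGER_REFERENCE_ALIAS_MAP
  return TRIGGER_REFERENCE_ALIAS_MAP[alias]
}
```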
Block Configuration
SubBlock Structure
SubBlocks allow nested configuration within blocks:
export function applyTriggerConfigToBlockSubblocks(
block: any,
triggerConfig: Record<string, any>
) {
if (!block?.subBlocks || !triggerConfig) return
Object.entries(triggerConfig).forEach(([configKey, configValue]) => {
const existingSubblock = block.subBlocks[configKey]
if (existingSubblock) {
// Compare values to avoid unnecessary updates
const valuesEqual = /* comparison logic */
if (!valuesEqual) {
block.subBlocks[configKey] = {
...existingSubblock,
value: configValue,
}
}
} else {
// Create new subblock
block.subBlocks[configKey] = { /* new subblock */ }
}
})
}
Sources: apps/sim/lib/copilot/tools/server/workflow/edit-workflow/builders.ts:60-95
Trigger Mode Visibility
The system determines which blocks appear in different toolbar sections:
graph TD
A[All Blocks] --> B{Is Hidden?}
B -->|Yes| C[Exclude]
B -->|No| D{Category === 'triggers'?}
D -->|Yes| E[Include in Triggers Tab]
D -->|No| F{Has Trigger Capability?}
F -->|Yes| G[Include in Both Tabs]
F -->|No| H[Include in Blocks Tab]
Functions getTriggersForSidebar() and getBlocksForSidebar() implement this filtering logic, excluding blocks with hideFromToolbar: true and treating blocks with trigger capability differently from those with explicit trigger category.
Sources: apps/sim/lib/workflows/triggers/trigger-utils.ts:1-40
Block Reference Tags
Block tags control how blocks are referenced in the system:
export function getBlockTags(block: BlockConfig): string[] {
const normalizedBlockName = /* normalization logic */
let blockTags = allTags
const shouldShowRootTag =
block.type === TRIGGER_TYPES.GENERIC_WEBHOOK ||
block.type === 'start_trigger'
if (!shouldShowRootTag) {
blockTags = blockTags.filter((tag) => tag !== normalizedBlockName)
}
return blockTags
}
Sources: apps/sim/lib/workflows/blocks/block-reference-tags.ts:1-30
File Input Processing
Blocks can handle file inputs through a specialized processing system:
export async function processInputFileFields(
input: unknown,
blocks: SerializedBlock[],
executionContext: { workspaceId: string; workflowId: string; executionId: string },
requestId: string,
userId?: string
): Promise<unknown> {
// Find start block to extract input format
const startBlock = blocks.find((block) => {
const blockType = block.metadata?.id
return (
blockType === TRIGGER_TYPES.START ||
blockType === TRIGGER_TYPES.API ||
blockType === TRIGGER_TYPES.INPUT ||
blockType === TRIGGER_TYPES.GENERIC_WEBHOOK ||
blockType === TRIGGER_TYPES.STARTER
)
})
// Process file fields from input format
const fileFields = inputFormat.filter((field) => field.type === 'file[]')
// ... file processing logic
}
Sources: apps/sim/lib/execution/files.ts:1-60
Persistence Layer
Database Schema
Block data is persisted using a hybrid approach:
| Table | Purpose |
|---|---|
workflowBlocks | Core block configuration and state |
workflowSubflows | Loop and parallel block specific configs |
workflowEdges | Connections between blocks |
The upsert operation for blocks:
await tx
.insert(workflowBlocks)
.values(blockValues)
.onConflictDoUpdate({
target: workflowBlocks.id,
set: {
type: sql`excluded.type`,
name: sql`excluded.name`,
positionX: sql`excluded.position_x`,
positionY: sql`excluded.position_y`,
enabled: sql`excluded.enabled`,
subBlocks: sql`excluded.sub_blocks`,
outputs: sql`excluded.outputs`,
data: sql`excluded.data`,
updatedAt: sql`now()`,
},
})
Sources: apps/realtime/src/database/operations.ts:1-50
Loading and Assembly
When loading a workflow, blocks are assembled from database records:
blocks.forEach((block) => {
const blockData = (block.data ?? {}) as BlockState['data']
const assembled: BlockState = {
id: block.id,
type: block.type,
name: block.name,
position: {
x: Number(block.positionX),
y: Number(block.positionY),
},
enabled: block.enabled,
subBlocks: (block.subBlocks as BlockState['subBlocks']) || {},
outputs: (block.outputs as BlockState['outputs']) || {},
data: blockData,
locked: block.locked,
}
blocksMap[block.id] = assembled
})
Subflows (loops and parallels) are loaded separately:
subflows.forEach((subflow) => {
const config = (subflow.config ?? {}) as Partial<Loop & Parallel>
if (subflow.type === SUBFLOW_TYPES.LOOP) {
loops[subflow.id] = config as Loop
} else if (subflow.type === SUBFLOW_TYPES.PARALLEL) {
parallels[subflow.id] = config as Parallel
}
})
Sources: packages/workflow-persistence/src/load.ts:40-80
Testing Infrastructure
Block Assertions
The testing package provides assertion utilities for workflow validation:
export function expectBlockEnabled(
blocks: Record<string, any>,
blockId: string
): void {
const block = blocks[blockId]
expect(block, `Block "${blockId}" should exist`).toBeDefined()
expect(block.enabled, `Block "${blockId}" should be enabled`).toBe(true)
}
export function expectBlockPosition(
blocks: Record<string, any>,
blockId: string,
expectedPosition: { x: number; y: number }
): void {
const block = blocks[blockId]
expect(block, `Block "${blockId}" should exist`).toBeDefined()
expect(block.position.x, `Block "${blockId}" x position`)
.toBeCloseTo(expectedPosition.x, 0)
expect(block.position.y, `Block "${blockId}" y position`)
.toBeCloseTo(expectedPosition.y, 0)
}
Sources: packages/testing/src/assertions/workflow.assertions.ts:1-60
Workflow Factories
Test utilities create standard workflow structures:
export function createLinearWorkflow(
blockCount: number,
spacing = 200
): any {
const blocks: Record<string, any> = {}
const blockIds: string[] = []
for (let i = 0; i < blockCount; i++) {
const id = `block-${i}`
blockIds.push(id)
if (i === 0) {
blocks[id] = createStarterBlock({ id, position: { x: i * spacing, y: 0 } })
} else {
blocks[id] = createFunctionBlock({ id, name: `Step ${i}`, position: { x: i * spacing, y: 0 } })
}
}
return createWorkflowState({ blocks, edges: createLinearEdges(blockIds) })
}
Sources: packages/testing/src/factories/workflow.factory.ts:1-40
Summary
The Workflow Blocks System provides a comprehensive foundation for building automation workflows through:
- Declarative Block Model: Each block encapsulates its own configuration, state, and outputs
- Flexible Trigger System: Multiple trigger types support various execution entry points
- Mode-Based Visibility: SubBlocks can be shown/hidden based on basic, advanced, or trigger modes
- Persistent State: Blocks are persisted to the database with proper upsert semantics
- Type-Safe Testing: Comprehensive assertion utilities enable robust workflow testing
The architecture separates concerns between block definition, execution, persistence, and testing, enabling a clean and maintainable codebase for workflow automation.
Sources: apps/sim/lib/copilot/tools/server/workflow/edit-workflow/builders.ts:1-50
Integrations and Connectors
Related topics: Workflow Blocks System, Background Jobs and Background Processing
Overview
Sim provides a comprehensive integrations and connectors system that enables AI agents to interact with external services, APIs, and platforms. This system forms the backbone of Sim's workflow automation capabilities, allowing users to connect 1,000+ integrations and LLMs to orchestrate agentic workflows.
The connector architecture is designed around a plugin-based system where each integration is implemented as a self-contained module with standardized interfaces for authentication, API communication, and data transformation.
Architecture
The Sim integration system consists of several interconnected layers:
graph TD
A[Workflows / Agents] --> B[Trigger System]
B --> C[Connectors Registry]
C --> D[Individual Connectors]
D --> E[Slack Connector]
D --> F[GitHub Connector]
D --> G[Custom Connectors]
A --> H[Webhook Providers]
H --> I[Gong Webhook]
H --> J[Vercel Webhook]
C --> K[API Contracts]
K --> L[Type Definitions]
D --> M[External APIs]
Core Components
| Component | Purpose | Location |
|---|---|---|
| Connectors Registry | Central hub for managing all connector instances | apps/sim/connectors/registry.ts |
| Type Definitions | Shared interfaces and types for connectors | apps/sim/connectors/types.ts |
| API Contracts | Zod schemas defining API request/response shapes | apps/sim/lib/api/contracts/types.ts |
| Trigger System | Event-driven connector activation | apps/sim/lib/workflows/triggers/triggers.ts |
| Webhook Providers | Inbound integration handlers | apps/sim/lib/webhooks/providers/ |
Connector Registry
The connector registry (apps/sim/connectors/registry.ts) serves as the central management system for all connectors in the platform. It provides:
- Registration and lookup of connector instances
- Configuration management for each connector
- Lifecycle management (initialize, authenticate, execute, cleanup)
- Unified interface for accessing connector functionality
graph LR
A[Request] --> B[Registry Lookup]
B --> C{Connector Found?}
C -->|Yes| D[Execute Connector]
C -->|No| E[Return Error]
D --> F[Transform Response]
F --> G[Return to Caller]
Trigger System
Triggers work in conjunction with connectors to activate workflows based on external events. The trigger system classifies different activation modes:
graph TD
A[Block] --> B{Start Workflow Mode}
B -->|chat| C[Chat Trigger]
B -->|api| D[API Trigger]
B -->|run| E[API Trigger]
B -->|manual| F[Manual Trigger]
B -->|undefined| F
Trigger Types
| Trigger Type | Classification | Use Case |
|---|---|---|
start | Manual/API | Initial workflow activation |
api | API-based | Programmatic workflow execution |
chat | Conversational | Chat-initiated workflows |
manual | User-initiated | Manual workflow triggers |
The trigger reference alias map provides convenient access to trigger types:
export const TRIGGER_REFERENCE_ALIAS_MAP = {
start: TRIGGER_TYPES.START,
api: TRIGGER_TYPES.API,
chat: TRIGGER_TYPES.CHAT,
manual: TRIGGER_TYPES.START,
} as const
Sources: apps/sim/lib/workflows/triggers/triggers.ts:32-37
Webhook Providers
Sim integrates with external services through webhook providers that normalize incoming events into a standardized format.
Gong Webhook Integration
The Gong webhook provider handles call recording and analytics data:
interface GongWebhookPayload {
callId: string
metaData: Record<string, unknown>
parties: unknown[]
context: unknown[]
trackers: unknown[]
topics: unknown[]
highlights: unknown[]
eventType: 'gong.automation_rule'
}
Sources: apps/sim/lib/webhooks/providers/gong.ts:1-15
Vercel Webhook Integration
The Vercel webhook provider processes deployment events with comprehensive metadata extraction:
interface VercelDeploymentData {
id: string
url: string
name: string
meta: Record<string, unknown>
project?: {
id: string
name: string
}
team?: {
id: string
}
user?: {
id: string
}
target?: string
plan?: string
}
Sources: apps/sim/lib/webhooks/providers/vercel.ts:25-40
Slack Integration
Slack is a first-class citizen in Sim's connector ecosystem, with comprehensive API contract definitions:
Slack API Contracts
| Contract | Purpose | Response Type |
|---|---|---|
slackReadMessagesContract | Fetch messages from channels | SlackReadMessagesResponse |
slackAddReactionContract | Add reactions to messages | SlackReactionResponse |
slackDeleteMessageContract | Delete messages | SlackDeleteMessageResponse |
slackUpdateMessageContract | Edit existing messages | SlackUpdateMessageResponse |
slackSendEphemeralContract | Send ephemeral messages | SlackSendEphemeralResponse |
slackDownloadContract | Download files/content | SlackDownloadResponse |
export type SlackReadMessagesResponse = ContractJsonResponse<typeof slackReadMessagesContract>
export type SlackReactionResponse = ContractJsonResponse<typeof slackAddReactionContract>
export type SlackDeleteMessageResponse = ContractJsonResponse<typeof slackDeleteMessageContract>
export type SlackUpdateMessageResponse = ContractJsonResponse<typeof slackUpdateMessageContract>
export type SlackSendEphemeralResponse = ContractJsonResponse<typeof slackSendEphemeralContract>
export type SlackDownloadResponse = ContractJsonResponse<typeof slackDownloadContract>
Sources: apps/sim/lib/api/contracts/tools/communication/slack.ts:1-10
API Contract System
The API contract system uses Zod schemas for runtime validation of all API interactions:
Contract Type Utilities
export type ContractParams<C extends AnyApiRouteContract> = C extends ApiRouteContract<
infer TParams,
ApiSchema | undefined,
ApiSchema | undefined,
ApiSchema | undefined,
ResponseMode,
ApiSchema | undefined
>
? EmptySchemaOutput<TParams>
: undefined
export type ContractBody<C extends AnyApiRouteContract> = C extends ApiRouteContract<
ApiSchema | undefined,
ApiSchema | undefined,
infer TBody,
ApiSchema | undefined,
ResponseMode,
ApiSchema | undefined
>
? EmptySchemaOutput<TBody>
: undefined
Sources: apps/sim/lib/api/contracts/types.ts:1-30
Generic Contract Types
| Type | Description |
|---|---|
ContractParams<C> | Extracted URL parameter types from contract |
ContractQuery<C> | Extracted query string types from contract |
ContractBody<C> | Extracted request body types from contract |
ContractHeaders<C> | Extracted header types from contract |
ContractParamsInput<C> | Input types for contract parameters |
Virtual Filesystem Integration
Connectors are exposed to AI agents through the virtual filesystem (VFS), which materializes workspace data into an in-memory file system structure:
graph TD
A[Workspace] --> B[Virtual Filesystem]
B --> C[workflows/{name}/meta.json]
B --> D[workflows/{name}/state.json]
B --> E[workflows/{name}/executions.json]
B --> F[knowledgebases/{name}/meta.json]
B --> G[connectors.json]
B --> H[triggers/{id}.json]
The VFS exposes connector configurations to agents:
files.set(
'knowledgebases/{name}/connectors.json',
serializeConnectorConfigs(connectorConfigs)
)
Sources: apps/sim/lib/copilot/vfs/workspace-vfs.ts:1-50
Connector Configuration
Connectors follow a standardized configuration schema defined in apps/sim/connectors/types.ts. Each connector instance includes:
- Provider: The external service name (e.g., slack, github)
- Credentials: Authentication tokens and secrets
- Settings: Connector-specific configuration options
- Metadata: Display name, description, category
Adding New Connectors
To add a new connector to the Sim platform:
- Create a new directory under apps/sim/connectors/{provider}/
- Implement the connector class with required interface methods
- Register the connector in the registry
- Define API contracts in apps/sim/lib/api/contracts/
- Add webhook handler if inbound events are needed
- Update the VFS serialization logic if agent access is required
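A minimal sketch of the implement-and-register steps, assuming a registry keyed by provider name; the Connector interface shown here is illustrative, not the actual one in apps/sim/connectors/types.ts.

```typescript
// Illustrative connector interface and registry; names are assumptions.
interface Connector {
  provider: string
  initialize(credentials: Record<string, string>): Promise<void>
  execute(action: string, params: Record<string, unknown>): Promise<unknown>
}

const registry = new Map<string, Connector>()

function registerConnector(connector: Connector): void {
  registry.set(connector.provider, connector)
}

// A toy "echo" connector that simply returns its parameters.
const echoConnector: Connector = {
  provider: 'echo',
  async initialize() {},
  async execute(_action, params) {
    return params
  },
}
registerConnector(echoConnector)
```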
Best Practices
- Always use API contracts for type-safe API calls
- Implement proper error handling and retry logic
- Store credentials securely (never commit to repository)
- Follow the trigger classification pattern for event-driven workflows
- Use the virtual filesystem for any data that should be accessible to agents
Sources: apps/sim/lib/workflows/triggers/triggers.ts:32-37
Agent System
Related topics: Copilot System, Workflow Blocks System
The Agent System is a core component of the Sim platform that enables the execution of AI agents within workflow automation. It provides the infrastructure for creating, managing, and executing agents that can interact with tools, maintain conversation context, and process complex multi-step tasks.
Overview
The Agent System serves as the execution layer for AI-driven automation within Sim workflows. It handles the lifecycle of agent execution, including initialization, tool invocation, state management, memory handling, and result processing.
Core Responsibilities
- Agent Execution Pipeline: Orchestrates the execution of agent logic within workflow contexts
- Memory Management: Maintains conversation history and context across agent interactions
- Skills Resolution: Resolves and binds available skills and tools to agent instances
- State Coordination: Manages agent state transitions and execution checkpoints
- Tool Integration: Handles the invocation and management of external tools and APIs
Architecture
graph TD
A[Workflow Engine] --> B[Agent Handler]
B --> C[Memory Manager]
B --> D[Skills Resolver]
B --> E[Tool Executor]
C --> F[Context Store]
D --> G[Block Registry]
E --> H[Execution Context]
H --> I[Streaming Response]
F --> H
Agent Handler
The agent-handler.ts serves as the primary orchestrator for agent execution. It manages the interaction between the workflow engine and the agent's internal components.
Key Functions
| Function | Purpose | Source |
|---|---|---|
handleAgentExecution | Main entry point for agent processing | agent-handler.ts |
executeToolAndReport | Invokes tools and streams results back | tool.ts:52 |
registerPendingToolPromise | Tracks async tool executions | tool.ts:45 |
abortPendingToolIfStreamDead | Handles stalled tool executions | tool.ts:66 |
Execution Flow
sequenceDiagram
participant Workflow
participant AgentHandler
participant ToolExecutor
participant Memory
participant SkillsResolver
Workflow->>AgentHandler: Execute Agent
AgentHandler->>SkillsResolver: Resolve Available Skills
SkillsResolver-->>AgentHandler: Skill Bindings
AgentHandler->>Memory: Initialize Context
Memory-->>AgentHandler: Context State
AgentHandler->>ToolExecutor: Invoke Tool
ToolExecutor-->>AgentHandler: Tool Result
AgentHandler->>Memory: Update State
AgentHandler-->>Workflow: Execution Result
Tool Execution Modes
The agent handler supports multiple execution modes for tool invocation:
| Mode | Description | Configuration |
|---|---|---|
autoExecuteTools | Automatically execute tools without user confirmation | options.autoExecuteTools !== false |
interactive | Require user confirmation before tool execution | options.interactive === true |
parallel | Execute multiple tools concurrently | Parallel promise registration |
clientExecutable | Delegate execution to client workflow | clientExecutable === true |
Memory System
The Memory System (memory.ts) maintains conversation context and execution history for agents, enabling stateful interactions across multiple workflow steps.
Memory Operations
interface AgentMemory {
conversationHistory: ConversationTurn[]
executionContext: Record<string, any>
blockOutputs: Map<string, BlockOutput>
timestamps: MemoryTimestamps
}
Context Management
| Operation | Description | Source Reference |
|---|---|---|
| Store Turn | Save a conversation interaction | memory.ts |
| Retrieve Context | Load previous state for agent | memory.ts |
| Clear Memory | Reset context for new session | memory.ts |
| Merge Context | Combine multiple context sources | memory.ts |
Skills Resolver
The Skills Resolver (skills-resolver.ts) binds available tools and capabilities to agent instances based on workflow configuration and agent requirements.
Resolution Process
- Skill Discovery: Scan available tool registry for compatible skills
- Capability Matching: Match agent requirements with available tools
- Binding: Create stable references between agent and tools
- Validation: Verify all required skills are available
Skill Configuration
interface SkillBinding {
skillId: string
toolName: string
parameters: SkillParameters
enabled: boolean
priority: number
}
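The four-step resolution process can be sketched as follows, matching skills by id in a simple registry; ResolvedSkill is a trimmed, illustrative stand-in for SkillBinding (parameters and priority omitted).

```typescript
// Trimmed, illustrative binding type; the real SkillBinding also carries
// parameters and priority.
interface ResolvedSkill {
  skillId: string
  toolName: string
  enabled: boolean
}

function resolveSkills(
  required: string[],
  available: Map<string, string> // skillId -> toolName (discovery result)
): ResolvedSkill[] {
  return required.map((skillId) => {
    // Capability matching: look up a tool for each required skill.
    const toolName = available.get(skillId)
    // Validation: every required skill must resolve to an available tool.
    if (!toolName) throw new Error(`Required skill not available: ${skillId}`)
    // Binding: create a stable reference between agent and tool.
    return { skillId, toolName, enabled: true }
  })
}
```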
Type System
The Agent System defines comprehensive TypeScript types in types.ts to ensure type safety across all components.
Core Types
| Type | Description | Usage |
|---|---|---|
StreamingContext | Manages streaming response state | Tool execution tracking |
ExecutionContext | Holds runtime execution data | Block state, outputs |
OrchestratorOptions | Configuration for agent behavior | Execution parameters |
ToolScope | Defines execution scope | main or subagent |
ToolCallState | Tracks individual tool call status | Execution monitoring |
Tool Call States
stateDiagram-v2
[*] --> pending: Tool Call Created
pending --> executing: Execution Started
executing --> success: Completed Successfully
executing --> error: Execution Failed
executing --> cancelled: User Cancelled
success --> skipped: Result Not Needed
error --> retry: Retry Attempt
Agent Block Definition
The Agent Block (agent.ts) defines the block-level configuration and metadata for agents within the Sim workflow system.
Block Structure
interface AgentBlockConfig {
name: string
description: string
category: BlockCategory
inputs: InputSpecification[]
outputs: OutputSpecification[]
parameters: AgentParameters
}
Block Categories
| Category | Description | Example Usage |
|---|---|---|
agent | Primary agent implementation | Main workflow agent |
subagent | Nested agent for delegation | Specialized task agents |
client | Client-delegated execution | External system integration |
Execution Context Factory
The executor-context.factory.ts provides utilities for creating and manipulating executor contexts used throughout the agent system.
Factory Functions
| Function | Purpose | Source |
|---|---|---|
createExecutorContext | Initialize new execution context | executor-context.factory.ts |
createExecutorContextWithBlocks | Create context with pre-executed blocks | executor-context.factory.ts |
addBlockState | Add block state to existing context | executor-context.factory.ts |
createMinimalWorkflow | Create workflow for context | executor-context.factory.ts |
Executor Context Structure
interface ExecutorContext {
blockStates: Map<string, ExecutorBlockState>
executedBlocks: Set<string>
workflow: SerializedWorkflow
connections: SerializedConnection[]
requestId: string
abortSignal?: AbortSignal
}
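A minimal sketch of createExecutorContext and addBlockState, under the assumption that the factory simply seeds empty collections and later layers in block state (the real factory also wires in the serialized workflow and connections, stubbed out here):

```typescript
interface ExecutorBlockState {
  output: unknown
  executed: boolean
}

interface MinimalExecutorContext {
  blockStates: Map<string, ExecutorBlockState>
  executedBlocks: Set<string>
  requestId: string
  abortSignal?: AbortSignal
}

// Initialize a fresh context; callers can then layer in pre-executed block state.
function createExecutorContext(requestId: string, abortSignal?: AbortSignal): MinimalExecutorContext {
  return {
    blockStates: new Map(),
    executedBlocks: new Set(),
    requestId,
    abortSignal,
  }
}

// Mirror of addBlockState: record a block's output and mark it executed.
function addBlockState(ctx: MinimalExecutorContext, blockId: string, output: unknown): MinimalExecutorContext {
  ctx.blockStates.set(blockId, { output, executed: true })
  ctx.executedBlocks.add(blockId)
  return ctx
}
```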
Tool Execution Pipeline
graph LR
A[Tool Call Request] --> B{Interactive Mode?}
B -->|Yes| C{Client Executable?}
B -->|No| D{Auto Execute?}
C -->|Workflow Tool| E[Delegate to Client]
C -->|Sim Executed| F[Execute Tool]
D -->|Yes| G[Fire Tool Execution]
D -->|No| H[Wait for Confirmation]
E --> I[Update Tool State]
F --> I
G --> I
H --> I
I --> J[Report Result]
J --> K[Update Memory]
Result Handling
The agent system handles multiple outcome types for tool executions:
| Outcome | Status | Description |
|---|---|---|
success | Completed successfully | Tool executed without errors |
error | Execution failed | Tool encountered an error |
cancelled | User cancelled | Execution was manually stopped |
skipped | Not needed | Result was no longer required |
Configuration Options
Orchestrator Options
interface OrchestratorOptions {
interactive?: boolean // Require user confirmation
autoExecuteTools?: boolean // Auto-execute without prompt
abortSignal?: AbortSignal // Cancellation token
timeout?: number // Execution timeout
}
Streaming Context
interface StreamingContext {
requestId: string
toolCalls: Map<string, ToolCallState>
pendingPromises: Map<string, Promise<ToolResult>>
onToolResult?: (result: ToolResult) => void
}
Error Handling
The agent system implements comprehensive error handling across all execution paths:
- Tool Execution Errors: Caught and wrapped in standard error format
- Stream Dead Detection: Aborts pending tools when stream becomes unresponsive
- Timeout Handling: Respects abort signals for long-running operations
- State Validation: Ensures consistency before state transitions
Error Response Format
interface ToolErrorResponse {
status: MothershipStreamV1ToolOutcome.error
message: string
data: {
error: string
}
}
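A hedged sketch of the error-wrapping step. The enum name MothershipStreamV1ToolOutcome comes from the source; its members and string values are assumed here, so a local stand-in is used:

```typescript
// Assumed stand-in for the outcome enum referenced above.
enum ToolOutcome {
  success = 'success',
  error = 'error',
}

interface WrappedToolError {
  status: ToolOutcome.error
  message: string
  data: { error: string }
}

// Wrap any thrown value in the standard error format so callers always
// receive a uniform response shape, whether the tool threw an Error or not.
function toToolErrorResponse(err: unknown): WrappedToolError {
  const message = err instanceof Error ? err.message : String(err)
  return { status: ToolOutcome.error, message, data: { error: message } }
}
```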
Integration Points
The Agent System integrates with multiple platform components:
| Component | Integration Type | Data Flow |
|---|---|---|
| Workflow Engine | Parent orchestrator | Initializes agent execution |
| Block Registry | Tool resolution | Discovers available skills |
| Memory Store | State persistence | Maintains conversation history |
| Webhook Providers | External triggers | Receives external events |
| Billing System | Usage tracking | Records execution metrics |
Testing
The Agent System includes comprehensive test coverage in agent-handler.test.ts and related test files, covering:
- Tool execution scenarios (success, failure, cancellation)
- Memory state management
- Skills resolution logic
- Streaming response handling
- Error propagation paths
Source: https://github.com/simstudioai/sim / Human Manual
Copilot System
Related topics: Agent System, Architecture Overview
Copilot System
The Copilot System is a Sim-managed AI-powered service that enables users to generate workflow nodes, fix errors, and iterate on flows directly from natural language instructions. It serves as an intelligent assistant embedded within the Sim platform, providing real-time assistance for workflow creation and modification.
Overview
Copilot acts as an intelligent layer between users and the workflow engine, translating natural language inputs into executable workflow components. The system leverages large language models to understand user intent and generate appropriate code blocks, connections, and configurations within the Sim workflow environment.
Key Capabilities
| Capability | Description |
|---|---|
| Node Generation | Create new workflow blocks from natural language descriptions |
| Error Resolution | Identify and fix issues in existing workflows |
| Flow Iteration | Modify and improve workflow structures through conversational commands |
| Analytics Tracking | Monitor all Copilot operations for performance and billing |
Architecture
The Copilot System consists of multiple API endpoints and tracking components that work together to provide a seamless AI-assisted experience.
graph TD
A[User Input] --> B[Copilot API Layer]
B --> C[Chat Stream Route]
B --> D[Checkpoints Route]
B --> E[Models Route]
B --> F[Training Route]
C --> G[Trace Span Tracking]
G --> H[Analytics Collection]
H --> I[Billing System]
API Endpoints
The Copilot System exposes several REST API endpoints for different operations:
Chat Stream Endpoint
Handles real-time streaming of chat responses for Copilot interactions. This endpoint manages the bidirectional communication between the client and the AI model, providing immediate feedback as the Copilot processes natural language requests.
Checkpoints Route
Provides functionality for saving and retrieving workflow checkpoints during Copilot-assisted editing. This allows users to maintain version history and revert to previous states if needed.
Models Route
Manages the available AI models that power Copilot functionality. The system supports multiple model configurations and allows for dynamic model selection based on task requirements.
Training Route
Handles model fine-tuning and custom training workflows. This endpoint enables the system to learn from user interactions and improve response accuracy over time.
Trace Span Instrumentation
The Copilot System implements comprehensive OpenTelemetry trace spans for observability and monitoring. All trace span identifiers are defined in a generated contract file that ensures type safety and consistency between the frontend and backend.
Trace Span Categories
| Category | Spans | Purpose |
|---|---|---|
| Chat Operations | chat.* | Track conversation flow and tool usage |
| Analytics | copilot.analytics.* | Monitor request metrics and billing |
| Context Management | context.* | Track context window operations |
| Authentication | auth.* | Security and rate limiting events |
Key Trace Span Identifiers
| Identifier | Description |
|---|---|
copilot.analytics.flush | Analytics batch flush operation |
copilot.analytics.save_request | Persist individual request data |
copilot.analytics.update_billing | Update billing metrics |
chat.setup | Initialize chat session |
chat.continue_with_tool_results | Process tool execution results |
context.reduce | Context window reduction |
context.summarize_chunk | Summarize large context chunks |
auth.validate_key | API key validation |
auth.rate_limit.record | Rate limit tracking |
Integration with Workflows
Copilot integrates deeply with the Sim workflow engine through the block system. When generating nodes or fixing errors, Copilot communicates with the workflow registry to validate and persist changes.
Workflow Registry Integration
The system uses Zustand for state management when interacting with workflow data:
// Mock structure from test files
useWorkflowRegistry: {
getState: () => ({
activeWorkflowId: null,
}),
}
Block Generation
Copilot generates workflow blocks by:
- Parsing natural language input
- Identifying required block types from the registry
- Validating block configurations against schema definitions
- Persisting generated blocks to the workflow state
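The validation step (step 3) can be sketched as a registry lookup plus a required-parameter check. The BlockSchema and GeneratedBlock shapes below are illustrative, not the real Sim types, which are validated with Zod:

```typescript
interface BlockSchema {
  type: string
  requiredParams: string[]
}

interface GeneratedBlock {
  type: string
  params: Record<string, unknown>
}

// Validate a generated block against its registry schema, returning a list
// of human-readable problems (empty when the block is valid).
function validateBlock(block: GeneratedBlock, registry: Map<string, BlockSchema>): string[] {
  const schema = registry.get(block.type)
  if (!schema) return [`unknown block type: ${block.type}`]
  return schema.requiredParams
    .filter((p) => !(p in block.params))
    .map((p) => `missing required parameter: ${p}`)
}
```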
Self-Hosted Deployment
For self-hosted Sim instances, Copilot requires separate configuration:
API Key Setup
- Navigate to https://sim.ai
- Go to Settings → Copilot
- Generate a Copilot API key
- Set the COPILOT_API_KEY environment variable in apps/sim/.env
# Example environment variable
COPILOT_API_KEY=your_generated_api_key_here
Environment Configuration
The Copilot API key must be configured alongside other environment variables defined in the project's .env.example file. The system validates the API key on each request through the auth.validate_key trace span.
Analytics and Billing
The Copilot System implements a comprehensive analytics pipeline:
Analytics Flow
graph LR
A[User Request] --> B[Save Request]
B --> C[Update Billing]
C --> D[Flush Analytics]
D --> E[Persist to Storage]
Tracked Metrics
| Metric | Trace Span | Description |
|---|---|---|
| Request Count | copilot.analytics.save_request | Individual Copilot invocations |
| Billing Units | copilot.analytics.update_billing | Usage-based billing data |
| Flush Events | copilot.analytics.flush | Batch processing completion |
Error Handling
The Copilot System handles various error scenarios through dedicated trace spans:
| Error Type | Trace Span | Handling |
|---|---|---|
| Explicit Abort | chat.explicit_abort.* | Graceful termination of requests |
| Rate Limiting | auth.rate_limit.record | Throttling and quota enforcement |
| Auth Failures | auth.validate_key | Invalid API key rejection |
Dependencies
The Copilot functionality is managed through the Bun workspace and depends on the following core packages:
- Next.js (App Router) - API route handling
- Drizzle ORM - Data persistence
- Zod - Schema validation for API contracts
- Zustand - Client-side state management
Development dependencies include the documentation generator script located at scripts/generate-docs.ts which can be run with bun run generate-docs.
Summary
The Copilot System provides intelligent assistance for workflow creation and modification within Sim. Through a combination of streaming chat APIs, comprehensive trace instrumentation, and deep workflow integration, it enables natural language-driven development experiences. The system is designed for both cloud-hosted and self-hosted deployments, with full observability through OpenTelemetry trace spans and analytics tracking.
Deployment Guide
Related topics: Technology Stack, Background Jobs and Background Processing
Deployment Guide
Sim is a workflow automation platform that supports multiple deployment configurations. This guide covers self-hosted deployment options including Docker, Docker Compose, manual setup, and Kubernetes via Helm charts.
Overview
Sim can be deployed in several ways depending on your infrastructure requirements and operational capabilities:
| Deployment Method | Use Case | Complexity |
|---|---|---|
| NPM Package (Docker) | Quick local testing | Low |
| Docker Compose | Single-server production | Medium |
| Manual Setup | Custom infrastructure | High |
| Helm Chart | Kubernetes clusters | Medium-High |
Prerequisites
Hardware Requirements
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 2 cores | 4+ cores |
| Memory | 4 GB RAM | 8+ GB RAM |
| Disk | 20 GB | 50+ GB SSD |
| Docker | 20.10+ | Latest |
Software Requirements
- Docker must be installed and running
- Bun runtime (for manual setup)
- Node.js v20+ (for manual setup)
- PostgreSQL 12+ with pgvector extension (for manual setup)
Self-Hosted: NPM Package (Docker)
The fastest way to get started with Sim using Docker.
Quick Start
npx simstudio
This launches Sim at http://localhost:3000.
Sources: README.md
Command Options
| Flag | Description | Default |
|---|---|---|
-p, --port <port> | Port to run Sim on | 3000 |
--no-pull | Skip pulling latest Docker images | - |
Example Usage
# Run on custom port
npx simstudio --port 8080
# Skip image pull (use cached images)
npx simstudio --no-pull
Self-Hosted: Docker Compose
For production deployments on a single server, use the production Docker Compose configuration.
Standard Deployment
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
Open http://localhost:3000 to access Sim.
Sources: README.md
Architecture
graph TB
subgraph "Docker Compose Stack"
A["Next.js App<br>:3000"] --> B["PostgreSQL<br>:5432"]
A --> C["Redis<br>:6379"]
A --> D["Realtime Service<br>:3001"]
D --> B
D --> C
end
E["External Services"] --> A
Services
| Service | Image | Port | Purpose |
|---|---|---|---|
| app | simstudio/sim-app | 3000 | Main Next.js application |
| realtime | simstudio/sim-realtime | 3001 | WebSocket/real-time events |
| postgres | pgvector/pgvector | 5432 | Database with vector support |
| redis | redis:alpine | 6379 | Caching and session storage |
Self-Hosted: Local Models (Ollama/vLLM)
Sim supports local AI models via Ollama and vLLM for privacy-focused or offline deployments.
Ollama Integration
Sim integrates with Ollama to run local models for workflow execution.
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.ollama.yml up -d
Ollama Configuration
graph LR
A["Sim App"] --> B["Ollama Service"]
B --> C["Local Models<br>llama2, mistral, etc."]
Supported Ollama Environment Variables
| Variable | Description | Default |
|---|---|---|
OLLAMA_HOST | Ollama server URL | http://localhost:11434 |
OLLAMA_MODEL | Default model to use | - |
vLLM Integration
For high-performance local inference, configure vLLM:
# docker-compose.override.yml
services:
app:
environment:
VLLM_HOST: "http://vllm:8000"
VLLM_MODEL: "meta-llama/Llama-2-7b-hf"
See the Docker self-hosting docs for detailed setup instructions.
Sources: README.md
Self-Hosted: Manual Setup
For custom infrastructure or development environments, install Sim manually.
Step 1: Clone and Install Dependencies
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
bun run prepare # Set up pre-commit hooks
Step 2: PostgreSQL with pgvector Setup
docker run --name simstudio-db \
-e POSTGRES_PASSWORD=your_password \
-e POSTGRES_DB=simstudio \
-p 5432:5432 \
-d \
pgvector/pgvector:pg16
Step 3: Environment Configuration
Copy the example environment file and configure:
cd apps/sim
cp .env.example .env
# Edit .env with your configuration
Application Environment Variables
| Variable | Description | Required |
|---|---|---|
DATABASE_URL | PostgreSQL connection string | Yes |
BETTER_AUTH_SECRET | Secret for authentication | Yes |
ENCRYPTION_KEY | Data encryption key | Yes |
NEXT_PUBLIC_APP_URL | Public application URL | Yes |
BETTER_AUTH_URL | Authentication service URL | Yes |
INTERNAL_API_SECRET | Internal API authentication | Yes |
CRON_SECRET | Cron job authentication | Yes |
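A minimal apps/sim/.env sketch covering the required variables from the table above. All values are placeholders; generate real secrets (for example with openssl rand -hex 32) before starting the app:

```shell
DATABASE_URL=postgresql://postgres:your_password@localhost:5432/simstudio
BETTER_AUTH_SECRET=replace_with_random_hex_secret
ENCRYPTION_KEY=replace_with_random_hex_secret
NEXT_PUBLIC_APP_URL=http://localhost:3000
BETTER_AUTH_URL=http://localhost:3000
INTERNAL_API_SECRET=replace_with_random_hex_secret
CRON_SECRET=replace_with_random_hex_secret
```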
Step 4: Build and Start
bun run build
bun run start
Development Environment (Dev Container)
The repository includes a pre-configured development container.
Structure
graph TB
subgraph ".devcontainer"
A["Dev Container"] --> B["PostgreSQL"]
A --> C["Redis"]
A --> D["MailHog"]
end
A --> E["Sim App"]
A --> F["Realtime Service"]
Dev Container Services
| Service | Port | Purpose |
|---|---|---|
| Sim App | 3000 | Main application |
| PostgreSQL | 5432 | Database |
| Redis | 6379 | Caching |
| MailHog | 8025 | Email testing |
Helm Chart Deployment
For Kubernetes clusters, use the official Helm chart.
Installation
helm install sim ./helm/sim \
--namespace sim \
--create-namespace
Production Configuration
# values.yaml
app:
replicaCount: 3
env:
NEXT_PUBLIC_APP_URL: "https://sim.example.com"
BETTER_AUTH_URL: "https://sim.example.com"
postgresql:
auth:
database: simstudio
primary:
persistence:
size: 50Gi
monitoring:
enabled: true
prometheus:
enabled: true
Secrets Management
The Helm chart supports three methods for managing secrets, in order of production-readiness:
Method 1: Inline --set (Development Only)
helm install sim ./helm/sim --set app.env.BETTER_AUTH_SECRET=...
⚠️ Warning: Values set this way appear in helm get values output. Not recommended for production.
Method 2: Pre-existing Kubernetes Secret
kubectl create secret generic sim-app-secrets --namespace sim \
--from-literal=BETTER_AUTH_SECRET=$(openssl rand -hex 32) \
--from-literal=ENCRYPTION_KEY=$(openssl rand -hex 32) \
--from-literal=INTERNAL_API_SECRET=$(openssl rand -hex 32) \
--from-literal=CRON_SECRET=$(openssl rand -hex 32)
kubectl create secret generic sim-postgres-secret --namespace sim \
--from-literal=POSTGRES_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=')
Reference secrets in values:
app:
secrets:
existingSecret:
enabled: true
name: sim-app-secrets
postgresql:
auth:
existingSecret:
enabled: true
name: sim-postgres-secret
passwordKey: POSTGRES_PASSWORD
Method 3: External Secrets Operator (Recommended for Production)
Integrate with AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault.
Autoscaling Configuration
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 20
targetCPUUtilizationPercentage: 70
targetMemoryUtilizationPercentage: 80
When autoscaling.enabled=true, the chart omits spec.replicas from the Deployment so the HPA owns the replica count. This requires metrics-server in the cluster.
Sources: helm/sim/README.md
Network Policy
Enable east-west isolation and block cloud metadata endpoints:
networkPolicy:
enabled: true
Key Helm Configuration Reference
| Parameter | Description | Default |
|---|---|---|
app.replicaCount | Number of app replicas | 1 |
app.image.repository | App image repository | simstudio/sim-app |
app.image.tag | App image tag | appVersion |
app.env.NEXT_PUBLIC_APP_URL | Public app URL | localhost:3000 |
app.env.BETTER_AUTH_URL | Auth service URL | localhost:3000 |
autoscaling.enabled | Enable HPA | false |
monitoring.enabled | Enable monitoring | false |
networkPolicy.enabled | Enable network policies | false |
Important URLs Configuration
⚠️ Critical: app.env.NEXT_PUBLIC_APP_URL and app.env.BETTER_AUTH_URL must match your public origin (e.g., https://sim.example.com). Leaving them as localhost breaks sign-in functionality.
Environment Variables Reference
Application (.env.example)
| Variable | Required | Description |
|---|---|---|
DATABASE_URL | Yes | PostgreSQL connection string |
BETTER_AUTH_SECRET | Yes | Authentication secret |
BETTER_AUTH_URL | Yes | Authentication service URL |
NEXT_PUBLIC_APP_URL | Yes | Public application URL |
ENCRYPTION_KEY | Yes | Data encryption key |
INTERNAL_API_SECRET | Yes | Internal API secret |
CRON_SECRET | Yes | Cron job secret |
REDIS_URL | No | Redis connection URL |
SOCKET_SERVER_URL | No | WebSocket server URL |
OLLAMA_URL | No | Ollama server URL |
SMTP_* | No | Email configuration |
Realtime Service (.env.example)
| Variable | Required | Description |
|---|---|---|
REDIS_URL | Yes | Redis connection URL |
DATABASE_URL | Yes | PostgreSQL connection string |
INTERNAL_API_SECRET | Yes | Internal API secret |
Testing Deployments
Load Testing
The repository includes Artillery load testing configurations:
# Workflow load testing
bunx artillery run scripts/load/workflow-waves.yml
# Isolation testing
bunx artillery run scripts/load/workflow-isolation.yml
Docker Health Checks
# Check service status
docker compose ps
# View logs
docker compose logs -f app
# Restart services
docker compose restart
Troubleshooting
Common Issues
| Issue | Solution |
|---|---|
| Sign-in fails | Verify NEXT_PUBLIC_APP_URL and BETTER_AUTH_URL match public origin |
| Database connection failed | Check DATABASE_URL and ensure PostgreSQL is running |
| WebSocket connection failed | Verify SOCKET_SERVER_URL is accessible |
| Image pull fails | Use --no-pull flag or check Docker registry access |
| Autoscaling not working | Ensure metrics-server is installed in cluster |
Log Locations
| Environment | Log Command |
|---|---|
| Docker Compose | docker compose logs -f [service] |
| Kubernetes | kubectl logs -n sim -l app=sim |
| Helm | helm status sim -n sim |
Next Steps
- Configuration Reference - Complete environment variable documentation
- Local Model Setup - Ollama and vLLM configuration
- Monitoring Setup - Prometheus and Grafana integration
- Security Hardening - Production security recommendations
Background Jobs and Background Processing
Related topics: Workflow Executor Engine, Deployment Guide
Background Jobs and Background Processing
Overview
The Sim platform uses a robust background job system to handle asynchronous, long-running, and resource-intensive operations outside the request-response cycle. This architecture enables workflows, schedules, webhooks, and other processing tasks to execute reliably without blocking user interactions.
The background processing system is designed around a job queue abstraction that supports multiple backend implementations, allowing the platform to scale horizontally and handle high-throughput scenarios with proper concurrency control.
Architecture
System Components
The background processing architecture consists of three main layers:
- Job Queue Interface - A unified abstraction for enqueuing, monitoring, and managing jobs
- Backend Implementations - Pluggable backends (database, trigger.dev) that handle actual job processing
- Job Handlers - Specific implementations for different job types (workflow, schedule, webhook, etc.)
graph TD
subgraph "Job Producers"
API[API Request]
Schedule[Scheduled Trigger]
Webhook[Webhook Trigger]
Table[Table Cell Execution]
end
subgraph "Job Queue Interface"
Queue[JobQueue API]
Enqueue[enqueue / batchEnqueue]
GetJob[getJob / startJob]
Cancel[cancelJob]
end
subgraph "Backends"
DB[(Database Backend)]
TD[Trigger.dev Backend]
end
subgraph "Job Handlers"
WE[Workflow Execution]
SE[Schedule Execution]
HE[Webhook Execution]
KC[Knowledge Connector Sync]
RE[Resume Execution]
end
API --> Queue
Schedule --> Queue
Webhook --> Queue
Table --> Queue
Queue --> Enqueue
Queue --> GetJob
Queue --> Cancel
Enqueue --> DB
Enqueue --> TD
DB --> WE
DB --> SE
DB --> HE
DB --> KC
DB --> RE
TD --> WE
Backend Types
The system supports two backend implementations for job queues:
| Backend Type | Identifier | Description |
|---|---|---|
| Database | database | Built-in queue using database storage, suitable for self-hosted deployments |
| Trigger.dev | trigger-dev | External job processing service for cloud deployments |
Sources: apps/sim/lib/core/async-jobs/types.ts:40
Job Queue Interface
Core Interface Methods
The JobQueue interface provides a unified API for all job operations:
| Method | Parameters | Returns | Description |
|---|---|---|---|
| enqueue | type: JobType, payload: TPayload, options?: EnqueueOptions | Promise<string> | Add a single job to the queue |
| batchEnqueue | type: JobType, items: Array<{payload, options?}> | Promise<string[]> | Add multiple jobs as a batch |
| getJob | jobId: string | Promise<Job \| null> | Retrieve job by ID |
| startJob | jobId: string | Promise<void> | Mark job as started/processing |
| completeJob | jobId: string, output: unknown | Promise<void> | Mark job as completed with output |
| markJobFailed | jobId: string, error: string | Promise<void> | Mark job as failed with error |
| cancelJob | jobId: string | Promise<void> | Request job cancellation |
Sources: apps/sim/lib/core/async-jobs/types.ts:1-30
Job Configuration Options
Jobs can be configured with the following options:
| Option | Type | Description |
|---|---|---|
metadata | object | Additional metadata including workflow ID, workspace ID, and correlation data |
concurrencyKey | string | Key for per-key concurrency limiting |
concurrencyLimit | number | Maximum concurrent jobs for this key (database backend only) |
tags | string[] | Tags for categorization (e.g., tableId:xxx, rowId:yyy) |
runner | function | Custom job body for database backend when no external worker exists |
Sources: apps/sim/lib/core/async-jobs/types.ts:55-75
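To show the enqueue, getJob, and completeJob lifecycle end to end, here is a minimal in-memory stand-in for the JobQueue interface (a sketch, not the real database backend):

```typescript
type JobStatus = 'pending' | 'processing' | 'completed' | 'failed' | 'cancelled'

interface QueuedJob {
  id: string
  type: string
  payload: unknown
  status: JobStatus
  tags: string[]
}

interface BasicEnqueueOptions {
  tags?: string[]
  concurrencyKey?: string
}

// In-memory queue implementing the core lifecycle methods from the table above.
class InMemoryJobQueue {
  private jobs = new Map<string, QueuedJob>()
  private nextId = 0

  async enqueue(type: string, payload: unknown, options?: BasicEnqueueOptions): Promise<string> {
    const id = `job-${++this.nextId}`
    this.jobs.set(id, { id, type, payload, status: 'pending', tags: options?.tags ?? [] })
    return id
  }

  async getJob(jobId: string): Promise<QueuedJob | null> {
    return this.jobs.get(jobId) ?? null
  }

  async completeJob(jobId: string): Promise<void> {
    const job = this.jobs.get(jobId)
    if (job) job.status = 'completed'
  }
}
```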
Job Types
The platform defines several job types for different processing scenarios:
export type JobType =
| 'workflow'
| 'workflow-group-cell'
| 'schedule'
| 'webhook'
| 'knowledge-connector-sync'
| 'resume'
Workflow Execution (`workflow`)
The core job type for executing workflows. Handles the full lifecycle from triggering to completion.
Handler: executeWorkflowJob
Payload includes:
- workflowId - Target workflow identifier
- workspaceId - Workspace containing the workflow
- input - Input data for the workflow
- executionId - Unique execution identifier
- source - Trigger source (e.g., 'api', 'table', 'schedule')
Sources: apps/sim/background/workflow-execution.ts
Workflow Group Cell (`workflow-group-cell`)
Executes workflow groups for table rows. Supports high-concurrency table-based workflow execution.
Handler: executeWorkflowGroupCellJob
Key Features:
- Table concurrency limiting (TABLE_CONCURRENCY_LIMIT)
- Per-row execution tracking
- Correlation with table and row identifiers
sequenceDiagram
participant Table as Table Scheduler
participant Queue as Job Queue
participant Worker as Cell Worker
Table->>Queue: batchEnqueue(workflow-group-cell, runs[])
Queue-->>Table: jobIds[]
Worker->>Queue: getJob(jobId)
Worker->>Worker: executeWorkflowGroupCellJob(payload)
Worker->>Queue: completeJob(jobId, output)
Sources: apps/sim/lib/table/workflow-columns.ts:30-60
Schedule Execution (`schedule`)
Handles time-based workflow triggers defined by schedules.
Handler: executeScheduleJob
Sources: apps/sim/background/schedule-execution.ts
Webhook Execution (`webhook`)
Processes incoming webhook payloads and triggers associated workflows.
Handler: executeWebhookJob
Sources: apps/sim/background/webhook-execution.ts
Knowledge Connector Sync (`knowledge-connector-sync`)
Synchronizes data between external knowledge sources and the platform.
Handler: executeKnowledgeConnectorSyncJob
Sources: apps/sim/background/knowledge-connector-sync.ts
Resume Execution (`resume`)
Resumes previously paused or checkpointed workflow executions.
Handler: executeResumeJob
Sources: apps/sim/background/resume-execution.ts
Concurrency Control
Table Concurrency
For table-based workflow execution, the system enforces a concurrency limit to prevent resource exhaustion:
const TABLE_CONCURRENCY_LIMIT = 5
Jobs for the same table are grouped by concurrencyKey to ensure ordered processing while allowing parallel execution across different tables.
Sources: apps/sim/lib/table/workflow-columns.ts:50
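Per-key concurrency limiting can be sketched as follows (a simplified scheduler for illustration; the actual backend enforces this at the queue level):

```typescript
const TABLE_CONCURRENCY_LIMIT = 5

// Given pending runs and a map of currently running counts per key,
// return the ids that can start now without exceeding the per-key limit.
// Runs sharing a concurrencyKey compete for the same slots, while runs
// for different keys (different tables) proceed in parallel.
function startableRuns(
  pending: { id: string; concurrencyKey: string }[],
  running: Map<string, number>,
  limit: number = TABLE_CONCURRENCY_LIMIT
): string[] {
  const started: string[] = []
  const counts = new Map(running)
  for (const run of pending) {
    const current = counts.get(run.concurrencyKey) ?? 0
    if (current < limit) {
      started.push(run.id)
      counts.set(run.concurrencyKey, current + 1)
    }
  }
  return started
}
```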
Job Tagging
Jobs are tagged for tracking and monitoring:
| Tag Format | Example | Purpose |
|---|---|---|
tableId:{id} | tableId:abc123 | Identifies the source table |
rowId:{id} | rowId:row456 | Identifies the source row |
group:{id} | group:grp789 | Identifies the workflow group |
Sources: apps/sim/lib/table/workflow-columns.ts:55
Job Correlation and Tracing
Metadata Structure
Each job carries correlation metadata for distributed tracing:
interface CorrelationData {
executionId: string
requestId: string
source: 'workflow' | 'api' | 'schedule' | 'webhook' | 'table'
workflowId: string
triggerType: string
}
Request ID Format
Request IDs follow a consistent naming convention based on job type:
| Job Type | Request ID Format | Example |
|---|---|---|
| Workflow Group Cell | wfgrp-{executionId} | wfgrp-exec-123 |
Sources: apps/sim/lib/table/workflow-columns.ts:43
Job Lifecycle
State Transitions
stateDiagram-v2
[*] --> Queued: enqueue()
Queued --> Processing: startJob()
Processing --> Completed: completeJob()
Processing --> Failed: markJobFailed()
Processing --> Cancelled: cancelJob()
Queued --> Cancelled: cancelJob()
Cancelled --> [*]
Completed --> [*]
Failed --> [*]
Status Definitions
| Status | Description |
|---|---|
pending | Job is queued but not yet picked up |
queued | Job is in the queue (alternative state) |
processing | Job is currently being executed |
completed | Job finished successfully |
failed | Job encountered an error |
cancelled | Job was cancelled before completion |
Error Handling and Cancellation
Best-Effort Cancellation
The cancelJob method implements best-effort cancellation:
- Unknown or already-completed jobs resolve quietly (no error thrown)
- Underlying provider rejections fail loudly to alert operators
/**
* Request cancellation of a queued or running job. Best-effort: backends should
* fail loudly if the underlying provider rejects, but a missing/unknown jobId
* should resolve quietly so callers can drive cancel from possibly-stale state.
*/
cancelJob(jobId: string): Promise<void>
Sources: apps/sim/lib/core/async-jobs/types.ts:28-33
Runner Functions
For the database backend, jobs include a runner function that is executed as a fire-and-forget IIFE (Immediately Invoked Function Expression). This allows the database row to drive the job through processing states:
runner?: <TPayload>(
payload: TPayload,
signal: AbortSignal
) => Promise<void>
The AbortSignal is driven by cancelJob, enabling graceful shutdown of cancelled jobs.
Sources: apps/sim/lib/core/async-jobs/types.ts:62-69
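One plausible way a runner body might honor the AbortSignal is to check it between units of work, so a cancelJob-driven abort stops the job at the next safe point. This is a sketch of the pattern, not the actual backend code:

```typescript
// Process items in sequence, checking the signal between items so an
// abort (driven by cancelJob) causes a graceful early exit.
async function runWithAbort(
  items: number[],
  work: (item: number) => Promise<void>,
  signal: AbortSignal
): Promise<number> {
  let processed = 0
  for (const item of items) {
    if (signal.aborted) break // stop cleanly once cancellation is requested
    await work(item)
    processed++
  }
  return processed
}
```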
Batch Enqueue Operations
Batch Processing Flow
graph LR
A[Pending Runs] --> B[Map to Job Items]
B --> C{Backend Type}
C -->|Database| D[Single Multi-Row INSERT]
C -->|Trigger.dev| E[tasks.batchTrigger]
D --> F[Return jobIds in input order]
E --> F
F --> G[Promise.allSettled Fallback]
G -->|If batch fails| H[Individual Enqueue]
The batch enqueue operation:
- Maps pending runs to job items with full metadata and options
- Attempts batch enqueue via the queue backend
- Falls back to individual enqueue if batch fails
- Returns one jobId per item in input order
Sources: apps/sim/lib/table/workflow-columns.ts:60-75
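The batch-then-fallback flow can be sketched with a hypothetical helper (the real implementation lives in workflow-columns.ts; the 'failed' placeholder id for a rejected individual enqueue is an assumption made here to keep the output aligned with the input order):

```typescript
interface QueueLike {
  batchEnqueue(type: string, items: { payload: unknown }[]): Promise<string[]>
  enqueue(type: string, payload: unknown): Promise<string>
}

// Try the batch path first; if the backend rejects it, fall back to
// enqueuing each item individually via Promise.allSettled so one result
// is still returned per input, in input order.
async function enqueueWithFallback(
  queue: QueueLike,
  type: string,
  items: { payload: unknown }[]
): Promise<string[]> {
  try {
    return await queue.batchEnqueue(type, items)
  } catch {
    const results = await Promise.allSettled(items.map((i) => queue.enqueue(type, i.payload)))
    return results.map((r) => (r.status === 'fulfilled' ? r.value : 'failed'))
  }
}
```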
Integration with SDKs
Python SDK Async Execution
The Python SDK provides async execution support through the execute_workflow method with async_execution=True:
result = client.execute_workflow(
'workflow-id',
{'message': 'Hello'},
async_execution=True
)
# Returns AsyncExecutionResult with job_id and status_url
TypeScript SDK Async Execution
Similarly, the TypeScript SDK supports async execution:
const result = await client.executeWorkflow('workflow-id', { data: 'input' }, {
asyncExecution: true
});
// Returns AsyncExecutionResult with jobId
Job status can be monitored via getJobStatus(jobId).
Sources: packages/python-sdk/README.md, packages/ts-sdk/README.md
Testing Support
The testing utilities in packages/testing provide factories for creating workflow test fixtures:
- createWorkflowState() - Base workflow state
- createLinearWorkflow(n) - Sequential workflow with n blocks
- createBranchingWorkflow() - Conditional branching workflow
Sources: packages/testing/src/factories/workflow.factory.ts
Summary
The background job system in Sim provides:
- Unified Queue Interface - Consistent API across different job types and backends
- Multiple Backend Support - Database for self-hosted, Trigger.dev for cloud deployments
- Rich Job Metadata - Correlation data, tags, and concurrency controls for observability
- Reliable Execution - State management, cancellation support, and retry capabilities
- Batch Operations - Efficient bulk enqueue with fallback to individual operations
- SDK Integration - Async execution support in both Python and TypeScript SDKs
Sources: apps/sim/lib/core/async-jobs/types.ts:40
Doramagic Pitfall Log
Doramagic extracted 16 source-linked risk signals; the 12 listed below are exposed on this page. Review them before installing or handing real data to the project.
1. Installation risk: Open-source general purpose agent with built-in MCPToolkit support
- Severity: medium
- Finding: Open-source general purpose agent with built-in MCPToolkit support 15 May 2025 · ... MCP for local-agent workflows · r/LocalLLaMA - A visual ... r/commandline - CLI tool to simplify open source monitoring agent installation.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: social_signal:reddit | ssig_7a250aac9fa1441c8186a7b73d669d8f | https://www.reddit.com/r/LocalLLaMA/comments/1kn8m8t/opensource_general_purpose_agent_with_builtin/ | Open-source general purpose agent with built-in MCPToolkit support
2. Configuration risk: Configuration risk needs validation
- Severity: medium
- Finding: Configuration risk is backed by a source signal: Configuration risk needs validation. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: capability.host_targets | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | host_targets=cursor
3. Capability assumption: README/documentation is current enough for a first validation pass.
- Severity: medium
- Finding: README/documentation is current enough for a first validation pass.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: capability.assumptions | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | README/documentation is current enough for a first validation pass.
4. Project risk: v0.6.63
- Severity: medium
- Finding: Project risk is backed by a source signal: v0.6.63. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.63
5. Project risk: v0.6.65
- Severity: medium
- Finding: Project risk is backed by a source signal: v0.6.65. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.65
6. Project risk: v0.6.67
- Severity: medium
- Finding: Project risk is backed by a source signal: v0.6.67. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.67
7. Project risk: v0.6.73
- Severity: medium
- Finding: Project risk is backed by a source signal: v0.6.73. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.73
8. Maintenance risk: v0.6.71
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: v0.6.71. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.71
9. Maintenance risk: Maintainer activity is unknown
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: evidence.maintainer_signals | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | last_activity_observed missing
10. Security or permission risk: no_demo
- Severity: medium
- Finding: no_demo
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: downstream_validation.risk_items | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | no_demo; severity=medium
11. Security or permission risk: No sandbox install has been executed yet; downstream must verify before user use.
- Severity: medium
- Finding: No sandbox install has been executed yet; downstream must verify before user use.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: risks.safety_notes | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | No sandbox install has been executed yet; downstream must verify before user use.
12. Security or permission risk: no_demo
- Severity: medium
- Finding: no_demo
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: risks.scoring_risks | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | no_demo; severity=medium
Source: Doramagic discovery, validation, and Project Pack records
Community Discussion Evidence
Doramagic exposes project-level community discussion separately from official documentation. Review these links before using sim with real data or production workflows.
- Unable to Access Files from Private GitHub Repository in Self-Hosted Set - github / github_issue
- v0.6.73 - github / github_release
- v0.6.72 - github / github_release
- v0.6.71 - github / github_release
- v0.6.69 - github / github_release
- v0.6.68 - github / github_release
- v0.6.67 - github / github_release
- v0.6.66 - github / github_release
- v0.6.65 - github / github_release
- v0.6.64 - github / github_release
- v0.6.63 - github / github_release
- Morgan (@morganlinton) on X - x / searxng_indexed
Source: Project Pack community evidence and pitfall evidence