Doramagic Project Pack · Human Manual


Project Introduction

Related topics: Technology Stack, Architecture Overview


Sim is an AI-powered workflow automation platform that enables users to build, deploy, and manage intelligent automation pipelines. The platform combines visual workflow design with AI capabilities, allowing teams to create sophisticated automation workflows without extensive coding knowledge.

Overview

Sim Studio provides a modern approach to workflow automation by integrating large language models (LLMs) directly into the automation pipeline. The platform supports both cloud-hosted and self-hosted deployment options, giving organizations flexibility in how they manage their automation infrastructure.

The project is structured as a monorepo containing multiple packages:

| Package | Purpose |
|---|---|
| `apps/sim` | Main web application |
| `packages/python-sdk` | Python SDK for programmatic access |
| `packages/ts-sdk` | TypeScript SDK for programmatic access |
| `scripts` | Automation and utility scripts |

Sources: README.md:1-20

Key Features

AI-Native Automation

Sim leverages AI capabilities throughout the platform, enabling intelligent decision-making within workflows. The system supports integration with various AI providers including Ollama and vLLM for local model deployment.

Sources: README.md:45-48

Multiple SDK Support

The platform provides official SDKs for both Python and TypeScript ecosystems, enabling developers to:

  • Execute workflows programmatically
  • Manage workflow deployments
  • Monitor execution status and results
  • Handle async job execution with polling

Sources: packages/python-sdk/README.md:1-50, packages/ts-sdk/README.md:1-40
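The "async job execution with polling" pattern above can be sketched as a small helper. This is an illustrative sketch, not the SDK's actual API: `get_status` stands in for whatever SDK call fetches a job's state.

```python
import time

def poll_until_complete(get_status, interval=1.0, timeout=30.0):
    """Poll a status callable until the job leaves the 'running' state.

    `get_status` is any callable returning a dict with a 'status' key --
    here it stands in for an SDK call that fetches a job by ID.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status()
        if job["status"] != "running":
            return job
        time.sleep(interval)
    raise TimeoutError("job did not complete in time")

# Simulate a job that finishes on the third poll.
states = iter([{"status": "running"}, {"status": "running"}, {"status": "completed"}])
result = poll_until_complete(lambda: next(states), interval=0.01)
```

A real client would also cap total polls and back off on rate-limit responses.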

Extensible Architecture

Sim includes support for various webhook providers and integrations:

| Provider | Integration Type |
|---|---|
| Webflow | CMS Webhook |
| Typeform | Form Response |
| Gong | Call Recording |
| Vercel | Deployment Events |
| Ashby | ATS Events |
| Grain | Meeting Recording |
| Salesforce | CRM Events |

Sources: apps/sim/lib/webhooks/providers/webflow.ts:1-50, apps/sim/lib/webhooks/providers/typeform.ts:1-40

Architecture Overview

graph TD
    A[Client Application] --> B[Next.js Web App]
    B --> C[Workflow Engine]
    C --> D[Sandbox Executor]
    C --> E[Webhook System]
    D --> F[AI Providers]
    E --> G[External Services]
    F --> H[Ollama / vLLM]
    F --> I[Cloud LLM APIs]

Core Components

#### Web Application (apps/sim)

The main React-based web application built with Next.js that provides:

  • Visual workflow editor
  • Block-based workflow construction
  • Trigger configuration
  • Execution monitoring
  • Workspace management

The application uses TypeScript with strict type checking enabled via tsc --noEmit.

Sources: apps/sim/package.json:1-30

#### Workflow Engine

The workflow engine handles:

  • Workflow parsing and validation
  • Execution scheduling
  • State management
  • Error handling and retries

#### Sandbox Executor

Sandboxed execution environment for running workflow blocks safely with resource isolation.

Sources: apps/sim/package.json:8-12

Deployment Options

Sim supports three primary self-hosted deployment methods.

Comparison Matrix

| Method | Docker Required | Manual Setup | Use Case |
|---|---|---|---|
| NPM Package | Yes | Minimal | Quick local testing |
| Docker Compose | Yes | Moderate | Production deployments |
| Manual Setup | No | Extensive | Custom infrastructure |

Sources: README.md:25-50

Option 1: NPM Package (Quick Start)

The fastest way to get started locally:

npx simstudio

This command pulls the latest Docker images and starts Sim at http://localhost:3000.

Options:

| Flag | Description | Default |
|---|---|---|
| `-p, --port <port>` | Port to run Sim on | 3000 |
| `--no-pull` | Skip pulling latest Docker images | false |

Sources: README.md:25-32

Option 2: Docker Compose

For production-ready deployments with persistent storage:

git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d

Sources: README.md:34-38

Option 3: Manual Setup

For custom infrastructure configurations. Requires manual installation of all dependencies.

Sources: README.md:40-45

System Requirements

Hardware Requirements

| Component | Minimum | Recommended |
|---|---|---|
| CPU | 2 cores | 4+ cores |
| RAM | 4 GB | 8+ GB |
| Disk | 10 GB | 20+ GB |

Software Requirements

| Software | Version | Notes |
|---|---|---|
| Docker | Latest | Required for NPM and Docker Compose methods |
| Bun | Latest | Required for manual setup |
| Node.js | v20+ | Required for manual setup |
| PostgreSQL | 12+ | Must include pgvector extension |

Sources: README.md:40-45

Database Configuration

PostgreSQL with pgvector is required for vector storage capabilities:

docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 -d \
  pgvector/pgvector:pg16

Sources: README.md:50-55

SDK Data Structures

WorkflowExecutionResult

@dataclass
class WorkflowExecutionResult:
    success: bool
    output: Optional[Any] = None
    error: Optional[str] = None
    logs: Optional[list] = None
    metadata: Optional[Dict[str, Any]] = None
    trace_spans: Optional[list] = None
    total_duration: Optional[float] = None

Sources: packages/python-sdk/README.md:80-90
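A minimal sketch of consuming such a result: the dataclass is reproduced from the SDK docs above, while `summarize` and the duration unit (assumed milliseconds) are illustrative.

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class WorkflowExecutionResult:
    success: bool
    output: Optional[Any] = None
    error: Optional[str] = None
    logs: Optional[list] = None
    metadata: Optional[Dict[str, Any]] = None
    trace_spans: Optional[list] = None
    total_duration: Optional[float] = None

def summarize(result: WorkflowExecutionResult) -> str:
    # Branch on `success` rather than inspecting `output` directly,
    # since both `output` and `error` are optional.
    if result.success:
        return f"ok in {result.total_duration or 0.0:.1f}ms"
    return f"failed: {result.error or 'unknown error'}"

ok = summarize(WorkflowExecutionResult(success=True, output={"n": 3}, total_duration=12.5))
bad = summarize(WorkflowExecutionResult(success=False, error="rate limited"))
```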

WorkflowStatus

@dataclass
class WorkflowStatus:
    is_deployed: bool
    deployed_at: Optional[str] = None
    needs_redeployment: bool = False

Sources: packages/python-sdk/README.md:100-105

RateLimitInfo

@dataclass
class RateLimitInfo:
    limit: int
    remaining: int
    reset: int
    retry_after: Optional[int] = None

Sources: packages/python-sdk/README.md:125-130
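A client can use these fields to decide how long to wait before retrying. The dataclass matches the SDK docs above; the `backoff_seconds` helper and the assumption that `reset` is an epoch timestamp in seconds are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RateLimitInfo:
    limit: int
    remaining: int
    reset: int          # assumed: epoch seconds when the window resets
    retry_after: Optional[int] = None

def backoff_seconds(info: RateLimitInfo, now: int) -> int:
    """How long a client should wait before the next request."""
    if info.remaining > 0:
        return 0
    # Prefer an explicit retry_after hint; otherwise wait for the window reset.
    if info.retry_after is not None:
        return info.retry_after
    return max(0, info.reset - now)

wait_hint = backoff_seconds(RateLimitInfo(limit=60, remaining=0, reset=0, retry_after=5), now=100)
wait_reset = backoff_seconds(RateLimitInfo(limit=60, remaining=0, reset=130), now=100)
wait_ok = backoff_seconds(RateLimitInfo(limit=60, remaining=10, reset=130), now=100)
```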

Development Workflow

Code Quality Tools

The project enforces code quality through Biome:

| Command | Purpose |
|---|---|
| `bun run lint` | Format and lint with auto-fix |
| `bun run lint:check` | Check linting without auto-fix |
| `bun run format` | Format code files |
| `bun run format:check` | Check formatting without changes |
| `bun run type-check` | Run TypeScript type checking |

Sources: apps/sim/package.json:15-22

Testing

Tests are run using Vitest:

| Command | Purpose |
|---|---|
| `bun run test` | Run tests once |
| `bun run test:watch` | Run tests in watch mode |
| `bun run test:coverage` | Generate coverage report |

Sources: apps/sim/package.json:14-17

Local Model Support

Sim supports self-hosted AI models through two providers:

Ollama

Integration with Ollama for running local LLMs including Llama 2, Mistral, and other open-source models.

vLLM

Integration with vLLM for high-performance inference serving.

Sources: README.md:45-48

Documentation Generation

The project includes automated documentation generation:

bun run generate-docs

The script preserves manually written content, delimited by content markers in the codebase, so custom documentation sections survive regeneration.

Sources: scripts/README.md:1-30

Community and Support

| Resource | Link |
|---|---|
| Documentation | https://docs.sim.ai |
| Discord Community | Discord |
| Twitter | @simdotai |
| DeepWiki | DeepWiki |

Sources: README.md:1-15

License

The project is licensed under Apache-2.0.

Sources: packages/python-sdk/README.md:70, packages/ts-sdk/README.md:45


Technology Stack

Related topics: Project Introduction, Deployment Guide


Overview

The Sim platform is built on a modern, polyglot technology stack designed to support both frontend and backend development with a focus on developer productivity, type safety, and scalable deployment options. The system leverages TypeScript as the primary language for the core application, Python for the SDK ecosystem, and Docker for containerization and self-hosted deployments.

Core Runtime Environment

Bun

Bun serves as the primary package manager and runtime for the project. All build scripts, dependency installations, and development workflows are configured to use Bun workspaces for efficient monorepo management.

bun install
bun run build
bun run test

Sources: apps/sim/package.json:11-30

Node.js Requirements

The main application requires Node.js v20+ for runtime compatibility. The TypeScript SDK specifies Node.js 18+ as the minimum requirement.

| Component | Minimum Version | Recommended Version |
|---|---|---|
| Main App (`apps/sim`) | Node.js v20+ | Latest LTS |
| TypeScript SDK | Node.js 18+ | Node.js 20+ |
| Python SDK | Python 3.8+ | Python 3.11+ |

Sources: README.md:35-45, packages/ts-sdk/README.md:42

Frontend Architecture

Next.js Framework

The main Sim application is built on Next.js, providing server-side rendering, API routes, and static generation capabilities.

Build Configuration:

{
  "build": "bun run build:sandbox-bundles && NODE_OPTIONS='--max-old-space-size=8192' next build",
  "start": "next start"
}

Sources: apps/sim/package.json:15-16

Testing Framework

| Tool | Purpose | Command |
|---|---|---|
| Vitest | Unit and integration testing | `bun run test` |
| Vitest (watch mode) | Development testing | `bun run test:watch` |
| Vitest (coverage) | Coverage reports | `bun run test:coverage` |

Sources: apps/sim/package.json:19-21

Code Quality Tools

| Tool | Purpose | Commands |
|---|---|---|
| Biome | Linting and formatting | `lint`, `lint:check`, `format`, `format:check` |
| TypeScript Compiler | Type checking | `type-check` |

Biome is configured for both linting (with unsafe auto-fixes) and code formatting:

bun run lint          # Apply lint fixes
bun run lint:check    # Check only
bun run format        # Apply formatting
bun run format:check  # Check only

Sources: apps/sim/package.json:22-25

Backend and Database

PostgreSQL with pgvector

The platform requires PostgreSQL 12+ with the pgvector extension for vector similarity search capabilities. This enables knowledge base and document embedding features.

Docker Setup:

docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 \
  -d pgvector/pgvector:pg16

Sources: README.md:45-50

Drizzle ORM

Database operations are managed through Drizzle ORM, configured via drizzle.config.ts. This provides type-safe database queries and migrations.

import { defineConfig } from 'drizzle-kit'

Sources: packages/db/drizzle.config.ts

Document Processing

OCR Integration

The document processor integrates with multiple OCR providers for extracting content from PDFs and images:

| Provider | Configuration | Timeout |
|---|---|---|
| Mistral OCR API | API Key + Endpoint | 30 seconds |
| Azure Mistral OCR | API Key + Endpoint + Model | 30 seconds |

Sources: apps/sim/lib/knowledge/documents/document-processor.ts:1-80

The OCR system uses:

  • Native fetch API for HTTP requests
  • AbortController for timeout management
  • Base64 encoding for file uploads
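The base64 upload step can be sketched as follows. The field names (`document`, `timeout_ms`) are illustrative, not the actual provider schema; only the encoding pattern and the 30-second budget come from the text above.

```python
import base64

def build_ocr_payload(file_bytes: bytes, mime_type: str) -> dict:
    """Encode raw file bytes as a base64 data URL for a JSON OCR request.

    Field names are hypothetical, not the real provider schema.
    """
    encoded = base64.b64encode(file_bytes).decode("ascii")
    return {
        "document": f"data:{mime_type};base64,{encoded}",
        "timeout_ms": 30_000,  # mirrors the 30-second timeout above
    }

payload = build_ocr_payload(b"%PDF-1.4 minimal", "application/pdf")
```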

SDK Ecosystem

TypeScript SDK

The TypeScript SDK (packages/ts-sdk) provides programmatic access to Sim features:

| Requirement | Version |
|---|---|
| Node.js | 18+ |
| TypeScript | 5.0+ |

Development Commands:

bun run test    # Run tests
bun run build  # Compile to dist/
bun run dev    # Development mode with auto-rebuild

Sources: packages/ts-sdk/README.md:1-45

Python SDK

The Python SDK (packages/python-sdk) offers Python integration:

| Requirement | Version |
|---|---|
| Python | 3.8+ |
| requests | >= 2.25.0 |

Code Quality Tools:

black simstudio/           # Code formatting
flake8 simstudio/          # Linting
mypy simstudio/            # Type checking
isort simstudio/           # Import sorting

Sources: packages/python-sdk/README.md:1-50

Security Infrastructure

Input Validation

The platform implements comprehensive input validation for security:

  • Enum Validation: Validates values against allowed lists
  • Hostname Validation: Prevents SSRF attacks by checking for private IPs, localhost, and reserved addresses
  • Proxy URL Validation: Secure proxy configuration validation

Sources: apps/sim/lib/core/security/input-validation.ts:1-100
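The hostname check can be sketched with the standard `ipaddress` module. This is a simplified illustration of the SSRF guard described above, not Sim's actual implementation: it only handles IP literals and `localhost`, whereas a full guard would also resolve DNS names and re-check the resulting addresses.

```python
import ipaddress

def is_forbidden_host(host: str) -> bool:
    """Reject hosts that are private, loopback, link-local, or reserved --
    a common server-side request forgery (SSRF) guard."""
    if host.lower() == "localhost":
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal; a real guard would resolve DNS here
    return ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved

blocked = [is_forbidden_host(h) for h in ("127.0.0.1", "10.0.0.8", "169.254.1.1", "localhost")]
allowed_public = is_forbidden_host("93.184.216.34")
```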

Deployment Options

Docker Containerization

Sim supports multiple deployment scenarios:

graph TD
    A[Sim Deployment Options] --> B["Docker (NPM Package)"]
    A --> C[Docker Compose]
    A --> D[Manual Setup]
    
    B --> B1[npx simstudio]
    C --> C1[docker compose up]
    D --> D1[Bun + PostgreSQL]

Sources: README.md:25-50

Local Model Support

The platform supports self-hosted AI models through:

| Runtime | Description |
|---|---|
| Ollama | Local model inference |
| vLLM | High-performance LLM serving |

Realtime Application

The apps/realtime package provides WebSocket-based communication features with its own independent package.json configuration.

Sources: apps/realtime/package.json

Load Testing Infrastructure

The project includes Artillery-based load testing for workflow performance validation:

| Script | Purpose |
|---|---|
| `load:workflow:waves` | Wave-based load testing |
| `load:workflow:isolation` | Workspace isolation testing |

Configuration Options:

| Environment Variable | Default | Description |
|---|---|---|
| `WAVE_ONE_DURATION` | 60 | Wave 1 duration in seconds |
| `WAVE_ONE_RATE` | 10 | Wave 1 request rate |
| `WORKSPACE_A_WEIGHT` | 8 | Workspace A load weight |
| `WORKSPACE_B_WEIGHT` | 1 | Workspace B load weight |

Sources: apps/sim/package.json:8-14

Webhook Integrations

The platform provides webhook providers for third-party integrations:

| Provider | Purpose |
|---|---|
| Gong | Meeting/call automation |
| Vercel | Deployment events |
| Typeform | Form responses |
| Webflow | CMS events |
| WhatsApp | Messaging events |

Each provider implements signature verification for security:

verifyAuth: createHmacVerifier({
  configKey: 'secret',
  headerName: 'Provider-Signature',
  validateFn: validateProviderSignature,
  providerLabel: 'ProviderName',
})

Sources: apps/sim/lib/webhooks/providers/gong.ts, apps/sim/lib/webhooks/providers/vercel.ts, apps/sim/lib/webhooks/providers/typeform.ts, apps/sim/lib/webhooks/providers/webflow.ts, apps/sim/lib/webhooks/providers/whatsapp.ts
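The verification pattern behind `createHmacVerifier` can be sketched in a few lines: the provider signs the raw request body with a shared secret, and the receiver recomputes the signature and compares it in constant time. This is a generic HMAC-SHA256 sketch, not Sim's exact code; individual providers differ in header names and digest encodings.

```python
import hashlib
import hmac

def sign_payload(secret: bytes, payload: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature a provider would send."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, payload: bytes, header_value: str) -> bool:
    """compare_digest gives a constant-time comparison, guarding
    against timing attacks on the signature check."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, header_value)

secret = b"webhook-secret"
body = b'{"eventType": "gong.automation_rule"}'
sig = sign_payload(secret, body)
valid = verify_signature(secret, body, sig)
tampered = verify_signature(secret, b'{"eventType": "other"}', sig)
```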

Architecture Diagram

graph TB
    subgraph "Client Layer"
        WebApp[Web Application<br/>Next.js]
        TS_SDK[TypeScript SDK<br/>Node.js 18+]
    end
    
    subgraph "Runtime"
        Bun[Bun Runtime<br/>Workspaces]
        Node[Node.js v20+]
    end
    
    subgraph "Backend Services"
        API[API Routes]
        Webhooks[Webhook Providers]
        Security[Input Validation]
    end
    
    subgraph "Data Layer"
        Postgres[PostgreSQL + pgvector<br/>Drizzle ORM]
        Knowledge[Document Processor<br/>OCR Integration]
    end
    
    subgraph "Deployment"
        Docker[Docker Container]
        Ollama[Ollama]
        VLLM[vLLM]
    end
    
    WebApp --> API
    TS_SDK --> API
    API --> Postgres
    API --> Knowledge
    WebApp --> Webhooks
    Security --> API
    Docker --> Postgres
    Ollama --> API
    VLLM --> API

Summary Table

| Category | Technology | Version/Notes |
|---|---|---|
| Runtime | Bun | Workspaces for monorepo |
| Runtime | Node.js | v20+ for main app |
| Runtime | Python | 3.8+ for Python SDK |
| Framework | Next.js | Full-stack React framework |
| Database | PostgreSQL | 12+ with pgvector |
| ORM | Drizzle | Type-safe queries |
| Testing | Vitest | Unit and integration tests |
| Linting | Biome | Fast JS/TS linter |
| OCR | Mistral/Azure | Document processing |
| SDKs | TypeScript/Python | Multi-language support |
| Deployment | Docker | Self-hosted option |
| AI Runtime | Ollama/vLLM | Local model support |

Sources: apps/sim/package.json:11-30

Architecture Overview

Related topics: Workflow Executor Engine, Workflow Blocks System


Sim is an open-source platform for building AI agents and orchestrating agentic workflows. It connects more than 1,000 integrations and LLMs, enabling sophisticated automation scenarios. The platform is built with a modular architecture centered around blocks, workflows, triggers, and an execution engine.

High-Level Architecture

The Sim platform follows a layered architecture:

graph TD
    subgraph "Presentation Layer"
        UI[Next.js Application]
    end
    
    subgraph "API Layer"
        API[API Routes]
        Contracts[Contract Types]
    end
    
    subgraph "Workflow Engine"
        WE[Workflow Engine]
        Diff[Diff Engine]
        Registry[Block Registry]
    end
    
    subgraph "Execution Layer"
        Executor[Executor]
        Sandbox[Sandbox Runner]
    end
    
    subgraph "Integration Layer"
        Webhooks[Webhook Providers]
        Triggers[Trigger System]
        Tools[Tool System]
    end
    
    UI --> API
    API --> Contracts
    Contracts --> WE
    WE --> Registry
    WE --> Diff
    Registry --> Executor
    Executor --> Sandbox
    Webhooks --> Triggers
    Triggers --> WE

Core Components

Block Registry System

The Block Registry is the central component that manages all available blocks in the system. Blocks are the fundamental building units of workflows, representing discrete operations like data transformation, API calls, or AI interactions.

Key Files:

Registry Functions:

| Function | Purpose |
|---|---|
| `getBlock(type)` | Retrieve a specific block by type identifier |
| `getAllBlocks()` | Get all registered blocks |
| `getAllBlockTypes()` | Get list of all block type identifiers |
| `getBlockByToolName(name)` | Find block by associated tool name |
| `getBlocksByCategory(category)` | Filter blocks by category |
| `isValidBlockType(type)` | Validate if a block type exists |
| `registry` | The underlying registry data structure |

Sources: apps/sim/blocks/index.ts
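Conceptually, the registry is a lookup table keyed by block type. The sketch below mirrors the functions in the table with a plain dict; the block shapes and example entries are illustrative, not Sim's actual data model.

```python
# A dict-backed registry sketch; block entries are hypothetical.
registry = {
    "agent": {"category": "ai", "tool": "agent_run"},
    "http": {"category": "integrations", "tool": "http_request"},
}

def get_block(block_type):
    return registry.get(block_type)

def get_all_block_types():
    return list(registry)

def get_block_by_tool_name(name):
    # Linear scan; a real registry might keep a reverse index.
    return next((b for b in registry.values() if b["tool"] == name), None)

def get_blocks_by_category(category):
    return [b for b in registry.values() if b["category"] == category]

def is_valid_block_type(block_type):
    return block_type in registry

found = get_block("http")
types = get_all_block_types()
```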

Block Configuration Interface:

In tests, the registry is mocked to return null or empty values so that block resolution can be stubbed out:

vi.mock('@/blocks', () => ({
  getBlock: () => null,
  getAllBlocks: () => ({}),
  getAllBlockTypes: () => [],
  getBlockByToolName: () => null,
  getBlocksByCategory: () => [],
  isValidBlockType: () => false,
  registry: {},
}))

Sources: apps/sim/lib/workflows/diff/diff-engine.test.ts:1-20

Trigger System

The Trigger System manages how workflows are initiated. Triggers can be manual, API-based, chat-based, or event-driven.

Trigger Types:

| Type | Identifier | Description |
|---|---|---|
| Manual | `TRIGGER_TYPES.START` | User-initiated workflow start |
| API | `TRIGGER_TYPES.API` | Programmatic workflow invocation |
| Chat | `TRIGGER_TYPES.CHAT` | Chat-triggered workflows |
| Starter | `TRIGGER_TYPES.STARTER` | Legacy starter block support |

Sources: apps/sim/lib/workflows/triggers/triggers.ts

Trigger Reference Alias Map:

The system maps reference aliases to concrete trigger block types:

export const TRIGGER_REFERENCE_ALIAS_MAP = {
  start: TRIGGER_TYPES.START,
  api: TRIGGER_TYPES.API,
  chat: TRIGGER_TYPES.CHAT,
  manual: TRIGGER_TYPES.START,
} as const

TriggerUtils Class:

The TriggerUtils class provides static methods for trigger identification:

export class TriggerUtils {
  static isTriggerBlock(block: { type: string; triggerMode?: boolean }): boolean {
    const blockConfig = getBlock(block.type)
    return (
      blockConfig?.category === 'triggers' ||
      block.triggerMode === true ||
      block.type === TRIGGER_TYPES.STARTER
    )
  }

  static isTriggerType(block: { type: string }, triggerType: TriggerType): boolean {
    return block.type === triggerType
  }
}

Sources: apps/sim/lib/workflows/triggers/triggers.ts

Workflow Engine

The Workflow Engine orchestrates the execution of blocks within a workflow context.

Workflow Components:

| Component | File | Purpose |
|---|---|---|
| Diff Engine | `lib/workflows/diff/diff-engine.test.ts` | Computes differences between workflow versions |
| Block Outputs | `lib/workflows/blocks/block-outputs` | Manages output data flow between blocks |
| Visibility | `lib/workflows/subblocks/visibility` | Controls block visibility and canonical modes |
| Triggers | `lib/workflows/triggers/triggers.ts` | Workflow initiation logic |

Sources: apps/sim/lib/workflows/diff/diff-engine.test.ts

Workflow Registry Store:

The engine integrates with a workflow registry store for state management:

vi.mock('@/stores/workflows/registry/store', () => ({
  useWorkflowRegistry: {
    getState: () => ({
      activeWorkflowId: null,
    }),
  },
}))

Execution Engine

The Execution Engine is responsible for running workflows and blocks in a sandboxed environment.

Execution Constants:

| Constant | Purpose |
|---|---|
| `BLOCK_DIMENSIONS` | Defines minimum block height and dimensions |
| `HANDLE_POSITIONS` | Manages connection handle placement |
| `isAnnotationOnlyBlock()` | Determines if a block is annotation-only |

In tests, these constants are mocked:

vi.mock('@/executor/constants', () => ({
  isAnnotationOnlyBlock: () => false,
  BLOCK_DIMENSIONS: { MIN_HEIGHT: 100 },
  HANDLE_POSITIONS: {},
}))

Sources: apps/sim/lib/workflows/diff/diff-engine.test.ts:1-20

API Contract System

The API Contract System provides type-safe API route definitions and type inference utilities.

Contract Type Generics:

| Type | Description |
|---|---|
| `ContractParams<C>` | Extracts URL parameters from contract |
| `ContractQuery<C>` | Extracts query parameters from contract |
| `ContractBody<C>` | Extracts request body from contract |
| `ContractHeaders<C>` | Extracts headers from contract |

For example:

export type ContractParams<C extends AnyApiRouteContract> = C extends ApiRouteContract<
  infer TParams,
  ApiSchema | undefined,
  ApiSchema | undefined,
  ApiSchema | undefined,
  ResponseMode,
  ApiSchema | undefined
>
  ? EmptySchemaOutput<TParams>
  : undefined

Sources: apps/sim/lib/api/contracts/types.ts

Webhook Integration

Sim supports multiple webhook providers for event-driven workflow triggering.

Supported Providers:

| Provider | File | Key Features |
|---|---|---|
| Gong | `lib/webhooks/providers/gong.ts` | Automation rules, call data |
| Webflow | `lib/webhooks/providers/webflow.ts` | Collection filtering, CMS events |
| Typeform | `lib/webhooks/providers/typeform.ts` | Form responses, HMAC verification |
| Vercel | `lib/webhooks/providers/vercel.ts` | Deployment events |

Gong Provider Structure:

{
  eventType: 'gong.automation_rule',
  callId,
  metaData,
  parties: (callData?.parties as unknown[]) || [],
  context: (callData?.context as unknown[]) || [],
  trackers: (content?.trackers as unknown[]) || [],
  topics: (content?.topics as unknown[]) || [],
  highlights: (content?.highlights as unknown[]) || [],
}

Sources: apps/sim/lib/webhooks/providers/gong.ts

Webhook Event Filtering:

Providers implement event filtering logic:

shouldSkipEvent({ webhook, body, requestId, providerConfig }: EventFilterContext) {
  const configuredCollectionId = providerConfig.collectionId as string | undefined
  if (configuredCollectionId) {
    const obj = body as Record<string, unknown>
    const payload = obj.payload as Record<string, unknown> | undefined
    const payloadCollectionId = (payload?.collectionId ?? obj.collectionId) as string | undefined

    if (payloadCollectionId && payloadCollectionId !== configuredCollectionId) {
      return true
    }
  }
  return false
}

Sources: apps/sim/lib/webhooks/providers/webflow.ts

Pending Verification System:

Webhook verification is handled through a pending verification mechanism:

| Provider | Verification Method |
|---|---|
| Ashby | Always valid |
| Grain | GET/HEAD or POST without body |
| Generic | GET/HEAD or POST without body |
| Salesforce | GET/HEAD or POST without body |

The probe matchers are defined as:

const pendingWebhookVerificationProbeMatchers: Record<
  string,
  PendingWebhookVerificationProbeMatcher
> = {
  ashby: ({ method, body }) => method === 'POST' && body?.action === 'ping',
  grain: ({ method, body }) =>
    method === 'GET' ||
    method === 'HEAD' ||
    (method === 'POST' && (!body || Object.keys(body).length === 0 || !body.type)),
  generic: ({ method, body }) =>
    method === 'GET' ||
    method === 'HEAD' ||
    (method === 'POST' && (!body || Object.keys(body).length === 0)),
  salesforce: ({ method, body }) =>
    method === 'GET' ||
    method === 'HEAD' ||
    (method === 'POST' && (!body || Object.keys(body).length === 0)),
}

Sources: apps/sim/lib/webhooks/pending-verification.ts

Data Flow

sequenceDiagram
    participant User
    participant API
    participant Registry
    participant Workflow
    participant Executor
    participant Sandbox

    User->>API: Trigger Workflow
    API->>Registry: Validate Block Types
    Registry-->>API: Block Configs
    API->>Workflow: Initialize Workflow
    Workflow->>Registry: Get Block Implementations
    Registry-->>Workflow: Blocks
    Workflow->>Executor: Execute Blocks
    Executor->>Sandbox: Run in Sandbox
    Sandbox-->>Executor: Results
    Executor-->>Workflow: Block Outputs
    Workflow-->>API: Workflow Complete
    API-->>User: Response

Block Execution Flow

graph TD
    Start[Workflow Start] --> Trigger{Trigger Type}
    
    Trigger -->|Manual| Manual[Manual Trigger]
    Trigger -->|API| API[API Trigger]
    Trigger -->|Chat| Chat[Chat Trigger]
    
    Manual --> Validate{Validate Block Types}
    API --> Validate
    Chat --> Validate
    
    Validate -->|Valid| GetBlocks[Get Blocks from Registry]
    Validate -->|Invalid| Error[Error Handling]
    
    GetBlocks --> Execute[Execute Block]
    Execute --> Sandbox{Run in Sandbox?}
    
    Sandbox -->|Yes| Sandboxed[Sandbox Execution]
    Sandbox -->|No| Direct[Direct Execution]
    
    Sandboxed --> Output[Block Output]
    Direct --> Output
    
    Output --> NextBlock{Next Block?}
    NextBlock -->|Yes| Execute
    NextBlock -->|No| Complete[Workflow Complete]

Type Safety

Sim leverages TypeScript's type system extensively for compile-time safety:

  1. API Contracts: Type-safe route definitions with generic type parameters
  2. Block Registry: Type-checked block retrieval and validation
  3. Trigger Classification: Type-safe trigger type checking
  4. Webhook Payloads: Typed webhook event data structures

Documentation Generation

The platform includes an automated documentation generator:

graph LR
    A[Block Files] --> B[Scan Directory]
    B --> C[Extract Metadata]
    C --> D[Generate Markdown]
    D --> E[Update meta.json]
    E --> F[Commit to Repo]

The generator is integrated into CI/CD and preserves manual content during regeneration.

Sources: scripts/README.md

Sources: [apps/sim/blocks/index.ts](https://github.com/simstudioai/sim/blob/main/apps/sim/blocks/index.ts)

Workflow Executor Engine

Related topics: Architecture Overview, Workflow Blocks System


The Workflow Executor Engine is the core runtime system responsible for executing workflows built in the Sim platform. It transforms serialized workflow definitions into executable execution plans, manages block-level execution with proper dependency resolution, and orchestrates complex control flow patterns including parallel execution and looping constructs.

Architecture Overview

The executor engine follows a layered architecture that separates concerns between DAG construction, execution planning, and runtime orchestration.

graph TD
    A[Workflow Definition] --> B[DAG Builder]
    B --> C[Execution Plan]
    C --> D[Execution Engine]
    D --> E[Block Executor]
    E --> F[Parallel Orchestrator]
    E --> G[Loop Orchestrator]
    D --> H[Trigger System]
    H --> I[Manual Triggers]
    H --> J[API Triggers]
    H --> K[Scheduled Triggers]

Core Components

| Component | File | Responsibility |
|---|---|---|
| DAG Builder | `executor/dag/builder.ts` | Converts workflow blocks into a directed acyclic graph |
| Execution Engine | `executor/execution/engine.ts` | Coordinates overall execution flow and state management |
| Block Executor | `executor/execution/executor.ts` | Executes individual blocks and manages block states |
| Parallel Orchestrator | `executor/orchestrators/parallel.ts` | Manages concurrent block execution |
| Loop Orchestrator | `executor/orchestrators/loop.ts` | Handles iterative block execution |

Executor Context

The executor maintains a comprehensive context object that tracks the state of the entire workflow execution.

interface ExecutorContext {
  workflow: SerializedWorkflow
  blocks: SerializedBlock[]
  connections: SerializedConnection[]
  blockStates: Map<string, ExecutorBlockState>
  executedBlocks: Set<string>
  abortSignal?: AbortSignal
  workspaceId: string
  executionId: string
}

Block State Management

Each block maintains its execution state through the ExecutorBlockState interface:

interface ExecutorBlockState {
  output: Record<string, any>
  executed: boolean
  executionTime: number
}

Sources: packages/testing/src/factories/executor-context.factory.ts:1-80

The testing factory provides utilities for creating executor contexts with pre-configured blocks:

export function createExecutorContextWithBlocks(
  blockOutputs: Record<string, Record<string, any>>,
  options?: ExecutorContextFactoryOptions
): ExecutorContext

Sources: packages/testing/src/factories/executor-context.factory.ts:44-70

DAG Builder

The DAG (Directed Acyclic Graph) builder transforms the linear block definitions into a dependency graph that the execution engine can traverse.

Responsibilities

  • Parse workflow block definitions and connection specifications
  • Build adjacency lists representing block dependencies
  • Validate graph structure to ensure no cycles
  • Resolve input/output mappings between connected blocks
  • Generate execution order using topological sorting
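The topological-sorting step can be illustrated with Kahn's algorithm, which also detects cycles as a side effect (any leftover node implies a cycle). A sketch in Python, not the actual TypeScript implementation:

```python
from collections import deque

def topological_sort(blocks, connections):
    """Kahn's algorithm: return an execution order, raising on cycles.

    `connections` is a list of (source, target) block-id pairs.
    """
    indegree = {b: 0 for b in blocks}
    adjacency = {b: [] for b in blocks}
    for source, target in connections:
        adjacency[source].append(target)
        indegree[target] += 1

    # Blocks with no unmet dependencies are ready to run.
    ready = deque(b for b, d in indegree.items() if d == 0)
    order = []
    while ready:
        block = ready.popleft()
        order.append(block)
        for dependent in adjacency[block]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)

    if len(order) != len(blocks):
        raise ValueError("workflow graph contains a cycle")
    return order

order = topological_sort(
    ["start", "fetch", "summarize", "notify"],
    [("start", "fetch"), ("fetch", "summarize"), ("summarize", "notify")],
)
```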

Key Functions

| Function | Purpose |
|---|---|
| `buildDAG(blocks, connections)` | Constructs the dependency graph |
| `topologicalSort()` | Determines safe execution order |
| `getDependencies(blockId)` | Retrieves all blocks that must execute first |
| `getDependents(blockId)` | Finds blocks that depend on this block |

Execution Engine

The execution engine is the central coordinator that manages the lifecycle of workflow execution from start to completion.

Execution Flow

sequenceDiagram
    participant Client
    participant Engine
    participant Executor
    participant Orchestrator
    participant Block

    Client->>Engine: execute(workflow, context)
    Engine->>Executor: prepare(workflow)
    Executor->>Engine: DAG Ready
    Engine->>Engine: determineStartBlocks()
    Engine->>Orchestrator: executeNextBatch()
    Orchestrator->>Block: execute(block)
    Block-->>Orchestrator: result
    Orchestrator->>Engine: blockComplete()
    Engine->>Engine: updateContext()
    Engine->>Orchestrator: executeNextBatch()
    Orchestrator-->>Engine: batchComplete
    Engine-->>Client: executionResult

Trigger Classification

The engine classifies workflow start conditions to determine execution entry points:

class TriggerClassifier {
  static isManualTrigger(block: { type: string; subBlocks?: any }): boolean
  static isApiTrigger(block: { type: string; subBlocks?: any }, isChildWorkflow?: boolean): boolean
}

Sources: apps/sim/lib/workflows/triggers/triggers.ts:1-60

Supported trigger types:

| Trigger Type | Description | Entry Mode |
|---|---|---|
| INPUT | Form or manual input trigger | Manual |
| MANUAL | Explicit manual execution | Manual |
| START | New unified start block | Manual/API |
| API | API endpoint trigger | API |
| STARTER | Legacy starter block | Manual/API based on `startWorkflow` value |

Sources: apps/sim/lib/workflows/triggers/triggers.ts:1-75

Block Executor

The block executor handles the actual execution of individual workflow blocks, managing their lifecycle from initialization through completion.

Execution Pipeline

  1. Block Identification - Resolve block type and configuration
  2. Input Resolution - Collect outputs from dependent blocks
  3. Sandbox Preparation - Set up isolated execution environment
  4. Execution - Run the block's logic
  5. Output Capture - Collect and store block results
  6. State Update - Update executor context with results

Block States

stateDiagram-v2
    [*] --> Pending
    Pending --> Running: executionStart
    Running --> Completed: success
    Running --> Failed: error
    Running --> Cancelled: abortSignal
    Completed --> [*]
    Failed --> [*]
    Cancelled --> [*]
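The diagram above amounts to a small transition table. A minimal sketch (state names are illustrative, lower-cased for code):

```typescript
// Block lifecycle as an explicit transition table, per the state diagram above.
type BlockStatus = 'pending' | 'running' | 'completed' | 'failed' | 'cancelled'

const TRANSITIONS: Record<BlockStatus, BlockStatus[]> = {
  pending: ['running'],                          // executionStart
  running: ['completed', 'failed', 'cancelled'], // success / error / abortSignal
  completed: [],                                 // terminal
  failed: [],                                    // terminal
  cancelled: [],                                 // terminal
}

function transition(from: BlockStatus, to: BlockStatus): BlockStatus {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`illegal transition ${from} -> ${to}`)
  }
  return to
}
```

Encoding the diagram as data makes illegal transitions (e.g. re-running a completed block) fail loudly instead of silently corrupting state.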

Async Tool Execution

For client-executable tools (running in the browser), the executor uses an async confirmation pattern:

async function reportCompletion(
  toolCallId: string,
  status: AsyncConfirmationStatus,
  message?: string,
  data?: AsyncCompletionData
): Promise<void>

Sources: apps/sim/lib/copilot/tools/client/run-tool-execution.ts:1-50

The executor reports completion via the /api/copilot/confirm endpoint, which persists the durable async-tool row and wakes server-side waiters.

Parallel Orchestrator

The parallel orchestrator manages concurrent execution of independent blocks, maximizing throughput while respecting dependency constraints.

Concurrency Model

graph LR
    A[Block A] --> C[Block C]
    B[Block B] --> C
    A --> D[Block D]
    B --> D
    C --> E[Block E]
    D --> E

Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| maxConcurrency | number | 10 | Maximum parallel block executions |
| timeout | number | 300000 | Per-block execution timeout (ms) |
| failFast | boolean | true | Stop on first failure |
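The maxConcurrency option can be honored with a simple worker-pool pattern. This is a generic sketch of the technique, not the orchestrator's actual code:

```typescript
// Run independent tasks with at most `maxConcurrency` in flight at once,
// preserving result order by index.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  maxConcurrency = 10
): Promise<T[]> {
  const results: T[] = new Array(tasks.length)
  let next = 0
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++ // synchronous claim before awaiting, so no two workers share an index
      results[i] = await tasks[i]()
    }
  }
  const workers = Array.from(
    { length: Math.min(maxConcurrency, tasks.length) },
    () => worker()
  )
  await Promise.all(workers)
  return results
}
```

A failFast variant would reject as soon as any worker throws; the version above surfaces the first error via Promise.all.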

Loop Orchestrator

The loop orchestrator handles iterative execution patterns, supporting standard loops and parallel-for constructs.

Loop Types

| Loop Type | Description |
|---|---|
| for | Standard iteration over items |
| while | Conditional iteration |
| parallel-for | Concurrent iteration with result aggregation |

Loop Configuration

interface LoopConfig {
  loopType: 'for' | 'while' | 'parallel-for'
  iterations?: number
  items?: any[]
  condition?: string
  maxConcurrency?: number
}

Sources: apps/realtime/src/database/operations.ts:1-50

Loop Block Structure

interface LoopBlock {
  id: string
  type: 'loop'
  config: LoopConfig
  nodes: SerializedBlock[]  // Blocks inside the loop
}

Default loop configuration:

const DEFAULT_LOOP_ITERATIONS = 10

Sources: apps/realtime/src/database/operations.ts:1-50
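A loop orchestrator dispatching on LoopConfig might look like the following sketch. The iteration body is stubbed as a callback; the real executor runs the loop's nested blocks instead, and the real config stores the while-condition as a string expression rather than a function.

```typescript
// Hedged sketch of loop dispatch over the three loop types above.
interface LoopConfig {
  loopType: 'for' | 'while' | 'parallel-for'
  iterations?: number
  items?: any[]
  condition?: (state: any[]) => boolean // assumption: real config uses a string expression
}

const DEFAULT_LOOP_ITERATIONS = 10

async function runLoop(
  config: LoopConfig,
  body: (item: any, index: number) => Promise<any>
): Promise<any[]> {
  switch (config.loopType) {
    case 'for': {
      const items = config.items ?? Array.from(
        { length: config.iterations ?? DEFAULT_LOOP_ITERATIONS },
        (_, i) => i
      )
      const results: any[] = []
      for (let i = 0; i < items.length; i++) results.push(await body(items[i], i))
      return results
    }
    case 'while': {
      const results: any[] = []
      let i = 0
      // Iterate while the condition holds, bounded to avoid runaway loops.
      while (config.condition?.(results) && i < (config.iterations ?? DEFAULT_LOOP_ITERATIONS)) {
        results.push(await body(undefined, i++))
      }
      return results
    }
    case 'parallel-for':
      // Concurrent iteration with result aggregation, per the loop-type table.
      return Promise.all((config.items ?? []).map((item, i) => body(item, i)))
  }
}
```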

Execution Context Factory

The testing framework provides factory functions for creating executor contexts with predefined states:

Core Factory Functions

| Function | Purpose |
|---|---|
| createExecutorContext() | Creates a base executor context |
| createExecutorContextWithBlocks() | Creates context with pre-executed blocks |
| addBlockState() | Adds block state to existing context (chainable) |
| createMinimalWorkflow() | Creates a minimal workflow for testing |

Usage Example

const ctx = createExecutorContextWithBlocks({
  'source-block': { value: 10, text: 'hello' },
  'other-block': { result: true }
})

Sources: packages/testing/src/factories/executor-context.factory.ts:44-70

Error Handling & Resilience

Cancellation Guards

The executor implements SQL-level guards to prevent race conditions during state updates:

const cancellationGuard = bypassStaleWorker ? undefined : { groupId, executionId }

Sources: apps/sim/lib/table/cell-write.ts:1-40

Abort Signal Support

All executor operations respect AbortSignal for graceful cancellation:

interface ExecutorContext {
  abortSignal?: AbortSignal
}

Skip Conditions

The executor skips writes under specific conditions to maintain consistency:

| Condition | Action |
|---|---|
| Same execution already running | Skip queued stamp |
| Cancelled state with newer execution | Skip group write |
| SQL guard conflict | Skip with logging |
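As a hedged illustration, the first two rows can be expressed as an in-process decision helper; the third row is enforced at the database by the SQL-level guard. All field names here are assumptions, not the executor's actual shapes.

```typescript
// Illustrative skip-condition check mirroring the table above.
interface GroupState {
  runningExecutionId?: string      // execution currently holding the group, if any
  cancelledByExecutionId?: string  // execution that cancelled the group, if any
}

function skipReason(executionId: string, group: GroupState): string | null {
  if (group.runningExecutionId === executionId) {
    return 'same execution already running: skip queued stamp'
  }
  if (group.cancelledByExecutionId && group.cancelledByExecutionId !== executionId) {
    return 'cancelled by a newer execution: skip group write'
  }
  // Proceed; the SQL guard may still reject the write (logged, then skipped).
  return null
}
```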

Configuration Constants

| Constant | Value | Description |
|---|---|---|
| BLOCK_DIMENSIONS.MIN_HEIGHT | 100 | Minimum block visual height |
| DEFAULT_LOOP_ITERATIONS | 10 | Default loop iteration count |
| DEFAULT_TIMEOUT | 300000 | Default block execution timeout |

Sources: apps/sim/executor/constants

Extension Points

Custom Orchestrators

The orchestrator system is designed for extensibility. New orchestrators can be registered by implementing the Orchestrator interface:

interface Orchestrator {
  execute(blocks: SerializedBlock[], context: ExecutorContext): Promise<void>
  cancel(): void
}

Block Output Handlers

Block output handling can be customized through the getEffectiveBlockOutputs extension point.

Sources: packages/testing/src/factories/executor-context.factory.ts:1-80

Workflow Blocks System

Related topics: Integrations and Connectors, Workflow Executor Engine


Overview

The Workflow Blocks System is the foundational architecture for building and executing automation workflows in the Sim platform. Blocks are the atomic units of execution that represent discrete operations, triggers, or control flow structures within a workflow. Each block encapsulates its own configuration, state, inputs, and outputs, allowing complex business logic to be constructed through visual composition or programmatically.

The system provides a declarative model where workflows are composed of interconnected blocks, with edges defining the data flow and execution order between them. This architecture enables both visual workflow design in the Sim editor and programmatic workflow manipulation through APIs.

Sources: apps/sim/lib/copilot/tools/server/workflow/edit-workflow/builders.ts:1-50

Block Architecture

Core Components

A block in the system consists of several key components:

| Component | Description |
|---|---|
| id | Unique identifier for the block within a workflow |
| type | The block type identifier (e.g., 'agent', 'trigger', 'loop') |
| name | Display name shown in the UI |
| position | Coordinates for visual placement (x, y) |
| enabled | Boolean flag controlling whether the block executes |
| subBlocks | Nested configuration objects with mode-based visibility |
| outputs | Execution results produced by the block |
| data | Arbitrary data associated with the block |
| metadata | Additional metadata including block type references |

Sources: packages/workflow-persistence/src/load.ts:20-45

Block State Model

The block state represents the complete runtime and configuration state of a block:

interface BlockState {
  id: string
  type: string
  name: string
  position: { x: number; y: number }
  enabled: boolean
  horizontalHandles: boolean
  advancedMode: boolean
  triggerMode: boolean
  height: number
  subBlocks: Record<string, SubBlockState>
  outputs: Record<string, any>
  data: Record<string, any>
  locked: boolean
}

Sources: packages/workflow-persistence/src/load.ts:18-35

Block Types

Trigger Blocks

Trigger blocks initiate workflow execution and define how workflows can be started. The system supports multiple trigger types:

| Trigger Type | Constant | Description |
|---|---|---|
| Start | TRIGGER_TYPES.START | Primary entry point for workflows |
| API | TRIGGER_TYPES.API | HTTP API triggered execution |
| Chat | TRIGGER_TYPES.CHAT | Conversational trigger |
| Manual | TRIGGER_TYPES.MANUAL | Manual invocation |
| Input | TRIGGER_TYPES.INPUT | Input parameter trigger |
| Webhook | TRIGGER_TYPES.WEBHOOK | Webhook-based triggers |
| Schedule | TRIGGER_TYPES.SCHEDULE | Time-based triggers |
| Generic Webhook | TRIGGER_TYPES.GENERIC_WEBHOOK | Universal webhook receiver |

Sources: apps/sim/lib/workflows/triggers/triggers.ts:1-30

Control Flow Blocks

Control flow blocks manage execution logic and flow:

| Block Type | Purpose |
|---|---|
| loop | Iteration control (for loops) |
| parallel | Parallel execution branches |

The loop block stores its configuration in a separate workflowSubflows table with structure:

{
  id: string
  workflowId: string
  type: 'loop'
  config: {
    loopType: 'for'
    iterations: number
    nodes: string[]
  }
}

Sources: apps/realtime/src/database/operations.ts:1-40

SubBlock Modes

SubBlocks support different visibility modes that control their appearance in the UI:

| Mode | Behavior |
|---|---|
| basic | Shown in basic mode, hidden in advanced mode |
| advanced | Shown in advanced mode, hidden in basic mode |
| trigger | Visible only when trigger mode is enabled |
| trigger-advanced | Visible in trigger mode with advanced options |

The visibility logic is implemented by predicate helpers such as isTriggerModeSubBlock:

export function isTriggerModeSubBlock(subBlock: Pick<SubBlockConfig, 'mode'>): boolean {
  return subBlock.mode === 'trigger' || subBlock.mode === 'trigger-advanced'
}

export function isTriggerConfigSubBlock(subBlock: Pick<SubBlockConfig, 'type'>): boolean {
  return String(subBlock.type) === 'trigger-config'
}

Sources: apps/sim/lib/workflows/subblocks/visibility.ts:1-35

Trigger System

Trigger Classification

The trigger system classifies blocks based on their execution context:

graph TD
    A[Block Type] --> B{is Trigger Block?}
    B -->|Yes| C[Explicit Trigger]
    B -->|No| D{has triggerMode?}
    D -->|Yes| E[Tool with Trigger]
    D -->|No| F[Regular Block]

The TriggerUtils class provides static methods for trigger identification:

export class TriggerUtils {
  static isTriggerBlock(block: { type: string; triggerMode?: boolean }): boolean {
    const blockConfig = getBlock(block.type)
    return (
      blockConfig?.category === 'triggers' ||
      block.triggerMode === true ||
      block.type === TRIGGER_TYPES.STARTER
    )
  }

  static isTriggerType(block: { type: string }, triggerType: TriggerType): boolean {
    return block.type === triggerType
  }
}

Sources: apps/sim/lib/workflows/triggers/triggers.ts:80-100

Start Block Resolution

The system uses a priority-based approach to resolve start candidates for workflow execution:

graph TD
    A[Blocks Collection] --> B[Filter Disabled Blocks]
    B --> C[Classify Start Block Path]
    C --> D{Is Child Workflow?}
    D -->|Yes| E[Apply CHILD_PRIORITIES]
    D -->|No| F[Apply EXECUTION_PRIORITIES]
    E --> G[Return Sorted Candidates]
    F --> G

The resolveStartCandidates function implements this logic:

export function resolveStartCandidates<T extends MinimalBlock>(
  blocks: Record<string, T> | T[],
  options: ResolveStartOptions
): StartBlockCandidate<T>[] {
  const entries = toEntries(blocks)
  if (entries.length === 0) return []

  const priorities = options.isChildWorkflow
    ? CHILD_PRIORITIES
    : EXECUTION_PRIORITIES[options.execution]
  
  // ... filtering and candidate creation logic
}

Sources: apps/sim/lib/workflows/triggers/triggers.ts:150-180

Trigger Reference Aliases

The system maps reference aliases to concrete trigger types:

export const TRIGGER_REFERENCE_ALIAS_MAP = {
  start: TRIGGER_TYPES.START,
  api: TRIGGER_TYPES.API,
  chat: TRIGGER_TYPES.CHAT,
  manual: TRIGGER_TYPES.START,
} as const

These aliases are used in inline references like <api.*>, <chat.*>, enabling flexible trigger referencing within workflows.

Sources: apps/sim/lib/workflows/triggers/triggers.ts:60-68
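Resolving an inline reference through the alias map might look like this sketch. The parsing regex and TRIGGER_TYPES string values are illustrative assumptions; only the alias map itself comes from the source.

```typescript
// Sketch: resolve an inline reference like `<api.field>` via the alias map.
const TRIGGER_TYPES = {
  START: 'start_trigger',
  API: 'api_trigger',
  CHAT: 'chat_trigger',
} as const

const TRIGGER_REFERENCE_ALIAS_MAP: Record<string, string> = {
  start: TRIGGER_TYPES.START,
  api: TRIGGER_TYPES.API,
  chat: TRIGGER_TYPES.CHAT,
  manual: TRIGGER_TYPES.START, // manual references resolve to the unified start block
}

function resolveReference(ref: string): { blockType: string; path: string } | null {
  const match = /^<(\w+)\.(.+)>$/.exec(ref)
  if (!match) return null
  const blockType = TRIGGER_REFERENCE_ALIAS_MAP[match[1]]
  return blockType ? { blockType, path: match[2] } : null
}
```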

Block Configuration

SubBlock Structure

SubBlocks allow nested configuration within blocks:

export function applyTriggerConfigToBlockSubblocks(
  block: any, 
  triggerConfig: Record<string, any>
) {
  if (!block?.subBlocks || !triggerConfig) return

  Object.entries(triggerConfig).forEach(([configKey, configValue]) => {
    const existingSubblock = block.subBlocks[configKey]
    if (existingSubblock) {
      // Compare values to avoid unnecessary updates
      const valuesEqual = /* comparison logic */
      
      if (!valuesEqual) {
        block.subBlocks[configKey] = {
          ...existingSubblock,
          value: configValue,
        }
      }
    } else {
      // Create new subblock
      block.subBlocks[configKey] = { /* new subblock */ }
    }
  })
}

Sources: apps/sim/lib/copilot/tools/server/workflow/edit-workflow/builders.ts:60-95

Trigger Mode Visibility

The system determines which blocks appear in different toolbar sections:

graph TD
    A[All Blocks] --> B{Is Hidden?}
    B -->|Yes| C[Exclude]
    B -->|No| D{Category === 'triggers'?}
    D -->|Yes| E[Include in Triggers Tab]
    D -->|No| F{Has Trigger Capability?}
    F -->|Yes| G[Include in Both Tabs]
    F -->|No| H[Include in Blocks Tab]

Functions getTriggersForSidebar() and getBlocksForSidebar() implement this filtering logic, excluding blocks with hideFromToolbar: true and treating blocks with trigger capability differently from those with explicit trigger category.

Sources: apps/sim/lib/workflows/triggers/trigger-utils.ts:1-40

Block Reference Tags

Block tags control how blocks are referenced in the system:

export function getBlockTags(block: BlockConfig): string[] {
  const normalizedBlockName = /* normalization logic */
  let blockTags = allTags

  const shouldShowRootTag =
    block.type === TRIGGER_TYPES.GENERIC_WEBHOOK || 
    block.type === 'start_trigger'
  
  if (!shouldShowRootTag) {
    blockTags = blockTags.filter((tag) => tag !== normalizedBlockName)
  }

  return blockTags
}

Sources: apps/sim/lib/workflows/blocks/block-reference-tags.ts:1-30

File Input Processing

Blocks can handle file inputs through a specialized processing system:

export async function processInputFileFields(
  input: unknown,
  blocks: SerializedBlock[],
  executionContext: { workspaceId: string; workflowId: string; executionId: string },
  requestId: string,
  userId?: string
): Promise<unknown> {
  // Find start block to extract input format
  const startBlock = blocks.find((block) => {
    const blockType = block.metadata?.id
    return (
      blockType === TRIGGER_TYPES.START ||
      blockType === TRIGGER_TYPES.API ||
      blockType === TRIGGER_TYPES.INPUT ||
      blockType === TRIGGER_TYPES.GENERIC_WEBHOOK ||
      blockType === TRIGGER_TYPES.STARTER
    )
  })

  // inputFormat is derived from the start block's configuration (elided here)
  const fileFields = inputFormat.filter((field) => field.type === 'file[]')
  // ... file processing logic
}

Sources: apps/sim/lib/execution/files.ts:1-60

Persistence Layer

Database Schema

Block data is persisted using a hybrid approach:

| Table | Purpose |
|---|---|
| workflowBlocks | Core block configuration and state |
| workflowSubflows | Loop and parallel block specific configs |
| workflowEdges | Connections between blocks |

The upsert operation for blocks:

await tx
  .insert(workflowBlocks)
  .values(blockValues)
  .onConflictDoUpdate({
    target: workflowBlocks.id,
    set: {
      type: sql`excluded.type`,
      name: sql`excluded.name`,
      positionX: sql`excluded.position_x`,
      positionY: sql`excluded.position_y`,
      enabled: sql`excluded.enabled`,
      subBlocks: sql`excluded.sub_blocks`,
      outputs: sql`excluded.outputs`,
      data: sql`excluded.data`,
      updatedAt: sql`now()`,
    },
  })

Sources: apps/realtime/src/database/operations.ts:1-50

Loading and Assembly

When loading a workflow, blocks are assembled from database records:

blocks.forEach((block) => {
  const blockData = (block.data ?? {}) as BlockState['data']

  const assembled: BlockState = {
    id: block.id,
    type: block.type,
    name: block.name,
    position: {
      x: Number(block.positionX),
      y: Number(block.positionY),
    },
    enabled: block.enabled,
    subBlocks: (block.subBlocks as BlockState['subBlocks']) || {},
    outputs: (block.outputs as BlockState['outputs']) || {},
    data: blockData,
    locked: block.locked,
  }

  blocksMap[block.id] = assembled
})

Subflows (loops and parallels) are loaded separately:

subflows.forEach((subflow) => {
  const config = (subflow.config ?? {}) as Partial<Loop & Parallel>

  if (subflow.type === SUBFLOW_TYPES.LOOP) {
    loops[subflow.id] = config as Loop
  } else if (subflow.type === SUBFLOW_TYPES.PARALLEL) {
    parallels[subflow.id] = config as Parallel
  }
})

Sources: packages/workflow-persistence/src/load.ts:40-80

Testing Infrastructure

Block Assertions

The testing package provides assertion utilities for workflow validation:

export function expectBlockEnabled(
  blocks: Record<string, any>, 
  blockId: string
): void {
  const block = blocks[blockId]
  expect(block, `Block "${blockId}" should exist`).toBeDefined()
  expect(block.enabled, `Block "${blockId}" should be enabled`).toBe(true)
}

export function expectBlockPosition(
  blocks: Record<string, any>,
  blockId: string,
  expectedPosition: { x: number; y: number }
): void {
  const block = blocks[blockId]
  expect(block, `Block "${blockId}" should exist`).toBeDefined()
  expect(block.position.x, `Block "${blockId}" x position`)
    .toBeCloseTo(expectedPosition.x, 0)
  expect(block.position.y, `Block "${blockId}" y position`)
    .toBeCloseTo(expectedPosition.y, 0)
}

Sources: packages/testing/src/assertions/workflow.assertions.ts:1-60

Workflow Factories

Test utilities create standard workflow structures:

export function createLinearWorkflow(
  blockCount: number, 
  spacing = 200
): any {
  const blocks: Record<string, any> = {}
  const blockIds: string[] = []

  for (let i = 0; i < blockCount; i++) {
    const id = `block-${i}`
    blockIds.push(id)

    if (i === 0) {
      blocks[id] = createStarterBlock({ id, position: { x: i * spacing, y: 0 } })
    } else {
      blocks[id] = createFunctionBlock({ id, name: `Step ${i}`, position: { x: i * spacing, y: 0 } })
    }
  }

  return createWorkflowState({ blocks, edges: createLinearEdges(blockIds) })
}

Sources: packages/testing/src/factories/workflow.factory.ts:1-40

Summary

The Workflow Blocks System provides a comprehensive foundation for building automation workflows through:

  1. Declarative Block Model: Each block encapsulates its own configuration, state, and outputs
  2. Flexible Trigger System: Multiple trigger types support various execution entry points
  3. Mode-Based Visibility: SubBlocks can be shown/hidden based on basic, advanced, or trigger modes
  4. Persistent State: Blocks are persisted to the database with proper upsert semantics
  5. Type-Safe Testing: Comprehensive assertion utilities enable robust workflow testing

The architecture separates concerns between block definition, execution, persistence, and testing, enabling a clean and maintainable codebase for workflow automation.

Sources: apps/sim/lib/copilot/tools/server/workflow/edit-workflow/builders.ts:1-50

Integrations and Connectors

Related topics: Workflow Blocks System, Background Jobs and Background Processing


Overview

Sim provides a comprehensive integrations and connectors system that enables AI agents to interact with external services, APIs, and platforms. This system forms the backbone of Sim's workflow automation capabilities, allowing users to connect 1,000+ integrations and LLMs to orchestrate agentic workflows.

The connector architecture is designed around a plugin-based system where each integration is implemented as a self-contained module with standardized interfaces for authentication, API communication, and data transformation.

Architecture

The Sim integration system consists of several interconnected layers:

graph TD
    A[Workflows / Agents] --> B[Trigger System]
    B --> C[Connectors Registry]
    C --> D[Individual Connectors]
    D --> E[Slack Connector]
    D --> F[GitHub Connector]
    D --> G[Custom Connectors]
    A --> H[Webhook Providers]
    H --> I[Gong Webhook]
    H --> J[Vercel Webhook]
    C --> K[API Contracts]
    K --> L[Type Definitions]
    D --> M[External APIs]

Core Components

| Component | Purpose | Location |
|---|---|---|
| Connectors Registry | Central hub for managing all connector instances | apps/sim/connectors/registry.ts |
| Type Definitions | Shared interfaces and types for connectors | apps/sim/connectors/types.ts |
| API Contracts | Zod schemas defining API request/response shapes | apps/sim/lib/api/contracts/types.ts |
| Trigger System | Event-driven connector activation | apps/sim/lib/workflows/triggers/triggers.ts |
| Webhook Providers | Inbound integration handlers | apps/sim/lib/webhooks/providers/ |

Connector Registry

The connector registry (apps/sim/connectors/registry.ts) serves as the central management system for all connectors in the platform. It provides:

  • Registration and lookup of connector instances
  • Configuration management for each connector
  • Lifecycle management (initialize, authenticate, execute, cleanup)
  • Unified interface for accessing connector functionality

graph LR
    A[Request] --> B[Registry Lookup]
    B --> C{Connector Found?}
    C -->|Yes| D[Execute Connector]
    C -->|No| E[Return Error]
    D --> F[Transform Response]
    F --> G[Return to Caller]

Trigger System

Triggers work in conjunction with connectors to activate workflows based on external events. The trigger system classifies different activation modes:

graph TD
    A[Block] --> B{Start Workflow Mode}
    B -->|chat| C[Chat Trigger]
    B -->|api| D[API Trigger]
    B -->|run| E[API Trigger]
    B -->|manual| F[Manual Trigger]
    B -->|undefined| F

Trigger Types

| Trigger Type | Classification | Use Case |
|---|---|---|
| start | Manual/API | Initial workflow activation |
| api | API-based | Programmatic workflow execution |
| chat | Conversational | Chat-initiated workflows |
| manual | User-initiated | Manual workflow triggers |

The trigger reference alias map provides convenient access to trigger types:

export const TRIGGER_REFERENCE_ALIAS_MAP = {
  start: TRIGGER_TYPES.START,
  api: TRIGGER_TYPES.API,
  chat: TRIGGER_TYPES.CHAT,
  manual: TRIGGER_TYPES.START,
} as const

Sources: apps/sim/lib/workflows/triggers/triggers.ts:32-37

Webhook Providers

Sim integrates with external services through webhook providers that normalize incoming events into a standardized format.

Gong Webhook Integration

The Gong webhook provider handles call recording and analytics data:

interface GongWebhookPayload {
  callId: string
  metaData: Record<string, unknown>
  parties: unknown[]
  context: unknown[]
  trackers: unknown[]
  topics: unknown[]
  highlights: unknown[]
  eventType: 'gong.automation_rule'
}

Sources: apps/sim/lib/webhooks/providers/gong.ts:1-15

Vercel Webhook Integration

The Vercel webhook provider processes deployment events with comprehensive metadata extraction:

interface VercelDeploymentData {
  id: string
  url: string
  name: string
  meta: Record<string, unknown>
  project?: {
    id: string
    name: string
  }
  team?: {
    id: string
  }
  user?: {
    id: string
  }
  target?: string
  plan?: string
}

Sources: apps/sim/lib/webhooks/providers/vercel.ts:25-40

Slack Integration

Slack is a first-class citizen in Sim's connector ecosystem, with comprehensive API contract definitions:

Slack API Contracts

| Contract | Purpose | Response Type |
|---|---|---|
| slackReadMessagesContract | Fetch messages from channels | SlackReadMessagesResponse |
| slackAddReactionContract | Add reactions to messages | SlackReactionResponse |
| slackDeleteMessageContract | Delete messages | SlackDeleteMessageResponse |
| slackUpdateMessageContract | Edit existing messages | SlackUpdateMessageResponse |
| slackSendEphemeralContract | Send ephemeral messages | SlackSendEphemeralResponse |
| slackDownloadContract | Download files/content | SlackDownloadResponse |

export type SlackReadMessagesResponse = ContractJsonResponse<typeof slackReadMessagesContract>
export type SlackReactionResponse = ContractJsonResponse<typeof slackAddReactionContract>
export type SlackDeleteMessageResponse = ContractJsonResponse<typeof slackDeleteMessageContract>
export type SlackUpdateMessageResponse = ContractJsonResponse<typeof slackUpdateMessageContract>
export type SlackSendEphemeralResponse = ContractJsonResponse<typeof slackSendEphemeralContract>
export type SlackDownloadResponse = ContractJsonResponse<typeof slackDownloadContract>

Sources: apps/sim/lib/api/contracts/tools/communication/slack.ts:1-10

API Contract System

The API contract system uses Zod schemas for runtime validation of all API interactions:

Contract Type Utilities

export type ContractParams<C extends AnyApiRouteContract> = C extends ApiRouteContract<
  infer TParams,
  ApiSchema | undefined,
  ApiSchema | undefined,
  ApiSchema | undefined,
  ResponseMode,
  ApiSchema | undefined
>
  ? EmptySchemaOutput<TParams>
  : undefined

export type ContractBody<C extends AnyApiRouteContract> = C extends ApiRouteContract<
  ApiSchema | undefined,
  ApiSchema | undefined,
  infer TBody,
  ApiSchema | undefined,
  ResponseMode,
  ApiSchema | undefined
>
  ? EmptySchemaOutput<TBody>
  : undefined

Sources: apps/sim/lib/api/contracts/types.ts:1-30

Generic Contract Types

| Type | Description |
|---|---|
| ContractParams<C> | Extracted URL parameter types from contract |
| ContractQuery<C> | Extracted query string types from contract |
| ContractBody<C> | Extracted request body types from contract |
| ContractHeaders<C> | Extracted header types from contract |
| ContractParamsInput<C> | Input types for contract parameters |

Virtual Filesystem Integration

Connectors are exposed to AI agents through the virtual filesystem (VFS), which materializes workspace data into an in-memory file system structure:

graph TD
    A[Workspace] --> B[Virtual Filesystem]
    B --> C[workflows/{name}/meta.json]
    B --> D[workflows/{name}/state.json]
    B --> E[workflows/{name}/executions.json]
    B --> F[knowledgebases/{name}/meta.json]
    B --> G[connectors.json]
    B --> H[triggers/{id}.json]

The VFS exposes connector configurations to agents:

files.set(
  'knowledgebases/{name}/connectors.json',
  serializeConnectorConfigs(connectorConfigs)
)

Sources: apps/sim/lib/copilot/vfs/workspace-vfs.ts:1-50

Connector Configuration

Connectors follow a standardized configuration schema defined in apps/sim/connectors/types.ts. Each connector instance includes:

  • Provider: The external service name (e.g., slack, github)
  • Credentials: Authentication tokens and secrets
  • Settings: Connector-specific configuration options
  • Metadata: Display name, description, category
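As a hedged sketch, a connector instance matching this bullet list might be typed as follows. The field names are assumptions, not the actual contents of apps/sim/connectors/types.ts. One practical consequence of the credentials field is that any serialization path (such as the VFS) must redact it:

```typescript
// Assumed connector configuration shape, per the bullet list above.
interface ConnectorConfig {
  provider: string                     // external service name, e.g. 'slack', 'github'
  credentials: Record<string, string>  // authentication tokens and secrets
  settings: Record<string, unknown>    // connector-specific options
  metadata: { displayName: string; description: string; category: string }
}

// Credentials should never be serialized verbatim (e.g. into connectors.json).
function redactCredentials(config: ConnectorConfig): ConnectorConfig {
  return {
    ...config,
    credentials: Object.fromEntries(
      Object.keys(config.credentials).map((k) => [k, '***'])
    ),
  }
}
```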

Adding New Connectors

To add a new connector to the Sim platform:

  1. Create a new directory under apps/sim/connectors/{provider}/
  2. Implement the connector class with required interface methods
  3. Register the connector in the registry
  4. Define API contracts in apps/sim/lib/api/contracts/
  5. Add webhook handler if inbound events are needed
  6. Update the VFS serialization logic if agent access is required
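The registration and lookup in step 3 follow a standard registry pattern. A minimal sketch (the real registry in apps/sim/connectors/registry.ts likely differs in interface and lifecycle details):

```typescript
// Generic register/lookup registry keyed by provider name.
interface Connector {
  provider: string
  execute(action: string, params: Record<string, unknown>): Promise<unknown>
}

class ConnectorRegistry {
  private connectors = new Map<string, Connector>()

  register(connector: Connector): void {
    this.connectors.set(connector.provider, connector)
  }

  get(provider: string): Connector {
    const connector = this.connectors.get(provider)
    if (!connector) throw new Error(`Unknown connector: ${provider}`)
    return connector
  }
}
```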

Best Practices

  • Always use API contracts for type-safe API calls
  • Implement proper error handling and retry logic
  • Store credentials securely (never commit to repository)
  • Follow the trigger classification pattern for event-driven workflows
  • Use the virtual filesystem for any data that should be accessible to agents

Sources: apps/sim/lib/workflows/triggers/triggers.ts:32-37

Agent System

Related topics: Copilot System, Workflow Blocks System


The Agent System is a core component of the Sim platform that enables the execution of AI agents within workflow automation. It provides the infrastructure for creating, managing, and executing agents that can interact with tools, maintain conversation context, and process complex multi-step tasks.

Overview

The Agent System serves as the execution layer for AI-driven automation within Sim workflows. It handles the lifecycle of agent execution, including initialization, tool invocation, state management, memory handling, and result processing.

Core Responsibilities

  • Agent Execution Pipeline: Orchestrates the execution of agent logic within workflow contexts
  • Memory Management: Maintains conversation history and context across agent interactions
  • Skills Resolution: Resolves and binds available skills and tools to agent instances
  • State Coordination: Manages agent state transitions and execution checkpoints
  • Tool Integration: Handles the invocation and management of external tools and APIs

Architecture

graph TD
    A[Workflow Engine] --> B[Agent Handler]
    B --> C[Memory Manager]
    B --> D[Skills Resolver]
    B --> E[Tool Executor]
    C --> F[Context Store]
    D --> G[Block Registry]
    E --> H[Execution Context]
    H --> I[Streaming Response]
    F --> H

Agent Handler

The agent-handler.ts serves as the primary orchestrator for agent execution. It manages the interaction between the workflow engine and the agent's internal components.

Key Functions

| Function | Purpose | Source |
|---|---|---|
| handleAgentExecution | Main entry point for agent processing | agent-handler.ts |
| executeToolAndReport | Invokes tools and streams results back | tool.ts:52 |
| registerPendingToolPromise | Tracks async tool executions | tool.ts:45 |
| abortPendingToolIfStreamDead | Handles stalled tool executions | tool.ts:66 |

Execution Flow

sequenceDiagram
    participant Workflow
    participant AgentHandler
    participant ToolExecutor
    participant Memory
    participant SkillsResolver

    Workflow->>AgentHandler: Execute Agent
    AgentHandler->>SkillsResolver: Resolve Available Skills
    SkillsResolver-->>AgentHandler: Skill Bindings
    AgentHandler->>Memory: Initialize Context
    Memory-->>AgentHandler: Context State
    AgentHandler->>ToolExecutor: Invoke Tool
    ToolExecutor-->>AgentHandler: Tool Result
    AgentHandler->>Memory: Update State
    AgentHandler-->>Workflow: Execution Result

Tool Execution Modes

The agent handler supports multiple execution modes for tool invocation:

| Mode | Description | Configuration |
|---|---|---|
| autoExecuteTools | Automatically execute tools without user confirmation | options.autoExecuteTools !== false |
| interactive | Require user confirmation before tool execution | options.interactive === true |
| parallel | Execute multiple tools concurrently | Parallel promise registration |
| clientExecutable | Delegate execution to client workflow | clientExecutable === true |

Memory System

The Memory System (memory.ts) maintains conversation context and execution history for agents, enabling stateful interactions across multiple workflow steps.

Memory Operations

interface AgentMemory {
  conversationHistory: ConversationTurn[]
  executionContext: Record<string, any>
  blockOutputs: Map<string, BlockOutput>
  timestamps: MemoryTimestamps
}

Context Management

| Operation | Description | Source Reference |
|---|---|---|
| Store Turn | Save a conversation interaction | memory.ts |
| Retrieve Context | Load previous state for agent | memory.ts |
| Clear Memory | Reset context for new session | memory.ts |
| Merge Context | Combine multiple context sources | memory.ts |
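The four operations can be illustrated with an in-memory sketch. This is not the API of the real memory.ts, which persists state; the class and method names are assumptions mirroring the table above.

```typescript
// Illustrative in-memory implementation of the memory operations table.
interface ConversationTurn { role: 'user' | 'assistant'; content: string }

class AgentMemorySketch {
  private history: ConversationTurn[] = []
  private context: Record<string, any> = {}

  // Store Turn: save a conversation interaction.
  storeTurn(turn: ConversationTurn): void {
    this.history.push(turn)
  }

  // Retrieve Context: load previous state for the agent (copied defensively).
  retrieveContext(): { history: ConversationTurn[]; context: Record<string, any> } {
    return { history: [...this.history], context: { ...this.context } }
  }

  // Merge Context: combine multiple context sources (later keys win).
  mergeContext(extra: Record<string, any>): void {
    this.context = { ...this.context, ...extra }
  }

  // Clear Memory: reset context for a new session.
  clear(): void {
    this.history = []
    this.context = {}
  }
}
```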

Skills Resolver

The Skills Resolver (skills-resolver.ts) binds available tools and capabilities to agent instances based on workflow configuration and agent requirements.

Resolution Process

  1. Skill Discovery: Scan available tool registry for compatible skills
  2. Capability Matching: Match agent requirements with available tools
  3. Binding: Create stable references between agent and tools
  4. Validation: Verify all required skills are available
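The four steps above reduce to a lookup-and-validate pass over the registry. A hedged sketch with an assumed Skill shape:

```typescript
// Sketch of discover/match/bind/validate for skill resolution.
interface Skill { id: string; toolName: string }

function resolveSkills(
  required: string[],
  registry: Skill[]
): { bindings: Skill[]; missing: string[] } {
  const bindings: Skill[] = []
  const missing: string[] = []
  for (const id of required) {
    const skill = registry.find((s) => s.id === id) // capability matching
    if (skill) bindings.push(skill)                 // binding
    else missing.push(id)                           // validation: collect unmet requirements
  }
  return { bindings, missing }
}
```

A caller would treat a non-empty missing list as a validation failure before starting the agent.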

Skill Configuration

interface SkillBinding {
  skillId: string
  toolName: string
  parameters: SkillParameters
  enabled: boolean
  priority: number
}

Type System

The Agent System defines comprehensive TypeScript types in types.ts to ensure type safety across all components.

Core Types

| Type | Description | Usage |
|---|---|---|
| StreamingContext | Manages streaming response state | Tool execution tracking |
| ExecutionContext | Holds runtime execution data | Block state, outputs |
| OrchestratorOptions | Configuration for agent behavior | Execution parameters |
| ToolScope | Defines execution scope | main or subagent |
| ToolCallState | Tracks individual tool call status | Execution monitoring |

Tool Call States

stateDiagram-v2
    [*] --> pending: Tool Call Created
    pending --> executing: Execution Started
    executing --> success: Completed Successfully
    executing --> error: Execution Failed
    executing --> cancelled: User Cancelled
    success --> skipped: Result Not Needed
    error --> retry: Retry Attempt

Agent Block Definition

The Agent Block (agent.ts) defines the block-level configuration and metadata for agents within the Sim workflow system.

Block Structure

interface AgentBlockConfig {
  name: string
  description: string
  category: BlockCategory
  inputs: InputSpecification[]
  outputs: OutputSpecification[]
  parameters: AgentParameters
}

Block Categories

| Category | Description | Example Usage |
| --- | --- | --- |
| agent | Primary agent implementation | Main workflow agent |
| subagent | Nested agent for delegation | Specialized task agents |
| client | Client-delegated execution | External system integration |

Execution Context Factory

The executor-context.factory.ts provides utilities for creating and manipulating executor contexts used throughout the agent system.

Factory Functions

| Function | Purpose | Source |
| --- | --- | --- |
| createExecutorContext | Initialize new execution context | executor-context.factory.ts |
| createExecutorContextWithBlocks | Create context with pre-executed blocks | executor-context.factory.ts |
| addBlockState | Add block state to existing context | executor-context.factory.ts |
| createMinimalWorkflow | Create workflow for context | executor-context.factory.ts |

Executor Context Structure

interface ExecutorContext {
  blockStates: Map<string, ExecutorBlockState>
  executedBlocks: Set<string>
  workflow: SerializedWorkflow
  connections: SerializedConnection[]
  requestId: string
  abortSignal?: AbortSignal
}

Tool Execution Pipeline

graph LR
    A[Tool Call Request] --> B{Interactive Mode?}
    B -->|Yes| C{Client Executable?}
    B -->|No| D{Auto Execute?}
    C -->|Workflow Tool| E[Delegate to Client]
    C -->|Sim Executed| F[Execute Tool]
    D -->|Yes| G[Fire Tool Execution]
    D -->|No| H[Wait for Confirmation]
    E --> I[Update Tool State]
    F --> I
    G --> I
    H --> I
    I --> J[Report Result]
    J --> K[Update Memory]

Result Handling

The agent system handles multiple outcome types for tool executions:

| Outcome | Status | Description |
| --- | --- | --- |
| success | Completed successfully | Tool executed without errors |
| error | Execution failed | Tool encountered an error |
| cancelled | User cancelled | Execution was manually stopped |
| skipped | Not needed | Result was no longer required |

Configuration Options

Orchestrator Options

interface OrchestratorOptions {
  interactive?: boolean        // Require user confirmation
  autoExecuteTools?: boolean   // Auto-execute without prompt
  abortSignal?: AbortSignal    // Cancellation token
  timeout?: number             // Execution timeout
}

Streaming Context

interface StreamingContext {
  requestId: string
  toolCalls: Map<string, ToolCallState>
  pendingPromises: Map<string, Promise<ToolResult>>
  onToolResult?: (result: ToolResult) => void
}

Error Handling

The agent system implements comprehensive error handling across all execution paths:

  1. Tool Execution Errors: Caught and wrapped in standard error format
  2. Stream Dead Detection: Aborts pending tools when stream becomes unresponsive
  3. Timeout Handling: Respects abort signals for long-running operations
  4. State Validation: Ensures consistency before state transitions

Error Response Format

interface ToolErrorResponse {
  status: MothershipStreamV1ToolOutcome.error
  message: string
  data: {
    error: string
  }
}

Integration Points

The Agent System integrates with multiple platform components:

| Component | Integration Type | Data Flow |
| --- | --- | --- |
| Workflow Engine | Parent orchestrator | Initializes agent execution |
| Block Registry | Tool resolution | Discovers available skills |
| Memory Store | State persistence | Maintains conversation history |
| Webhook Providers | External triggers | Receives external events |
| Billing System | Usage tracking | Records execution metrics |

Testing

The Agent System includes comprehensive test coverage in agent-handler.test.ts and related test files, covering:

  • Tool execution scenarios (success, failure, cancellation)
  • Memory state management
  • Skills resolution logic
  • Streaming response handling
  • Error propagation paths

Source: https://github.com/simstudioai/sim / Human Manual

Copilot System

Related topics: Agent System, Architecture Overview


The Copilot System is a Sim-managed AI-powered service that enables users to generate workflow nodes, fix errors, and iterate on flows directly from natural language instructions. It serves as an intelligent assistant embedded within the Sim platform, providing real-time assistance for workflow creation and modification.

Overview

Copilot acts as an intelligent layer between users and the workflow engine, translating natural language inputs into executable workflow components. The system leverages large language models to understand user intent and generate appropriate code blocks, connections, and configurations within the Sim workflow environment.

Key Capabilities

| Capability | Description |
| --- | --- |
| Node Generation | Create new workflow blocks from natural language descriptions |
| Error Resolution | Identify and fix issues in existing workflows |
| Flow Iteration | Modify and improve workflow structures through conversational commands |
| Analytics Tracking | Monitor all Copilot operations for performance and billing |

Architecture

The Copilot System consists of multiple API endpoints and tracking components that work together to provide a seamless AI-assisted experience.

graph TD
    A[User Input] --> B[Copilot API Layer]
    B --> C[Chat Stream Route]
    B --> D[Checkpoints Route]
    B --> E[Models Route]
    B --> F[Training Route]
    C --> G[Trace Span Tracking]
    G --> H[Analytics Collection]
    H --> I[Billing System]

API Endpoints

The Copilot System exposes several REST API endpoints for different operations:

Chat Stream Endpoint

Handles real-time streaming of chat responses for Copilot interactions. This endpoint manages the bidirectional communication between the client and the AI model, providing immediate feedback as the Copilot processes natural language requests.

Checkpoints Route

Provides functionality for saving and retrieving workflow checkpoints during Copilot-assisted editing. This allows users to maintain version history and revert to previous states if needed.

Models Route

Manages the available AI models that power Copilot functionality. The system supports multiple model configurations and allows for dynamic model selection based on task requirements.

Training Route

Handles model fine-tuning and custom training workflows. This endpoint enables the system to learn from user interactions and improve response accuracy over time.

Trace Span Instrumentation

The Copilot System implements comprehensive OpenTelemetry trace spans for observability and monitoring. All trace span identifiers are defined in a generated contract file that ensures type safety and consistency between the frontend and backend.

Trace Span Categories

| Category | Spans | Purpose |
| --- | --- | --- |
| Chat Operations | chat.* | Track conversation flow and tool usage |
| Analytics | copilot.analytics.* | Monitor request metrics and billing |
| Context Management | context.* | Track context window operations |
| Authentication | auth.* | Security and rate limiting events |

Key Trace Span Identifiers

| Identifier | Description |
| --- | --- |
| copilot.analytics.flush | Analytics batch flush operation |
| copilot.analytics.save_request | Persist individual request data |
| copilot.analytics.update_billing | Update billing metrics |
| chat.setup | Initialize chat session |
| chat.continue_with_tool_results | Process tool execution results |
| context.reduce | Context window reduction |
| context.summarize_chunk | Summarize large context chunks |
| auth.validate_key | API key validation |
| auth.rate_limit.record | Rate limit tracking |
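As a small sketch of how the category prefixes can be used, the helper below (an assumption, not part of the generated contract file) maps a span identifier back to its documented category:

```typescript
// Category prefixes taken from the trace span categories table.
const SPAN_CATEGORIES = ["chat.", "copilot.analytics.", "context.", "auth."];

// Returns the matching category prefix for a span id, or null if the id
// does not belong to any documented category. Illustrative helper only.
function spanCategory(id: string): string | null {
  return SPAN_CATEGORIES.find((prefix) => id.startsWith(prefix)) ?? null;
}
```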

Integration with Workflows

Copilot integrates deeply with the Sim workflow engine through the block system. When generating nodes or fixing errors, Copilot communicates with the workflow registry to validate and persist changes.

Workflow Registry Integration

The system uses Zustand for state management when interacting with workflow data:

// Mock structure from test files
useWorkflowRegistry: {
  getState: () => ({
    activeWorkflowId: null,
  }),
}

Block Generation

Copilot generates workflow blocks by:

  1. Parsing natural language input
  2. Identifying required block types from the registry
  3. Validating block configurations against schema definitions
  4. Persisting generated blocks to the workflow state
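Steps 2-4 can be sketched as a validate-then-persist pipeline; `generateBlock` and its parameter shapes are hypothetical stand-ins for the block registry and schema machinery.

```typescript
// Hypothetical sketch of registry lookup, schema validation, and persistence.
interface BlockSpec {
  type: string;
  config: Record<string, unknown>;
}

function generateBlock(
  spec: BlockSpec,
  registry: Set<string>,                                  // known block types
  validate: (config: Record<string, unknown>) => boolean, // schema check stand-in
  workflow: BlockSpec[],                                  // current workflow state
): BlockSpec[] {
  if (!registry.has(spec.type)) {
    throw new Error(`Unknown block type: ${spec.type}`);
  }
  if (!validate(spec.config)) {
    throw new Error(`Invalid config for ${spec.type}`);
  }
  return [...workflow, spec]; // persist (immutably, for illustration)
}
```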

Self-Hosted Deployment

For self-hosted Sim instances, Copilot requires separate configuration:

API Key Setup

  1. Navigate to https://sim.ai
  2. Go to Settings → Copilot
  3. Generate a Copilot API key
  4. Set the COPILOT_API_KEY environment variable in apps/sim/.env

# Example environment variable
COPILOT_API_KEY=your_generated_api_key_here

Environment Configuration

The Copilot API key must be configured alongside other environment variables defined in the project's .env.example file. The system validates the API key on each request through the auth.validate_key trace span.

Analytics and Billing

The Copilot System implements a comprehensive analytics pipeline:

Analytics Flow

graph LR
    A[User Request] --> B[Save Request]
    B --> C[Update Billing]
    C --> D[Flush Analytics]
    D --> E[Persist to Storage]

Tracked Metrics

| Metric | Trace Span | Description |
| --- | --- | --- |
| Request Count | copilot.analytics.save_request | Individual Copilot invocations |
| Billing Units | copilot.analytics.update_billing | Usage-based billing data |
| Flush Events | copilot.analytics.flush | Batch processing completion |

Error Handling

The Copilot System handles various error scenarios through dedicated trace spans:

| Error Type | Trace Span | Handling |
| --- | --- | --- |
| Explicit Abort | chat.explicit_abort.* | Graceful termination of requests |
| Rate Limiting | auth.rate_limit.record | Throttling and quota enforcement |
| Auth Failures | auth.validate_key | Invalid API key rejection |

Dependencies

The Copilot functionality is managed through the Bun workspace and depends on the following core packages:

  • Next.js (App Router) - API route handling
  • Drizzle ORM - Data persistence
  • Zod - Schema validation for API contracts
  • Zustand - Client-side state management

Development dependencies include the documentation generator script located at scripts/generate-docs.ts which can be run with bun run generate-docs.

Summary

The Copilot System provides intelligent assistance for workflow creation and modification within Sim. Through a combination of streaming chat APIs, comprehensive trace instrumentation, and deep workflow integration, it enables natural language-driven development experiences. The system is designed for both cloud-hosted and self-hosted deployments, with full observability through OpenTelemetry trace spans and analytics tracking.

Source: https://github.com/simstudioai/sim / Human Manual

Deployment Guide

Related topics: Technology Stack, Background Jobs and Background Processing


Sim is a workflow automation platform that supports multiple deployment configurations. This guide covers self-hosted deployment options including Docker, Docker Compose, manual setup, and Kubernetes via Helm charts.

Overview

Sim can be deployed in several ways depending on your infrastructure requirements and operational capabilities:

| Deployment Method | Use Case | Complexity |
| --- | --- | --- |
| NPM Package (Docker) | Quick local testing | Low |
| Docker Compose | Single-server production | Medium |
| Manual Setup | Custom infrastructure | High |
| Helm Chart | Kubernetes clusters | Medium-High |

Prerequisites

Hardware Requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| CPU | 2 cores | 4+ cores |
| Memory | 4 GB RAM | 8+ GB RAM |
| Disk | 20 GB | 50+ GB SSD |
| Docker | 20.10+ | Latest |

Software Requirements

  • Docker must be installed and running
  • Bun runtime (for manual setup)
  • Node.js v20+ (for manual setup)
  • PostgreSQL 12+ with pgvector extension (for manual setup)

Self-Hosted: NPM Package (Docker)

The fastest way to get started with Sim using Docker.

Quick Start

npx simstudio

This launches Sim at http://localhost:3000.

Sources: README.md

Command Options

| Flag | Description | Default |
| --- | --- | --- |
| -p, --port <port> | Port to run Sim on | 3000 |
| --no-pull | Skip pulling latest Docker images | - |

Example Usage

# Run on custom port
npx simstudio --port 8080

# Skip image pull (use cached images)
npx simstudio --no-pull

Self-Hosted: Docker Compose

For production deployments on a single server, use the production Docker Compose configuration.

Standard Deployment

git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d

Open http://localhost:3000 to access Sim.

Sources: README.md

Architecture

graph TB
    subgraph "Docker Compose Stack"
        A["Next.js App<br>:3000"] --> B["PostgreSQL<br>:5432"]
        A --> C["Redis<br>:6379"]
        A --> D["Realtime Service<br>:3001"]
        D --> B
        D --> C
    end
    E["External Services"] --> A

Services

| Service | Image | Port | Purpose |
| --- | --- | --- | --- |
| app | simstudio/sim-app | 3000 | Main Next.js application |
| realtime | simstudio/sim-realtime | 3001 | WebSocket/real-time events |
| postgres | pgvector/pgvector | 5432 | Database with vector support |
| redis | redis:alpine | 6379 | Caching and session storage |

Self-Hosted: Local Models (Ollama/vLLM)

Sim supports local AI models via Ollama and vLLM for privacy-focused or offline deployments.

Ollama Integration

Sim integrates with Ollama to run local models for workflow execution.

git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.ollama.yml up -d

Ollama Configuration

graph LR
    A["Sim App"] --> B["Ollama Service"]
    B --> C["Local Models<br>llama2, mistral, etc."]

Supported Ollama Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| OLLAMA_HOST | Ollama server URL | http://localhost:11434 |
| OLLAMA_MODEL | Default model to use | - |

vLLM Integration

For high-performance local inference, configure vLLM:

# docker-compose.override.yml
services:
  app:
    environment:
      VLLM_HOST: "http://vllm:8000"
      VLLM_MODEL: "meta-llama/Llama-2-7b-hf"

See the Docker self-hosting docs for detailed setup instructions.

Sources: README.md

Self-Hosted: Manual Setup

For custom infrastructure or development environments, install Sim manually.

Step 1: Clone and Install Dependencies

git clone https://github.com/simstudioai/sim.git
cd sim
bun install
bun run prepare  # Set up pre-commit hooks

Step 2: PostgreSQL with pgvector Setup

docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 \
  -d \
  pgvector/pgvector:pg16

Step 3: Environment Configuration

Copy the example environment file and configure:

cd apps/sim
cp .env.example .env
# Edit .env with your configuration

Application Environment Variables

| Variable | Description | Required |
| --- | --- | --- |
| DATABASE_URL | PostgreSQL connection string | Yes |
| BETTER_AUTH_SECRET | Secret for authentication | Yes |
| ENCRYPTION_KEY | Data encryption key | Yes |
| NEXT_PUBLIC_APP_URL | Public application URL | Yes |
| BETTER_AUTH_URL | Authentication service URL | Yes |
| INTERNAL_API_SECRET | Internal API authentication | Yes |
| CRON_SECRET | Cron job authentication | Yes |

Step 4: Build and Start

bun run build
bun run start

Development Environment (Dev Container)

The repository includes a pre-configured development container.

Structure

graph TB
    subgraph ".devcontainer"
        A["Dev Container"] --> B["PostgreSQL"]
        A --> C["Redis"]
        A --> D["MailHog"]
    end
    A --> E["Sim App"]
    A --> F["Realtime Service"]

Dev Container Services

| Service | Port | Purpose |
| --- | --- | --- |
| Sim App | 3000 | Main application |
| PostgreSQL | 5432 | Database |
| Redis | 6379 | Caching |
| MailHog | 8025 | Email testing |

Helm Chart Deployment

For Kubernetes clusters, use the official Helm chart.

Installation

helm install sim ./helm/sim \
  --namespace sim \
  --create-namespace

Production Configuration

# values.yaml
app:
  replicaCount: 3
  env:
    NEXT_PUBLIC_APP_URL: "https://sim.example.com"
    BETTER_AUTH_URL: "https://sim.example.com"

postgresql:
  auth:
    database: simstudio
  primary:
    persistence:
      size: 50Gi

monitoring:
  enabled: true
  prometheus:
    enabled: true

Secrets Management

The Helm chart supports three methods for managing secrets, in order of production-readiness:

Method 1: Inline --set (Development Only)

helm install sim ./helm/sim --set app.env.BETTER_AUTH_SECRET=...
⚠️ Warning: Values set this way appear in helm get values output. Not recommended for production.

Method 2: Pre-existing Kubernetes Secret

kubectl create secret generic sim-app-secrets --namespace sim \
  --from-literal=BETTER_AUTH_SECRET=$(openssl rand -hex 32) \
  --from-literal=ENCRYPTION_KEY=$(openssl rand -hex 32) \
  --from-literal=INTERNAL_API_SECRET=$(openssl rand -hex 32) \
  --from-literal=CRON_SECRET=$(openssl rand -hex 32)

kubectl create secret generic sim-postgres-secret --namespace sim \
  --from-literal=POSTGRES_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=')

Reference secrets in values:

app:
  secrets:
    existingSecret:
      enabled: true
      name: sim-app-secrets

postgresql:
  auth:
    existingSecret:
      enabled: true
      name: sim-postgres-secret
      passwordKey: POSTGRES_PASSWORD

Method 3: External Secrets Operator (Recommended for Production)

Integrate with AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault.

Autoscaling Configuration

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

When autoscaling.enabled=true, the chart omits spec.replicas from the Deployment so the HPA owns the replica count. Requires metrics-server in the cluster.

Sources: helm/sim/README.md

Network Policy

Enable east-west isolation and block cloud metadata endpoints:

networkPolicy:
  enabled: true

Key Helm Configuration Reference

| Parameter | Description | Default |
| --- | --- | --- |
| app.replicaCount | Number of app replicas | 1 |
| app.image.repository | App image repository | simstudio/sim-app |
| app.image.tag | App image tag | appVersion |
| app.env.NEXT_PUBLIC_APP_URL | Public app URL | localhost:3000 |
| app.env.BETTER_AUTH_URL | Auth service URL | localhost:3000 |
| autoscaling.enabled | Enable HPA | false |
| monitoring.enabled | Enable monitoring | false |
| networkPolicy.enabled | Enable network policies | false |

Important URLs Configuration

⚠️ Critical: app.env.NEXT_PUBLIC_APP_URL and app.env.BETTER_AUTH_URL must match your public origin (e.g., https://sim.example.com). Leaving them as localhost breaks sign-in functionality.

Environment Variables Reference

Application (.env.example)

| Variable | Required | Description |
| --- | --- | --- |
| DATABASE_URL | Yes | PostgreSQL connection string |
| BETTER_AUTH_SECRET | Yes | Authentication secret |
| BETTER_AUTH_URL | Yes | Authentication service URL |
| NEXT_PUBLIC_APP_URL | Yes | Public application URL |
| ENCRYPTION_KEY | Yes | Data encryption key |
| INTERNAL_API_SECRET | Yes | Internal API secret |
| CRON_SECRET | Yes | Cron job secret |
| REDIS_URL | No | Redis connection URL |
| SOCKET_SERVER_URL | No | WebSocket server URL |
| OLLAMA_URL | No | Ollama server URL |
| SMTP_* | No | Email configuration |

Realtime Service (.env.example)

| Variable | Required | Description |
| --- | --- | --- |
| REDIS_URL | Yes | Redis connection URL |
| DATABASE_URL | Yes | PostgreSQL connection string |
| INTERNAL_API_SECRET | Yes | Internal API secret |

Testing Deployments

Load Testing

The repository includes Artillery load testing configurations:

# Workflow load testing
bunx artillery run scripts/load/workflow-waves.yml

# Isolation testing
bunx artillery run scripts/load/workflow-isolation.yml

Docker Health Checks

# Check service status
docker compose ps

# View logs
docker compose logs -f app

# Restart services
docker compose restart

Troubleshooting

Common Issues

| Issue | Solution |
| --- | --- |
| Sign-in fails | Verify NEXT_PUBLIC_APP_URL and BETTER_AUTH_URL match public origin |
| Database connection failed | Check DATABASE_URL and ensure PostgreSQL is running |
| WebSocket connection failed | Verify SOCKET_SERVER_URL is accessible |
| Image pull fails | Use --no-pull flag or check Docker registry access |
| Autoscaling not working | Ensure metrics-server is installed in cluster |

Log Locations

| Environment | Log Command |
| --- | --- |
| Docker Compose | docker compose logs -f [service] |
| Kubernetes | kubectl logs -n sim -l app=sim |
| Helm | helm status sim -n sim |

Source: https://github.com/simstudioai/sim / Human Manual

Background Jobs and Background Processing

Related topics: Workflow Executor Engine, Deployment Guide


Overview

The Sim platform uses a robust background job system to handle asynchronous, long-running, and resource-intensive operations outside the request-response cycle. This architecture enables workflows, schedules, webhooks, and other processing tasks to execute reliably without blocking user interactions.

The background processing system is designed around a job queue abstraction that supports multiple backend implementations, allowing the platform to scale horizontally and handle high-throughput scenarios with proper concurrency control.

Architecture

System Components

The background processing architecture consists of three main layers:

  1. Job Queue Interface - A unified abstraction for enqueuing, monitoring, and managing jobs
  2. Backend Implementations - Pluggable backends (database, trigger.dev) that handle actual job processing
  3. Job Handlers - Specific implementations for different job types (workflow, schedule, webhook, etc.)

graph TD
    subgraph "Job Producers"
        API[API Request]
        Schedule[Scheduled Trigger]
        Webhook[Webhook Trigger]
        Table[Table Cell Execution]
    end

    subgraph "Job Queue Interface"
        Queue[JobQueue API]
        Enqueue[enqueue / batchEnqueue]
        GetJob[getJob / startJob]
        Cancel[cancelJob]
    end

    subgraph "Backends"
        DB[(Database Backend)]
        TD[Trigger.dev Backend]
    end

    subgraph "Job Handlers"
        WE[Workflow Execution]
        SE[Schedule Execution]
        HE[Webhook Execution]
        KC[Knowledge Connector Sync]
        RE[Resume Execution]
    end

    API --> Queue
    Schedule --> Queue
    Webhook --> Queue
    Table --> Queue

    Queue --> Enqueue
    Queue --> GetJob
    Queue --> Cancel

    Enqueue --> DB
    Enqueue --> TD

    DB --> WE
    DB --> SE
    DB --> HE
    DB --> KC
    DB --> RE

    TD --> WE

Backend Types

The system supports two backend implementations for job queues:

| Backend Type | Identifier | Description |
| --- | --- | --- |
| Database | database | Built-in queue using database storage, suitable for self-hosted deployments |
| Trigger.dev | trigger-dev | External job processing service for cloud deployments |

Sources: apps/sim/lib/core/async-jobs/types.ts:40

Job Queue Interface

Core Interface Methods

The JobQueue interface provides a unified API for all job operations:

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| enqueue | type: JobType, payload: TPayload, options?: EnqueueOptions | Promise<string> | Add a single job to the queue |
| batchEnqueue | type: JobType, items: Array<{payload, options?}> | Promise<string[]> | Add multiple jobs as a batch |
| getJob | jobId: string | Promise<Job \| null> | Retrieve job by ID |
| startJob | jobId: string | Promise<void> | Mark job as started/processing |
| completeJob | jobId: string, output: unknown | Promise<void> | Mark job as completed with output |
| markJobFailed | jobId: string, error: string | Promise<void> | Mark job as failed with error |
| cancelJob | jobId: string | Promise<void> | Request job cancellation |

Sources: apps/sim/lib/core/async-jobs/types.ts:1-30
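As an illustration of the method contract, here is a toy in-memory queue covering a subset of the operations (enqueue → startJob → completeJob/markJobFailed). It is a sketch only, not either real backend.

```typescript
// Minimal in-memory queue honoring the documented signatures.
type JobStatus = "pending" | "processing" | "completed" | "failed" | "cancelled";

interface Job {
  id: string;
  type: string;
  payload: unknown;
  status: JobStatus;
  output?: unknown;
  error?: string;
}

class InMemoryJobQueue {
  private jobs = new Map<string, Job>();
  private seq = 0;

  async enqueue(type: string, payload: unknown): Promise<string> {
    const id = `job-${++this.seq}`;
    this.jobs.set(id, { id, type, payload, status: "pending" });
    return id;
  }

  async getJob(id: string): Promise<Job | null> {
    return this.jobs.get(id) ?? null; // unknown id resolves to null, not an error
  }

  async startJob(id: string): Promise<void> {
    this.mustGet(id).status = "processing";
  }

  async completeJob(id: string, output: unknown): Promise<void> {
    const job = this.mustGet(id);
    job.status = "completed";
    job.output = output;
  }

  async markJobFailed(id: string, error: string): Promise<void> {
    const job = this.mustGet(id);
    job.status = "failed";
    job.error = error;
  }

  private mustGet(id: string): Job {
    const job = this.jobs.get(id);
    if (!job) throw new Error(`Unknown job: ${id}`);
    return job;
  }
}
```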

Job Configuration Options

Jobs can be configured with the following options:

| Option | Type | Description |
| --- | --- | --- |
| metadata | object | Additional metadata including workflow ID, workspace ID, and correlation data |
| concurrencyKey | string | Key for per-key concurrency limiting |
| concurrencyLimit | number | Maximum concurrent jobs for this key (database backend only) |
| tags | string[] | Tags for categorization (e.g., tableId:xxx, rowId:yyy) |
| runner | function | Custom job body for database backend when no external worker exists |

Sources: apps/sim/lib/core/async-jobs/types.ts:55-75

Job Types

The platform defines several job types for different processing scenarios:

export type JobType = 
  | 'workflow'
  | 'workflow-group-cell'
  | 'schedule'
  | 'webhook'
  | 'knowledge-connector-sync'
  | 'resume'

Workflow Execution (`workflow`)

The core job type for executing workflows. Handles the full lifecycle from triggering to completion.

Handler: executeWorkflowJob

Payload includes:

  • workflowId - Target workflow identifier
  • workspaceId - Workspace containing the workflow
  • input - Input data for the workflow
  • executionId - Unique execution identifier
  • source - Trigger source (e.g., 'api', 'table', 'schedule')

Sources: apps/sim/background/workflow-execution.ts

Workflow Group Cell (`workflow-group-cell`)

Executes workflow groups for table rows. Supports high-concurrency table-based workflow execution.

Handler: executeWorkflowGroupCellJob

Key Features:

  • Table concurrency limiting (TABLE_CONCURRENCY_LIMIT)
  • Per-row execution tracking
  • Correlation with table and row identifiers

sequenceDiagram
    participant Table as Table Scheduler
    participant Queue as Job Queue
    participant Worker as Cell Worker
    
    Table->>Queue: batchEnqueue(workflow-group-cell, runs[])
    Queue-->>Table: jobIds[]
    Worker->>Queue: getJob(jobId)
    Worker->>Worker: executeWorkflowGroupCellJob(payload)
    Worker->>Queue: completeJob(jobId, output)

Sources: apps/sim/lib/table/workflow-columns.ts:30-60

Schedule Execution (`schedule`)

Handles time-based workflow triggers defined by schedules.

Handler: executeScheduleJob

Sources: apps/sim/background/schedule-execution.ts

Webhook Execution (`webhook`)

Processes incoming webhook payloads and triggers associated workflows.

Handler: executeWebhookJob

Sources: apps/sim/background/webhook-execution.ts

Knowledge Connector Sync (`knowledge-connector-sync`)

Synchronizes data between external knowledge sources and the platform.

Handler: executeKnowledgeConnectorSyncJob

Sources: apps/sim/background/knowledge-connector-sync.ts

Resume Execution (`resume`)

Resumes previously paused or checkpointed workflow executions.

Handler: executeResumeJob

Sources: apps/sim/background/resume-execution.ts

Concurrency Control

Table Concurrency

For table-based workflow execution, the system enforces a concurrency limit to prevent resource exhaustion:

const TABLE_CONCURRENCY_LIMIT = 5

Jobs for the same table are grouped by concurrencyKey to ensure ordered processing while allowing parallel execution across different tables.

Sources: apps/sim/lib/table/workflow-columns.ts:50
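The effect of a per-key limit can be sketched with a simple worker-pool limiter: at most `limit` payloads run at once for a given key. `runWithKeyLimit` is purely illustrative; the real database backend's scheduling differs.

```typescript
// Run items with at most `limit` concurrent executions, mimicking the
// per-concurrencyKey cap described above (e.g. TABLE_CONCURRENCY_LIMIT = 5).
async function runWithKeyLimit<T>(
  items: T[],
  limit: number,
  run: (item: T) => Promise<void>,
): Promise<void> {
  let next = 0;
  // Each worker pulls the next item sequentially; `limit` workers run in parallel.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const item = items[next++];
      await run(item);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
}
```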

Job Tagging

Jobs are tagged for tracking and monitoring:

| Tag Format | Example | Purpose |
| --- | --- | --- |
| tableId:{id} | tableId:abc123 | Identifies the source table |
| rowId:{id} | rowId:row456 | Identifies the source row |
| group:{id} | group:grp789 | Identifies the workflow group |

Sources: apps/sim/lib/table/workflow-columns.ts:55

Job Correlation and Tracing

Metadata Structure

Each job carries correlation metadata for distributed tracing:

interface CorrelationData {
  executionId: string
  requestId: string
  source: 'workflow' | 'api' | 'schedule' | 'webhook' | 'table'
  workflowId: string
  triggerType: string
}

Request ID Format

Request IDs follow a consistent naming convention based on job type:

| Job Type | Request ID Format | Example |
| --- | --- | --- |
| Workflow Group Cell | wfgrp-{executionId} | wfgrp-exec-123 |

Sources: apps/sim/lib/table/workflow-columns.ts:43

Job Lifecycle

State Transitions

stateDiagram-v2
    [*] --> Queued: enqueue()
    Queued --> Processing: startJob()
    Processing --> Completed: completeJob()
    Processing --> Failed: markJobFailed()
    Processing --> Cancelled: cancelJob()
    Queued --> Cancelled: cancelJob()
    Cancelled --> [*]
    Completed --> [*]
    Failed --> [*]

Status Definitions

| Status | Description |
| --- | --- |
| pending | Job is queued but not yet picked up |
| queued | Job is in the queue (alternative state) |
| processing | Job is currently being executed |
| completed | Job finished successfully |
| failed | Job encountered an error |
| cancelled | Job was cancelled before completion |
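The state diagram implies a transition guard like the following sketch. The transition table is derived from the diagram above; the backend's actual guard logic may differ.

```typescript
// Allowed transitions, taken from the lifecycle state diagram.
const TRANSITIONS: Record<string, string[]> = {
  queued: ["processing", "cancelled"],
  processing: ["completed", "failed", "cancelled"],
  completed: [], // terminal
  failed: [],    // terminal
  cancelled: [], // terminal
};

// Returns true if moving from `from` to `to` is a legal transition.
function canTransition(from: string, to: string): boolean {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```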

Error Handling and Cancellation

Best-Effort Cancellation

The cancelJob method implements best-effort cancellation:

  • Unknown or already-completed jobs resolve quietly (no error thrown)
  • Underlying provider rejections fail loudly to alert operators

/**
 * Request cancellation of a queued or running job. Best-effort: backends should
 * fail loudly if the underlying provider rejects, but a missing/unknown jobId
 * should resolve quietly so callers can drive cancel from possibly-stale state.
 */
cancelJob(jobId: string): Promise<void>

Sources: apps/sim/lib/core/async-jobs/types.ts:28-33

Runner Functions

For the database backend, jobs include a runner function that is executed as a fire-and-forget IIFE (Immediately Invoked Function Expression). This allows the database row to drive the job through processing states:

runner?: <TPayload>(
  payload: TPayload, 
  signal: AbortSignal
) => Promise<void>

The AbortSignal is driven by cancelJob, enabling graceful shutdown of cancelled jobs.

Sources: apps/sim/lib/core/async-jobs/types.ts:62-69
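A hedged sketch of a runner that checks the signal between steps; unlike the documented `Promise<void>` signature, this version returns the completed step count so the cancellation effect is observable.

```typescript
// Illustrative runner: processes steps until done or until cancelJob()
// aborts the signal, mirroring the graceful-shutdown behavior described above.
async function runner(payload: { steps: number }, signal: AbortSignal): Promise<number> {
  let done = 0;
  for (let i = 0; i < payload.steps; i++) {
    if (signal.aborted) break; // stop cleanly between steps on cancellation
    done++;
    await new Promise((r) => setTimeout(r, 1)); // simulated unit of work
  }
  return done;
}
```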

Batch Enqueue Operations

Batch Processing Flow

graph LR
    A[Pending Runs] --> B[Map to Job Items]
    B --> C{Backend Type}
    C -->|Database| D[Single Multi-Row INSERT]
    C -->|Trigger.dev| E[tasks.batchTrigger]
    D --> F[Return jobIds in input order]
    E --> F
    F --> G[Promise.allSettled Fallback]
    G -->|If batch fails| H[Individual Enqueue]

The batch enqueue operation:

  1. Maps pending runs to job items with full metadata and options
  2. Attempts batch enqueue via the queue backend
  3. Falls back to individual enqueue if batch fails
  4. Returns one jobId per item in input order

Sources: apps/sim/lib/table/workflow-columns.ts:60-75
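The fallback flow can be sketched as follows; `batchEnqueueWithFallback` and its callback parameters are illustrative, not the platform's actual helper.

```typescript
// Try the backend's batch path; if it throws, enqueue items individually
// via Promise.allSettled, preserving the input order of jobIds.
async function batchEnqueueWithFallback<T>(
  items: T[],
  batch: (items: T[]) => Promise<string[]>,
  single: (item: T) => Promise<string>,
): Promise<string[]> {
  try {
    return await batch(items);
  } catch {
    const settled = await Promise.allSettled(items.map(single));
    // An empty string marks an item that failed even on individual enqueue.
    return settled.map((r) => (r.status === "fulfilled" ? r.value : ""));
  }
}
```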

Integration with SDKs

Python SDK Async Execution

The Python SDK provides async execution support through the execute_workflow method with async_execution=True:

result = client.execute_workflow(
    'workflow-id',
    {'message': 'Hello'},
    async_execution=True
)
# Returns AsyncExecutionResult with job_id and status_url

TypeScript SDK Async Execution

Similarly, the TypeScript SDK supports async execution:

const result = await client.executeWorkflow('workflow-id', { data: 'input' }, {
  asyncExecution: true
});
// Returns AsyncExecutionResult with jobId

Job status can be monitored via getJobStatus(jobId).

Sources: packages/python-sdk/README.md, packages/ts-sdk/README.md
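A caller can poll job status until a terminal state is reached. The sketch below assumes a getJobStatus-style callback and takes its terminal status names from the lifecycle section; it is not the SDKs' exact API.

```typescript
// Poll a status callback until the job reaches a terminal state.
async function waitForJob(
  getStatus: (jobId: string) => Promise<{ status: string }>,
  jobId: string,
  intervalMs = 50,
  maxAttempts = 100,
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const { status } = await getStatus(jobId);
    if (status === "completed" || status === "failed" || status === "cancelled") {
      return status; // terminal states per the job lifecycle section
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish after ${maxAttempts} polls`);
}
```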

Testing Support

The testing utilities in packages/testing provide factories for creating workflow test fixtures:

  • createWorkflowState() - Base workflow state
  • createLinearWorkflow(n) - Sequential workflow with n blocks
  • createBranchingWorkflow() - Conditional branching workflow
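
To illustrate, a linear factory along these lines might produce a blocks-and-edges state; this shape is an assumption for illustration, not the actual factory's return type:

```typescript
// Hypothetical sketch of createLinearWorkflow(n): n blocks chained
// sequentially. The real factory in packages/testing may return a
// richer workflow state.
interface WorkflowState {
  blocks: { id: string }[];
  edges: { source: string; target: string }[];
}

function createLinearWorkflow(n: number): WorkflowState {
  const blocks = Array.from({ length: n }, (_, i) => ({ id: `block-${i}` }));
  // Connect each block to the next, forming a linear chain.
  const edges = blocks.slice(0, -1).map((block, i) => ({
    source: block.id,
    target: blocks[i + 1].id,
  }));
  return { blocks, edges };
}
```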

Sources: packages/testing/src/factories/workflow.factory.ts

Summary

The background job system in Sim provides:

  1. Unified Queue Interface - Consistent API across different job types and backends
  2. Multiple Backend Support - Database for self-hosted, Trigger.dev for cloud deployments
  3. Rich Job Metadata - Correlation data, tags, and concurrency controls for observability
  4. Reliable Execution - State management, cancellation support, and retry capabilities
  5. Batch Operations - Efficient bulk enqueue with fallback to individual operations
  6. SDK Integration - Async execution support in both Python and TypeScript SDKs

Sources: apps/sim/lib/core/async-jobs/types.ts:40

Doramagic Pitfall Log

Doramagic extracted 16 source-linked risk signals. Review them before installing or handing real data to the project.

1. Installation risk: Open-source general purpose agent with built-in MCPToolkit support

  • Severity: medium
  • Finding: Open-source general purpose agent with built-in MCPToolkit support 15 May 2025 · ... MCP for local-agent workflows · r/LocalLLaMA - A visual ... r/commandline - CLI tool to simplify open source monitoring agent installation.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: social_signal:reddit | ssig_7a250aac9fa1441c8186a7b73d669d8f | https://www.reddit.com/r/LocalLLaMA/comments/1kn8m8t/opensource_general_purpose_agent_with_builtin/ | Open-source general purpose agent with built-in MCPToolkit support

2. Configuration risk: Configuration risk needs validation

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: Configuration risk needs validation. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: capability.host_targets | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | host_targets=cursor

3. Capability assumption: README/documentation is current enough for a first validation pass.

  • Severity: medium
  • Finding: README/documentation is current enough for a first validation pass.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: capability.assumptions | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | README/documentation is current enough for a first validation pass.

4. Project risk: v0.6.63

  • Severity: medium
  • Finding: Project risk is backed by a source signal: v0.6.63. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.63

5. Project risk: v0.6.65

  • Severity: medium
  • Finding: Project risk is backed by a source signal: v0.6.65. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.65

6. Project risk: v0.6.67

  • Severity: medium
  • Finding: Project risk is backed by a source signal: v0.6.67. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.67

7. Project risk: v0.6.73

  • Severity: medium
  • Finding: Project risk is backed by a source signal: v0.6.73. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.73

8. Maintenance risk: v0.6.71

  • Severity: medium
  • Finding: Maintenance risk is backed by a source signal: v0.6.71. Treat it as a review item until the current version is checked.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/simstudioai/sim/releases/tag/v0.6.71

9. Maintenance risk: Maintainer activity is unknown

  • Severity: medium
  • Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: evidence.maintainer_signals | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | last_activity_observed missing

10. Security or permission risk: no_demo

  • Severity: medium
  • Finding: no_demo
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: downstream_validation.risk_items | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | no_demo; severity=medium

11. Security or permission risk: No sandbox install has been executed yet; downstream must verify before user use.

  • Severity: medium
  • Finding: No sandbox install has been executed yet; downstream must verify before user use.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: risks.safety_notes | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | No sandbox install has been executed yet; downstream must verify before user use.

12. Security or permission risk: no_demo

  • Severity: medium
  • Finding: no_demo
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: risks.scoring_risks | art_4c769793e2db44b09c3ad8f55672a2ea | https://github.com/simstudioai/sim#readme | no_demo; severity=medium

Source: Doramagic discovery, validation, and Project Pack records

Community Discussion Evidence

Doramagic exposes project-level community discussion separately from official documentation. These external discussion links are review inputs, not standalone proof that the project is production-ready.

  • Sources: 12 project-level external discussion links are exposed on this manual page.
  • Use: Review before install. Open the linked issues or discussions before treating the pack as ready for your environment.

Review these links before using sim with real data or production workflows.

Source: Project Pack community evidence and pitfall evidence