Doramagic Project Pack · Human Manual

Introduction to ComfyUI

Related topics: Installation Guide, System Architecture


Overview

ComfyUI is a powerful, modular AI creation engine designed for visual professionals who demand precise control over every model, parameter, and output. It provides a node graph-based interface that enables users to generate images, videos, 3D models, audio, and other AI-driven content with granular control over the entire generation pipeline.

Sources: README.md

Key Characteristics

| Characteristic | Description |
|---|---|
| Type | AI Generation Engine |
| Interface | Node Graph / Visual Programming |
| License | Open Source |
| Platforms | Windows, Linux, macOS, Cloud |
| GPU Support | NVIDIA, AMD (ROCm), Intel, Apple Silicon, Ascend, Iluvatar |

ComfyUI natively supports the latest open-source state-of-the-art models and provides API nodes for accessing closed-source models such as Seedance, Hunyuan3D, and others through the online Comfy API.

Sources: README.md

Core Features

Model Support

ComfyUI provides extensive support for various AI model types:

| Model Category | Examples | Documentation Link |
|---|---|---|
| Stable Diffusion | SD 1.x, SD 2.x, SDXL, SD 3.x | Examples |
| ControlNet/T2I-Adapter | Various preprocessors | ControlNet Guide |
| LoRA/LyCORIS | Regular, locon, loha variants | LoRA Guide |
| Upscaling Models | ESRGAN, SwinIR, Swin2SR | Upscale Guide |
| Latent Models | LCM models and Loras | LCM Guide |

Sources: README.md

Advanced Capabilities

  • Textual Inversion & Hypernetworks: Advanced embedding techniques for custom styling
  • Area Composition: Multi-region generation with precise control
  • Inpainting: Both regular and inpainting-specific models supported
  • Model Merging: Combine multiple models for unique outputs
  • Latent Previews: Real-time preview with TAESD for high-quality previews
  • Workflow Export: Save/load workflows as JSON, embed in PNG/WebP/FLAC metadata
  • Offline Operation: Core functionality works completely offline

Sources: README.md

System Architecture

High-Level Architecture

graph TD
    subgraph "Frontend Layer"
        UI[User Interface]
        WS[WebSocket Handler]
    end
    
    subgraph "API Layer"
        REST[REST API Routes]
        INT[Internal Routes]
    end
    
    subgraph "Core Execution Engine"
        SG[Scheduling Graph]
        EX[Execution Engine]
        NODE[Node Registry]
    end
    
    subgraph "Model Management"
        MM[Model Manager]
        LM[Loader Manager]
    end
    
    subgraph "Backend Services"
        UM[User Manager]
        FM[Frontend Manager]
    end
    
    UI <--> WS
    WS <--> REST
    REST <--> INT
    REST <--> SG
    SG <--> EX
    EX <--> NODE
    MM <--> LM
    UM <--> FM
    
    style UI fill:#e1f5fe
    style EX fill:#fff3e0
    style MM fill:#e8f5e9

Node Type System

ComfyUI uses a typed node system for type-safe workflow construction. The comfy_types module provides abstract base classes and type hints:

classDiagram
    class ComfyNodeABC {
        <<abstract>>
        +INPUT_TYPES() InputTypeDict
        +FUNCTION() str
        +OUTPUT_NODE() bool
        +CATEGORY() str
        +RETURN_TYPES() tuple
    }
    
    class CheckLazyMixin {
        <<mixin>>
    }
    
    class IO {
        <<enum>>
        +ANY: "*"
        +NUMBER: "FLOAT,INT"
        +PRIMITIVE: "STRING,FLOAT,INT,BOOLEAN"
    }
    
    ComfyNodeABC <-- CheckLazyMixin
    ComfyNodeABC ..> IO : uses

Sources: comfy/comfy_types/README.md

Execution Model

ComfyUI employs a smart execution model that optimizes workflow processing:

graph LR
    A[Submit Workflow] --> B{Changed?}
    B -->|First Run| C[Execute All Valid Paths]
    B -->|Unchanged| D[Skip Execution]
    B -->|Partial Change| E[Execute Changed + Dependencies]
    C --> F[Output Results]
    E --> F
    D --> F

Execution Rules:

  • Only parts of the graph with all correct inputs will be executed
  • Only parts that change between executions are re-run
  • Submitting the same graph twice executes only the first instance

Sources: README.md
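These rules amount to hashing each node's resolved inputs and reusing cached results on a match. A minimal sketch of that idea (hypothetical; not ComfyUI's actual cache implementation):

```python
import hashlib
import json

_cache: dict[str, object] = {}

def node_key(node_id: str, inputs: dict) -> str:
    """Stable hash of a node's id plus its resolved input values."""
    payload = json.dumps({"id": node_id, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_node(node_id: str, inputs: dict, fn):
    """Execute fn only when this (node, inputs) pair hasn't run before."""
    key = node_key(node_id, inputs)
    if key in _cache:
        return _cache[key], False          # cache hit: skip execution
    result = fn(**inputs)
    _cache[key] = result
    return result, True                    # cache miss: executed

# First submission executes; an identical resubmission is skipped.
out1, ran1 = run_node("ksampler", {"steps": 20}, lambda steps: steps * 2)
out2, ran2 = run_node("ksampler", {"steps": 20}, lambda steps: steps * 2)
```

Changing any input value produces a new key, which is why only changed parts of a graph (and their dependents) re-run.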

Installation

Supported Platforms

| Platform | GPU Options | Installation Type |
|---|---|---|
| Windows | NVIDIA, AMD, Intel, CPU | Portable Package, Manual Install |
| Linux | NVIDIA, AMD (ROCm), Intel, CPU | Manual Install |
| macOS | Apple Silicon (M1/M2), CPU | Manual Install |
| Cloud | Comfy Cloud | Desktop Application |

Sources: README.md

Quick Start Commands

# Windows/Linux Manual Installation
pip install -r requirements.txt
python main.py

# NVIDIA GPU (Stable)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130

# NVIDIA GPU (Nightly)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu132

# AMD GPU (ROCm)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm6.1

# Apple Silicon
# Install PyTorch nightly per Apple Developer Guide
pip install -r requirements.txt

ComfyUI-Manager Setup

ComfyUI-Manager provides extension management capabilities:

# Install dependencies
pip install -r manager_requirements.txt

# Enable with flags
python main.py --enable-manager

| Manager Flag | Description |
|---|---|
| --enable-manager | Enable ComfyUI-Manager |
| --enable-manager-legacy-ui | Use legacy manager UI |
| --disable-manager-ui | Keep background features only |

Sources: README.md

User Interface

Keyboard Shortcuts

| Shortcut | Action |
|---|---|
| Ctrl+Z / Ctrl+Y | Undo/Redo |
| Ctrl+S | Save workflow |
| Ctrl+O | Load workflow |
| Ctrl+A | Select all nodes |
| Alt+C | Collapse/uncollapse selected |
| Ctrl+M | Mute/unmute selected |
| Ctrl+B | Bypass selected (reconnect wires) |
| Delete/Backspace | Delete selected nodes |
| Space + Drag | Pan canvas |
| Ctrl+Click / Shift+Click | Add to selection |
| Ctrl+C / Ctrl+V | Copy/paste nodes |
| Ctrl+Shift+V | Paste with connections |
| Shift+Drag | Move multiple nodes |
| Ctrl+D | Load default graph |
| Alt++ / Alt+- | Zoom in/out |
| P | Pin/unpin nodes |
| Ctrl+G | Group selected |
| Double-Click | Open node search palette |

Sources: README.md

Preview Methods

ComfyUI supports multiple preview rendering methods:

| Method | Quality | Performance | Setup |
|---|---|---|---|
| auto | Variable | Variable | Default |
| taesd | High | Fast | Download TAESD decoder models |

To enable high-quality previews with TAESD:

  1. Download the TAESD decoder files to the models/vae_approx folder.
  2. Launch with the preview flag:

```bash
python main.py --preview-method taesd
```

Sources: README.md

API and Integration

API Structure

ComfyUI provides a comprehensive REST API for external integrations:

graph TD
    EXT[External Application] -->|HTTP/REST| API[API Server]
    API -->|v2/userdata| UM[User Data Management]
    API -->|v2/modelinfo| MM[Model Info]
    API -->|v2/history| H[Execution History]
    EXT -->|WebSocket| WS[WebSocket Connection]
    WS -->|Real-time| STATUS[Execution Status]

Internal Routes

All routes under /internal are designated for internal ComfyUI use only. These routes may change at any time without notice and are not intended for external application use.

Sources: api_server/routes/internal/README.md

User Data API

The user data management system provides secure file operations:

| Endpoint | Method | Description |
|---|---|---|
| /v2/userdata | GET | List directory contents |
| /v2/userdata/{path} | POST | Upload file |
| /v2/userdata/{file} | DELETE | Delete file |
| /v2/userdata/{file}/move/{dest} | POST | Move/rename file |

Query Parameters for Listing:

  • path: Relative path within user's data directory
  • recurse: Enable recursive directory listing
  • full_info: Return detailed file information
  • split: Return path as array split by /

Sources: app/user_manager.py
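As an illustration, a listing-request URL for this endpoint can be assembled as follows. Only the endpoint path and parameter names come from the API above; the default host/port and the lowercase string encoding of boolean flags are assumptions:

```python
from urllib.parse import urlencode

def userdata_list_url(base: str, path: str, recurse: bool = False,
                      full_info: bool = False, split: bool = False) -> str:
    """Build a GET URL for the /v2/userdata listing endpoint (sketch)."""
    params = {"path": path}
    # Assumed encoding: boolean flags passed as lowercase strings.
    if recurse:
        params["recurse"] = "true"
    if full_info:
        params["full_info"] = "true"
    if split:
        params["split"] = "true"
    return f"{base}/v2/userdata?{urlencode(params)}"

url = userdata_list_url("http://localhost:8188", "workflows", recurse=True)
```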

Model Discovery

The model manager provides intelligent model discovery with metadata extraction:

graph TD
    A[Model Path] --> B{Extension Check}
    B -->|.safetensors| C[Extract Metadata]
    B -->|.preview| D[Add Preview Image]
    B -->|Other| E[Standard Add]
    C --> F[Parse ssmd_cover_images]
    D --> R[Result List]
    E --> R
    F --> R

The system extracts preview images embedded in SafeTensors metadata under the ssmd_cover_images key.

Sources: app/model_manager.py
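The SafeTensors format stores a little-endian uint64 header length followed by a JSON header, with user-supplied metadata under the __metadata__ key. A small sketch of extracting that metadata (illustrative; not the model manager's actual code):

```python
import io
import json
import struct

def read_safetensors_metadata(f) -> dict:
    """Parse the JSON header of a .safetensors stream, return __metadata__."""
    (header_len,) = struct.unpack("<Q", f.read(8))   # little-endian uint64
    header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

# Build an in-memory file with the same layout for demonstration.
header = json.dumps({"__metadata__": {"ssmd_cover_images": "[]"}}).encode()
blob = io.BytesIO(struct.pack("<Q", len(header)) + header)
meta = read_safetensors_metadata(blob)
```

A real file would also carry tensor descriptors in the header; the model manager only needs the ssmd_cover_images entry to recover embedded previews.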

Frontend Management

Version Control

ComfyUI supports flexible frontend version management:

graph LR
    A[Default Frontend] --> B[Specific Version]
    A --> C[Latest/Daily]
    A --> D[Legacy Frontend]
    
    B -.->|v1.2.2| E[Stable]
    C -.->|daily| F[Cutting Edge]
    D -.->|legacy| G[Compatibility]

| Version String | Description |
|---|---|
| Comfy-Org/ComfyUI_frontend@v1.2.2 | Specific stable version |
| Comfy-Org/ComfyUI_frontend@latest | Latest release |
| Comfy-Org/ComfyUI_frontend@prerelease | Pre-release build |

Version Pattern:

^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$

Sources: app/frontend_management.py
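The pattern can be exercised directly with Python's re module; parse_frontend_version below is a hypothetical helper for illustration, not a ComfyUI function:

```python
import re

# The version pattern quoted above, verbatim.
VERSION_RE = re.compile(
    r"^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)"
    r"@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$"
)

def parse_frontend_version(spec: str):
    """Split a provider/repo@version string into its three parts, or None."""
    m = VERSION_RE.match(spec)
    return m.groups() if m else None

ok = parse_frontend_version("Comfy-Org/ComfyUI_frontend@v1.2.2")
bad = parse_frontend_version("no-at-sign")
```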

Custom Frontends

Frontends are stored in a configurable directory structure:

CUSTOM_FRONTENDS_ROOT/
├── Comfy-Org_ComfyUI_frontend/
│   ├── v1.2.2/
│   ├── v1.3.0/
│   └── latest/
└── custom_provider_custom_frontend/
    └── v2.0.0/

The system supports embedding custom documentation and workflow templates through separate pip packages (comfyui-embedded-docs, comfyui-workflow-templates).

Sources: app/frontend_management.py

Security Features

TLS/SSL Support

ComfyUI supports HTTPS for secure connections:

# Generate self-signed certificate
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
    -sha256 -days 3650 -nodes \
    -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"

# Launch with TLS
python main.py --tls-keyfile key.pem --tls-certfile cert.pem

Note: Self-signed certificates are not appropriate for shared or production environments.

Sources: README.md

Manager Security

The --disable-manager-ui flag allows keeping security checks and scheduled installation completion while disabling the manager UI and endpoints.

Sources: README.md

Release Process

ComfyUI follows a structured release cycle:

graph TD
    A[Commit to Repository] --> B{Which Branch?}
    B -->|Master| C[Weekly Release Candidate]
    B -->|Stable Tag| D[Backport Fixes]
    C --> E[Major Version v0.X.Y]
    D --> F[Patch Version v0.4.X]
    
    E -.->|~2 weeks| G[Next Major]
    F -.->|as needed| H[Stable Update]

| Release Type | Frequency | Target |
|---|---|---|
| Major Version | ~2 weeks | Monday (variable) |
| Patch Version | As needed | Stable branch backports |
| Nightly Commits | Ongoing | Master branch (unstable) |

Warning: Commits outside stable release tags may be very unstable and break custom nodes.

Sources: README.md

See Also

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

Installation Guide

Related topics: Introduction to ComfyUI


Overview

This guide covers all supported methods for installing ComfyUI, including local installations on Windows, Linux, and macOS, as well as platform-specific considerations for NVIDIA, AMD, Intel, and Apple Silicon GPUs. ComfyUI is designed to be modular and works fully offline—the core will never download anything unless explicitly requested by the user.

Sources: README.md:1-50

Installation Methods Overview

ComfyUI supports multiple installation approaches to accommodate different user needs and technical expertise levels.

graph TD
    A[ComfyUI Installation] --> B[Desktop Application]
    A --> C[Windows Portable Package]
    A --> D[Manual Installation]
    
    D --> E[Windows]
    D --> F[Linux]
    D --> G[macOS]
    
    E --> H[NVIDIA GPU]
    E --> I[AMD GPU]
    E --> J[Intel GPU]
    
    F --> K[NVIDIA GPU]
    F --> L[AMD ROCm]
    F --> M[Intel XPU]
    
    G --> N[Apple Silicon M1/M2]

Prerequisites

System Requirements

| Component | Minimum | Recommended |
|---|---|---|
| GPU VRAM | 4GB | 8GB+ |
| RAM | 8GB | 16GB+ |
| Disk Space | 10GB | 20GB+ |
| OS | Windows 10, Linux, macOS | Windows 11, Latest Linux/macOS |

GPU Support Matrix

| GPU Vendor | Support Level | Backend |
|---|---|---|
| NVIDIA | Full | CUDA (cu130/cu132) |
| AMD | Full (ROCm) | ROCm |
| Intel | Full (XPU) | oneAPI |
| Apple Silicon | Full | Metal/MPS |

Sources: README.md:200-280

PyTorch Installation

PyTorch is the core dependency required for ComfyUI. The installation command varies by hardware platform.

NVIDIA GPUs

For stable PyTorch with CUDA support:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130

For nightly builds with potential performance improvements:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu132

Sources: README.md:180-195

AMD GPUs (ROCm)

For AMD GPUs using ROCm, install the ROCm-compatible PyTorch build:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm

For experimental memory-efficient attention on recent PyTorch with AMD GPUs:

TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention

For non-officially supported AMD cards, use environment variable overrides:

| GPU Series | Command |
|---|---|
| AMD 6700, 6600 (RDNA2) | HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py |
| AMD 7600 (RDNA3) | HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py |

Additional performance tuning options:

PYTORCH_TUNABLEOP_ENABLED=1 python main.py

Sources: README.md:220-260

Intel GPUs (XPU)

For Intel discrete GPUs and APUs using the XPU backend:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/xpu

For nightly builds with potential improvements:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu

Sources: README.md:160-178

Apple Silicon (M1/M2)

  1. Install the latest PyTorch nightly following Apple's Accelerated PyTorch training on Mac developer guide.
  2. Follow the manual installation instructions for your operating system.
  3. Install ComfyUI dependencies as specified in the Dependencies section.

Sources: README.md:290-310

Troubleshooting PyTorch

If you encounter the error "Torch not compiled with CUDA enabled":

pip uninstall torch

Then reinstall using the appropriate command for your hardware from the sections above.

Sources: README.md:196-199

Dependencies Installation

After installing PyTorch, install the core ComfyUI dependencies:

pip install -r requirements.txt

This installs all required Python packages for ComfyUI to function properly. After this step, ComfyUI should be ready to run.

Sources: README.md:286-288

Windows Portable Package

For Windows users seeking a portable, self-contained installation:

  1. Download the portable standalone build from the releases page.
  2. Extract the archive to your desired location.
  3. Run python main.py or the provided executable.

This package includes everything needed to run ComfyUI on NVIDIA GPUs or in CPU-only mode.

Sources: README.md:95-110

Manual Installation

Windows and Linux

graph LR
    A[Download/Clone Repository] --> B[Install PyTorch]
    B --> C[Install Dependencies]
    C --> D[Configure Model Paths]
    D --> E[Launch ComfyUI]

#### Step 1: Clone or Download the Repository

git clone https://github.com/Comfy-Org/ComfyUI.git
cd ComfyUI

#### Step 2: Install PyTorch

Follow the PyTorch installation instructions for your GPU in the PyTorch Installation section above.

#### Step 3: Install Dependencies

pip install -r requirements.txt

#### Step 4: Launch

python main.py

Sources: README.md:280-295

Model Path Configuration

ComfyUI supports an optional configuration file to set custom search paths for models, useful if you have models stored in a different location or shared across multiple installations.

Copy the example configuration:

cp extra_model_paths.yaml.example extra_model_paths.yaml

Edit extra_model_paths.yaml to specify your model directories:

# Example extra_model_paths.yaml
models:
  checkpoints: /path/to/your/checkpoints
  loras: /path/to/your/loras
  vae: /path/to/your/vae

Sources: extra_model_paths.yaml.example

ComfyUI-Manager

ComfyUI-Manager is an extension that simplifies installation, updating, and management of custom nodes.

Installation

  1. Navigate to your ComfyUI installation directory.
  2. Clone the ComfyUI-Manager repository into the custom_nodes folder:

cd custom_nodes
git clone https://github.com/Comfy-Org/ComfyUI-Manager.git

  3. Install the manager dependencies:

pip install -r manager_requirements.txt

Sources: README.md:330-345

Enabling ComfyUI-Manager

Start ComfyUI with the --enable-manager flag:

python main.py --enable-manager

Manager Command Line Options

| Flag | Description |
|---|---|
| --enable-manager | Enable ComfyUI-Manager |
| --enable-manager-legacy-ui | Use the legacy manager UI (requires --enable-manager) |
| --disable-manager-ui | Disable manager UI while keeping background features (requires --enable-manager) |

Sources: README.md:346-365

Desktop Application

For the easiest getting-started experience, download the official Desktop Application.

This method requires no technical configuration and is recommended for new users.

Sources: README.md:55-65

Cloud Deployment

For users without local hardware, ComfyUI is available on Comfy Cloud:

  • Official paid cloud version hosted at comfy.org/cloud
  • No local hardware required
  • Full ComfyUI functionality

Sources: README.md:66-70

Advanced Configuration

Multi-User Setup

For server deployments with multiple users, enable multi-user mode:

python main.py --multi-user

This enables server-side user profile storage instead of browser-based storage.

Sources: app/user_manager.py:25-35

Frontend Version Management

ComfyUI ships its frontend as a separate pip package. To specify a frontend version:

python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest

For stable releases:

python main.py --front-end-version Comfy-Org/ComfyUI_frontend@v1.2.2

For legacy frontend:

python main.py --front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest

Sources: app/frontend_management.py:40-75

Additional Command Line Arguments

| Argument | Description |
|---|---|
| --preview-method auto | Enable previews with automatic method selection |
| --preview-method taesd | Use TAESD for high-quality previews |
| --tls-keyfile <file> | Path to TLS private key |
| --tls-certfile <file> | Path to TLS certificate |
| --use-pytorch-cross-attention | Use PyTorch cross-attention implementation |
| --disable-api-nodes | Disable optional API nodes |

Sources: README.md:15-45

Post-Installation Verification

After installation, verify your setup by:

  1. Launching ComfyUI: python main.py
  2. Opening the web interface (typically http://localhost:8188)
  3. Running a simple workflow to confirm GPU acceleration is working

If previews are enabled, you should see latent preview updates during image generation, confirming the installation is functioning correctly.

Sources: README.md:10-20

Common Issues

| Issue | Solution |
|---|---|
| "Torch not compiled with CUDA enabled" | Reinstall PyTorch with CUDA support |
| Import errors | Run pip install -r requirements.txt |
| Model not found | Configure extra_model_paths.yaml or check model paths |
| Manager installation fails | Ensure manager_requirements.txt dependencies are installed |

Sources: README.md:196-199

Sources: README.md:1-50

System Architecture

Related topics: Server System, Execution Engine, Model Loading and Detection


Overview

ComfyUI is a modular AI creation engine designed with a node-graph architecture that enables complex workflow orchestration for generative AI models. The system architecture follows a client-server model where the backend provides REST API endpoints for workflow execution, model management, and user administration, while the frontend communicates via WebSocket and HTTP protocols to render the visual node editor and manage execution state.

Sources: README.md

High-Level Architecture

graph TD
    subgraph Client
        Frontend["Web Frontend<br/>(React-based)"]
    end
    
    subgraph Server["ComfyUI Server"]
        API["REST API Routes"]
        WS["WebSocket Handler"]
        Execution["Execution Engine"]
        UserMgr["User Manager"]
        ModelMgr["Model Manager"]
    end
    
    subgraph Storage
        Models["Model Files"]
        Settings["User Settings"]
        Cache["File Cache"]
    end
    
    Frontend <-->|HTTP/WS| API
    Frontend <-->|WS| WS
    API <--> UserMgr
    API <--> ModelMgr
    Execution <--> Models
    UserMgr <--> Settings
    ModelMgr <--> Cache

Core Components

Execution Engine

The execution engine is the computational core of ComfyUI, responsible for processing node graphs in topological order. It implements intelligent caching where only parts of the graph that have changed between executions are re-processed.

Key Characteristics:

  • Only parts of the graph that have an output with all the correct inputs will be executed
  • Only parts of the graph that change from each execution to the next will be executed
  • If the same graph is submitted twice, only the first will be executed
  • If the last part of the graph changes, only that part and its dependents are re-executed

Sources: README.md

User Manager

The UserManager class handles multi-user support and user-specific settings storage.

classDiagram
    class UserManager {
        +settings: AppSettings
        +users: dict
        +__init__()
        +get_users_file(): str
    }
    
    class AppSettings {
        +__init__(user_manager)
        +get_default_user(): str
    }

User Configuration:

| Parameter | Description | Default |
|---|---|---|
| multi_user | Enable multiple user profiles | False |
| User Directory | Location for user-specific data | folder_paths.get_user_directory() |

Initialization Logic:

# Single-user mode (default)
self.users = {"default": "default"}

# Multi-user mode (with --multi-user flag)
if os.path.isfile(self.get_users_file()):
    with open(self.get_users_file()) as f:
        self.users = json.load(f)

Sources: app/user_manager.py:1-50

Model Manager

The ModelFileManager class provides centralized model file discovery and caching.

graph LR
    A[Model Request] --> B[Cache Check]
    B -->|Hit| C[Return Cached]
    B -->|Miss| D[Scan Directories]
    D --> E[Build File List]
    E --> F[Cache Result]
    F --> C

Cache Data Structure:

| Field | Type | Description |
|---|---|---|
| key | str | Cache identifier |
| value | tuple[list[dict], dict[str, float], float] | Models list, metadata, timestamp |
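A toy version of such a timestamped cache might look like this (ModelListCache is a hypothetical illustration; the real ModelFileManager also tracks folder modification times to invalidate entries):

```python
import time

class ModelListCache:
    """Tiny sketch of a timestamped model-list cache keyed by folder name."""

    def __init__(self, ttl: float = 30.0):
        self.ttl = ttl
        # key -> (models list, folder mtimes, timestamp), mirroring the table.
        self._store: dict[str, tuple[list[dict], dict[str, float], float]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        models, folder_mtimes, stamp = entry
        if time.monotonic() - stamp > self.ttl:   # stale: force a rescan
            del self._store[key]
            return None
        return models

    def put(self, key: str, models: list[dict], folder_mtimes: dict[str, float]):
        self._store[key] = (models, folder_mtimes, time.monotonic())

cache = ModelListCache()
cache.put("checkpoints", [{"name": "sd_xl_base.safetensors"}], {})
hit = cache.get("checkpoints")
miss = cache.get("loras")
```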

Model Discovery Features:

  • Recursive directory scanning with glob patterns
  • Safe file filtering by extension and content type
  • Support for safetensors metadata extraction
  • Preview image detection (*.preview files)

Sources: app/model_manager.py:1-80

Frontend Management

The FrontendManagement class handles frontend version control and installation verification.

Version Parsing Pattern:

{provider}/{repo}@{version}

Example: Comfy-Org/ComfyUI_frontend@v1.2.2

Validation Regex:

^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$

Package Discovery:

| Package Type | Purpose |
|---|---|
| comfyui-frontend-package | Main frontend assets |
| comfyui-workflow-templates | Workflow template files |
| comfyui-embedded-docs | Embedded documentation |

Sources: app/frontend_management.py:1-100

API Routes Architecture

REST Endpoints

graph TD
    R1["GET /v2/userdata"] --> UM[UserManager]
    R2["GET /experiment/models"] --> MM[ModelFileManager]
    R3["GET /experiment/models/{folder}"] --> MM

File Listing Parameters:

| Parameter | Type | Description |
|---|---|---|
| path | str | Relative path within data directory |
| recurse | bool | Enable recursive directory traversal |
| full_info | bool | Return full file metadata |
| split | bool | Return path as array (split by /) |

Response Format:

class FileInfo(TypedDict):
    path: str      # Relative file path
    size: int      # File size in bytes
    modified: int  # Modification time (milliseconds)
    created: int   # Creation time (milliseconds)

Sources: app/user_manager.py:60-100
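Such a record can be built from os.stat, converting nanosecond timestamps to milliseconds (a sketch of the idea; the endpoint's actual implementation may differ):

```python
import os
import tempfile
from typing import TypedDict

class FileInfo(TypedDict):
    path: str      # Relative file path
    size: int      # File size in bytes
    modified: int  # Modification time (milliseconds)
    created: int   # Creation time (milliseconds)

def file_info(base: str, rel_path: str) -> FileInfo:
    """Build a FileInfo record for a file under a base directory."""
    st = os.stat(os.path.join(base, rel_path))
    return FileInfo(
        path=rel_path,
        size=st.st_size,
        modified=int(st.st_mtime_ns // 1_000_000),
        created=int(st.st_ctime_ns // 1_000_000),
    )

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "workflow.json"), "w") as f:
        f.write("{}")
    info = file_info(d, "workflow.json")
```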

Type System Architecture

ComfyUI implements a comprehensive type hinting system for node development.

classDiagram
    class ComfyNodeABC {
        <<abstract>>
        +INPUT_TYPES: InputTypeDict
    }
    
    class IO {
        <<enumeration>>
        ANY = "*"
        NUMBER = "FLOAT,INT"
        PRIMITIVE = "STRING,FLOAT,INT,BOOLEAN"
    }
    
    ComfyNodeABC --> IO

Built-in IO Types:

| Type | Value | Description |
|---|---|---|
| ANY | "*" | Accepts any input type |
| NUMBER | "FLOAT,INT" | Numeric values |
| PRIMITIVE | "STRING,FLOAT,INT,BOOLEAN" | Basic data types |

Sources: comfy/comfy_types/README.md
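A minimal custom node following this contract looks like the sketch below (schematic example; real node packages register their classes via a NODE_CLASS_MAPPINGS dict so ComfyUI can discover them):

```python
class UppercaseText:
    """Minimal node following the ComfyNodeABC contract (illustrative)."""

    @classmethod
    def INPUT_TYPES(cls):
        # "required" maps input names to (type, options) tuples.
        return {"required": {"text": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("STRING",)   # one STRING output
    FUNCTION = "run"             # name of the method the engine invokes
    CATEGORY = "examples"

    def run(self, text):
        # Outputs are always returned as a tuple.
        return (text.upper(),)

# Custom node packages expose their classes via this mapping.
NODE_CLASS_MAPPINGS = {"UppercaseText": UppercaseText}
```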

Configuration and CLI Arguments

Command Line Options

| Flag | Description |
|---|---|
| --enable-manager | Enable ComfyUI-Manager |
| --enable-manager-legacy-ui | Use legacy manager UI |
| --disable-manager-ui | Disable manager UI (keep background features) |
| --disable-api-nodes | Disable optional API nodes |
| --preview-method {auto,taesd} | Preview generation method |
| --front-end-version | Specify frontend version |

Sources: README.md

Environment Variables

| Variable | Purpose | Example |
|---|---|---|
| HSA_OVERRIDE_GFX_VERSION | AMD GPU compatibility | 10.3.0 for RDNA2 |
| TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL | ROCm memory optimization | 1 |
| PYTORCH_TUNABLEOP_ENABLED | PyTorch tuning | 1 |

Sources: README.md

Data Flow: Workflow Execution

sequenceDiagram
    participant Client
    participant API
    participant Execution
    participant Cache
    participant Models
    
    Client->>API: Submit Workflow Graph
    API->>Execution: Parse Graph
    Execution->>Cache: Check Node States
    Cache-->>Execution: Cached Results
    Execution->>Models: Load Required Models
    Models-->>Execution: Model Data
    Execution->>Execution: Topological Sort
    Execution->>Execution: Execute Changed Nodes
    Execution-->>API: Output Results
    API-->>Client: WebSocket Update

Node Graph Structure

ComfyUI workflows are represented as directed acyclic graphs (DAGs) where:

  • Nodes represent computational units (e.g., model loading, sampling, encoding)
  • Edges represent data flow between nodes
  • Execution Order is determined by topological sorting based on input dependencies

graph LR
    subgraph Inputs
        Model["Model Loader"]
        Clip["CLIP Text Encode"]
        Latent["Empty Latent"]
    end
    
    subgraph Process
        Sampler["KSampler"]
    end
    
    subgraph Outputs
        Decode["VAE Decode"]
        Image["Save Image"]
    end
    
    Model --> Sampler
    Clip --> Sampler
    Latent --> Sampler
    Sampler --> Decode
    Decode --> Image
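Topological ordering of such a graph can be computed with Kahn's algorithm. The sketch below uses node names from the example graph above (illustrative; not ComfyUI's actual executor code):

```python
from collections import deque

def topological_order(edges: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm: order nodes so every edge points forward."""
    indegree = {n: 0 for n in edges}
    for targets in edges.values():
        for t in targets:
            indegree[t] += 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for t in edges[n]:
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    if len(order) != len(edges):
        raise ValueError("graph contains a cycle")
    return order

# The loaders feed KSampler, which feeds VAE Decode, then Save Image.
graph = {
    "ModelLoader": ["KSampler"],
    "CLIPTextEncode": ["KSampler"],
    "EmptyLatent": ["KSampler"],
    "KSampler": ["VAEDecode"],
    "VAEDecode": ["SaveImage"],
    "SaveImage": [],
}
order = topological_order(graph)
```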

Release Process Architecture

ComfyUI maintains three interconnected repositories with different release cadences:

| Repository | Branch | Release Cycle | Purpose |
|---|---|---|---|
| ComfyUI Core | master | ~2 weeks | Major stable releases |
| ComfyUI Core | tags | as needed | Patch fixes for stable |
| Frontend | various | weekly | UI updates |

Versioning Scheme:

  • Major versions (e.g., v0.7.0) for significant releases
  • Minor versions for master branch releases
  • Patch versions for backported fixes

Sources: README.md

Security Considerations

Multi-User Mode

When --multi-user is enabled:

  • User settings are stored server-side instead of browser local storage
  • Each user has isolated data directories
  • User settings persist across sessions

File Access Control

The /v2/userdata endpoint implements path validation:

  • Prevents directory traversal attacks
  • Validates paths are within user's data directory
  • Returns appropriate HTTP status codes (400, 404) for invalid requests

Sources: app/user_manager.py:80-120
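One common way to implement such a check is to normalize the joined path and verify it stays under the user root (a sketch under that assumption; not ComfyUI's exact validation code):

```python
import os

def resolve_user_path(user_root: str, requested: str):
    """Resolve a client-supplied relative path, rejecting traversal
    outside the user's data directory."""
    root = os.path.realpath(user_root)
    candidate = os.path.realpath(os.path.join(root, requested))
    # A safe path must stay at or below the user root after normalization.
    if candidate == root or candidate.startswith(root + os.sep):
        return candidate
    return None   # caller maps this to an HTTP 400/404

safe = resolve_user_path("/srv/comfyui/users/default", "workflows/a.json")
blocked = resolve_user_path("/srv/comfyui/users/default", "../../etc/passwd")
```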

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

Server System

Related topics: System Architecture, Execution Engine


Overview

The ComfyUI Server System is the core backend infrastructure responsible for handling client connections, executing workflows, managing files, and orchestrating the AI generation pipeline. Built on top of aiohttp, the server provides both REST API endpoints and WebSocket-based real-time communication for seamless interaction between the frontend interface and backend processing engines.

The server acts as the central hub that manages:

  • Client connections via WebSocket protocol
  • Workflow execution scheduling and queue management
  • File operations for models, outputs, and user data
  • Frontend delivery and management
  • User authentication and multi-user support

Sources: server.py | protocol.py

Architecture Overview

graph TB
    subgraph "Client Layer"
        Frontend[Frontend UI]
        ExternalAPI[External API Clients]
    end

    subgraph "Server Core"
        WSS[WebSocket Server]
        REST[REST API Routes]
        Auth[Authentication Layer]
    end

    subgraph "Services Layer"
        Exec[Execution Engine]
        Terminal[Terminal Service]
        FileOps[File Operations]
        Queue[Queue Manager]
    end

    subgraph "Data Layer"
        Models[Model Manager]
        Users[User Manager]
        Settings[App Settings]
    end

    Frontend --> WSS
    ExternalAPI --> REST
    WSS --> Auth
    REST --> Auth
    Auth --> Exec
    Exec --> Queue
    Exec --> Terminal
    FileOps --> Models
    FileOps --> Users
    FileOps --> Settings

Protocol Layer

WebSocket Protocol

The ComfyUI server uses a custom WebSocket-based protocol for real-time communication between the client and server. This protocol enables:

  • Bidirectional messaging - Both client and server can send messages independently
  • Execution events - Real-time updates on workflow execution progress
  • Prompt submission - Sending workflows for execution
  • History tracking - Recording and retrieving execution history

Sources: protocol.py

Message Types

| Message Type | Direction | Purpose |
|---|---|---|
| executing | Server → Client | Notification when a node begins execution |
| executed | Server → Client | Notification when a node completes execution |
| execution_error | Server → Client | Reports errors during workflow execution |
| progress | Server → Client | Progress updates for long-running operations |
| executing_node | Server → Client | Identifies currently executing node |
| prompt | Client → Server | Submit workflow for execution |
| interrupt | Client → Server | Request to interrupt current execution |
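A server-to-client event can be serialized as a small JSON envelope carrying the message type and its payload; the field names below are illustrative rather than the exact wire format:

```python
import json

def make_event(event_type: str, data: dict) -> str:
    """Serialize a server-to-client event as a JSON text frame (sketch)."""
    return json.dumps({"type": event_type, "data": data})

# Hypothetical payload: which node of which prompt just started executing.
frame = make_event("executing", {"node": "12", "prompt_id": "abc"})
decoded = json.loads(frame)
```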

Server Core Components

Main Server Entry Point

The server.py file contains the main server initialization and lifecycle management. Key responsibilities include:

  • Initializing the aiohttp web application
  • Registering routes and middleware
  • Setting up WebSocket endpoints
  • Managing server lifecycle (start, stop, restart)

# Server initialization pattern
app = web.Application()
server = Server()
server.setup_routes(app)
web.run_app(app, host=host, port=port)

Sources: server.py

API Routes Structure

The server organizes routes into logical namespaces:

| Route Namespace | Purpose |
| --- | --- |
| /api | Public REST API endpoints |
| /internal | Internal server-to-server communication |
| /v2/userdata | User data management endpoints |
| /experiment | Experimental features |

Internal Routes

Internal routes under /internal are designated for ComfyUI's internal use only and may change without notice. These routes handle:

  • System-level operations
  • Queue management
  • Execution state tracking
  • Server configuration

Sources: api_server/routes/internal/internal_routes.py

Services Layer

Terminal Service

The Terminal Service manages pseudo-terminal functionality for executing external processes. This service is crucial for:

  • Running Python scripts within workflows
  • Executing system commands
  • Managing subprocess lifecycle

The service provides:

  • PTY (pseudo-terminal) allocation
  • Stream multiplexing
  • Process lifecycle management

Sources: api_server/services/terminal_service.py

File Operations

The file operations module provides utilities for:

| Operation | Description |
| --- | --- |
| Directory listing | Recursive and non-recursive file traversal |
| File metadata | Size, creation time, modification time |
| Path validation | Security checks against path traversal |
| User data access | Isolated access to user-specific directories |
```python
# File info structure returned by file operations
class FileInfo(TypedDict):
    path: str      # Relative path from base directory
    size: int      # File size in bytes
    modified: int  # Modification timestamp (milliseconds)
    created: int   # Creation timestamp (milliseconds)
```

The list_userdata_v2 endpoint provides structured access to user data directories with proper security constraints.

Sources: api_server/utils/file_operations.py
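The FileInfo structure above can be produced with a straightforward directory walk; the sketch below (the list_files name is hypothetical, not ComfyUI's implementation) shows the metadata conversion to millisecond timestamps:

```python
import os
from typing import TypedDict

class FileInfo(TypedDict):
    path: str      # Relative path from the base directory
    size: int      # File size in bytes
    modified: int  # Modification timestamp (milliseconds)
    created: int   # Creation timestamp (milliseconds)

def list_files(base_dir: str) -> list[FileInfo]:
    """Recursively collect FileInfo entries under base_dir (illustrative sketch)."""
    results: list[FileInfo] = []
    for root, _dirs, files in os.walk(base_dir):
        for name in files:
            full = os.path.join(root, name)
            st = os.stat(full)
            results.append(FileInfo(
                path=os.path.relpath(full, base_dir).replace(os.sep, "/"),
                size=st.st_size,
                modified=int(st.st_mtime * 1000),
                created=int(st.st_ctime * 1000),
            ))
    return results
```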

Queue Manager

The queue manager handles workflow scheduling:

  • Priority queuing - Higher priority prompts execute first
  • Execution caching - Identical graphs skip re-execution
  • Partial execution - Only changed portions of graphs execute

Execution behavior notes:

  • Only parts of the graph with all correct inputs will be executed
  • Only parts that change between executions are re-run
  • Submitting the same graph twice results in only the first execution
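The scheduling and deduplication rules above can be sketched with a heap plus a result cache keyed by a stable hash of the graph. All class and method names here are hypothetical, chosen only to illustrate the behavior:

```python
import hashlib
import heapq
import itertools
import json

def graph_key(graph: dict) -> str:
    """Stable hash of a workflow graph, so identical submissions dedupe."""
    return hashlib.sha256(json.dumps(graph, sort_keys=True).encode()).hexdigest()

class PromptQueue:
    """Priority scheduling with result caching (illustrative sketch)."""
    def __init__(self):
        self._heap: list = []
        self._order = itertools.count()   # tie-breaker: submission order
        self._queued: set[str] = set()
        self.results: dict[str, object] = {}

    def submit(self, graph: dict, priority: int = 0) -> str:
        key = graph_key(graph)
        # An identical graph that is already queued or finished is not re-run.
        if key not in self._queued and key not in self.results:
            self._queued.add(key)
            heapq.heappush(self._heap, (priority, next(self._order), key, graph))
        return key

    def run_next(self, execute):
        """Pop the highest-priority prompt (lowest number) and execute it."""
        if not self._heap:
            return None
        _, _, key, graph = heapq.heappop(self._heap)
        self._queued.discard(key)
        self.results[key] = execute(graph)
        return key
```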

Data Management

User Manager

The UserManager handles multi-user support and user settings:

  • User directory management - Isolated storage per user
  • Settings persistence - Server-side storage instead of browser localStorage
  • Multi-user mode - Enabled via --multi-user CLI flag

| Setting | Description |
| --- | --- |
| multi_user | CLI argument to enable multiple user profiles |
| user_directory | Base directory for user-specific data |
| users_file | JSON file storing user configurations |

User data is stored in the user directory with each user having isolated access to their own data.

Sources: app/user_manager.py

Model Manager

The ModelFileManager provides:

  • Model discovery - Listing models by type and folder
  • Metadata extraction - Reading safetensors headers for preview images
  • Preview generation - Supporting preview thumbnails for models

| Feature | Supported Formats |
| --- | --- |
| Preview Images | PNG, JPG, WebP |
| Model Metadata | safetensors headers |
| Preview Thumbnails | Base64-encoded in safetensors metadata |

The /experiment/models endpoint provides a structured listing of available model types and folders.

Sources: app/model_manager.py
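Reading a safetensors header is simple: the file begins with a little-endian uint64 giving the JSON header length, followed by that many bytes of JSON, with model metadata (including any embedded preview) under the __metadata__ key. A minimal reader:

```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    """Return the parsed JSON header of a .safetensors file.

    Layout: 8-byte little-endian header length, then that many bytes of
    JSON (possibly padded with trailing spaces, which json.loads accepts).
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))
```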

Frontend Management

Frontend management handles the web UI delivery:

  • Version management - Supports specific versions, nightly builds, or stable releases
  • Custom frontends - Allows loading frontends from external repositories
  • Embedded docs - Integration with embedded documentation package
```shell
# Example: using a specific frontend version
--front-end-version Comfy-Org/[email protected]

# Using the legacy frontend
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
```

| Frontend Provider | Description |
| --- | --- |
| PyPI (stable) | Default stable releases |
| GitHub | Cutting-edge daily updates |
| Custom | Repository-specific versions |

Sources: app/frontend_management.py

Security Model

User Data Isolation

The server implements strict user data isolation:

  • Each user has a dedicated data directory
  • Path traversal attacks are prevented via glob.escape()
  • User data endpoints validate paths against allowed directories
  • Multi-user mode requires explicit CLI activation
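The path check can be illustrated with os.path.commonpath; this is a sketch of the kind of validation the userdata endpoints perform (the real code combines several checks, including glob escaping), with a hypothetical function name:

```python
import os

def resolve_user_path(base_dir: str, requested: str) -> str:
    """Resolve a client-supplied relative path, rejecting anything that
    escapes base_dir via '..' segments or absolute paths."""
    base = os.path.abspath(base_dir)
    full = os.path.abspath(os.path.join(base, requested))
    # commonpath collapses to a shorter prefix if full escapes base.
    if os.path.commonpath([base, full]) != base:
        raise PermissionError(f"path escapes user directory: {requested}")
    return full
```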

Internal Routes Protection

Routes under /internal are explicitly marked as:

  • Not intended for external application use
  • Subject to change without notice
  • Internal ComfyUI functionality only

Configuration

CLI Arguments

| Argument | Description |
| --- | --- |
| --enable-manager | Enable the ComfyUI-Manager extension |
| --enable-manager-legacy-ui | Use the legacy manager UI |
| --disable-manager-ui | Disable the manager UI while keeping background features |
| --multi-user | Enable multiple user profiles |
| --front-end-version | Specify the frontend version |
| --preview-method | Set the preview generation method (auto, taesd) |
| --tls-keyfile | TLS private key file path |
| --tls-certfile | TLS certificate file path |

Environment Variables

| Variable | Purpose |
| --- | --- |
| TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL | Enable experimental ROCm features |
| PYTORCH_TUNABLEOP_ENABLED | Enable PyTorch tuning for potential speed improvements |
| HSA_OVERRIDE_GFX_VERSION | Override AMD GPU architecture detection |

Execution Flow

sequenceDiagram
    participant Client
    participant Server
    participant Queue
    participant Executor

    Client->>Server: WebSocket Connect
    Server->>Client: Connection Acknowledged

    Client->>Server: Submit Prompt (workflow)
    Server->>Queue: Add to execution queue
    Server->>Client: Queue position acknowledged

    loop Execution
        Queue->>Executor: Dequeue next task
        Executor->>Executor: Execute node(s)
        Executor->>Server: Progress updates
        Server->>Client: Real-time execution events

        alt Node executes successfully
            Executor->>Server: Node completed
            Server->>Client: "executed" message
        else Execution error
            Executor->>Server: Error details
            Server->>Client: "execution_error" message
        end
    end

    Executor->>Server: All nodes complete
    Server->>Client: Execution complete

Summary

The ComfyUI Server System provides a robust, event-driven architecture for AI workflow execution. Built on aiohttp, it combines:

  • WebSocket-based real-time communication for interactive execution monitoring
  • RESTful API endpoints for external integration
  • Service-oriented design for modularity and maintainability
  • Strong security boundaries through user isolation and path validation

The server seamlessly integrates with the frontend to deliver a responsive user experience while managing complex AI model execution pipelines in the background.

Sources: [server.py]() | [protocol.py]()

Execution Engine

Related topics: Graph Management, Server System, Memory Management


The Execution Engine is the core component of ComfyUI responsible for processing node-based workflows. It analyzes the dependency graph, determines execution order, and runs only the nodes necessary to produce the requested outputs.

Overview

ComfyUI uses a directed acyclic graph (DAG) model where each node represents an operation and edges represent data dependencies. The execution engine processes this graph efficiently by:

  • Executing only nodes with all required inputs available
  • Skipping unchanged portions of the graph on re-execution
  • Caching intermediate results to avoid redundant computation

Sources: README.md:1-50

Execution Model

Lazy Evaluation Strategy

The execution engine employs lazy evaluation, meaning nodes are only executed when their outputs are actually needed by other nodes or requested by the user.

graph TD
    A[User Request] --> B{Output Cached?}
    B -->|Yes| C[Return Cached Result]
    B -->|No| D[Find All Dependent Nodes]
    D --> E[Check Input Availability]
    E --> F[Execute Required Nodes]
    F --> G[Cache Results]
    G --> C

Incremental Execution

One of the most powerful features of the execution engine is its ability to perform incremental execution:

  • If the same workflow is submitted twice, only the first execution runs
  • If only part of the graph changes, only that part and its downstream dependencies are re-executed
  • This dramatically improves performance for iterative workflows
"Only parts of the graph that change from each execution to the next will be executed, if you submit the same graph twice only the first will be executed. If you change the last part of the graph only the part you changed and the part that depends on it will be executed."

Sources: README.md:1-50
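One way to implement this behavior is to fingerprint each node from its class, its literal inputs, and the signatures of its upstream links, so that a change invalidates exactly the downstream chain. A sketch over the API-format graph (where links are [node_id, output_index] pairs); this is illustrative, not ComfyUI's actual cache keys:

```python
import hashlib
import json

def node_signature(graph: dict, node_id: str, memo: dict) -> str:
    """Recursively fingerprint a node; identical signatures between two
    executions mean the node's result can be reused from cache."""
    if node_id in memo:
        return memo[node_id]
    node = graph[node_id]
    parts = [node["class_type"]]
    for name, value in sorted(node["inputs"].items()):
        if isinstance(value, list):  # [upstream_node_id, output_index] link
            parts.append(f"{name}={node_signature(graph, value[0], memo)}:{value[1]}")
        else:
            parts.append(f"{name}={json.dumps(value)}")
    sig = hashlib.sha256("|".join(parts).encode()).hexdigest()
    memo[node_id] = sig
    return sig
```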

Node Execution

Input Validation

Before any node executes, the engine validates that all required inputs are present and correctly typed. Nodes that cannot satisfy their input requirements are skipped from execution.

Dependency Resolution

The execution engine uses topological sorting to determine the correct order of node execution, ensuring that all input dependencies are satisfied before a node runs.
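A topological order can be derived with Kahn's algorithm over the same link structure; this sketch is illustrative rather than ComfyUI's actual scheduler:

```python
from collections import deque

def execution_order(graph: dict) -> list[str]:
    """Kahn's algorithm over a prompt graph: nodes whose inputs reference
    other nodes (as [node_id, output_index] links) run after their sources."""
    deps = {nid: set() for nid in graph}
    for nid, node in graph.items():
        for value in node["inputs"].values():
            if isinstance(value, list):
                deps[nid].add(value[0])
    ready = deque(sorted(n for n, d in deps.items() if not d))
    order = []
    while ready:
        nid = ready.popleft()
        order.append(nid)
        for other, d in deps.items():
            if nid in d:
                d.discard(nid)
                if not d:
                    ready.append(other)
    if len(order) != len(graph):
        raise ValueError("cycle detected in workflow graph")
    return order
```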

Caching System

ComfyUI implements a sophisticated caching mechanism to avoid redundant computation.

Cache Structure

The ModelFileManager class manages caching with the following structure:

```python
self.cache: dict[str, tuple[list[dict], dict[str, float], float]] = {}
```

Each cache entry contains:

  • A list of dictionaries with file information
  • A dictionary mapping file paths to modification timestamps
  • A float representing cache creation time

Sources: app/model_manager.py:1-50

Cache Operations

| Operation | Method | Description |
| --- | --- | --- |
| Get Cache | get_cache(key, default) | Retrieves cached data by key |
| Set Cache | set_cache(key, value) | Stores data in the cache |
| Clear Cache | clear_cache() | Removes all cached entries |

Sources: app/model_manager.py:1-50

API Endpoints

The execution engine interacts with the following API endpoints for model and file management:

Model Routes

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /experiment/models | GET | List all available model folders |
| /experiment/models/{folder} | GET | List all models in a specific folder |

File Routes

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /files | GET | List files in a directory |
| /v2/userdata | GET | List user data directory contents |

The file listing endpoint supports query parameters:

  • path: Relative path within the data directory
  • recurse: Enable recursive directory traversal
  • full_info: Return detailed file information
  • split: Return path segments as array elements

Sources: app/user_manager.py:1-50 Sources: app/model_manager.py:50-100

Node Type System

ComfyUI uses a typed node system defined in comfy/comfy_types/:

Core Types

| Type | Description |
| --- | --- |
| IO.ANY | Accepts any input type ("*") |
| IO.NUMBER | Numeric values (FLOAT, INT) |
| IO.PRIMITIVE | Basic types (STRING, FLOAT, INT, BOOLEAN) |

Base Class

The ComfyNodeABC abstract base class provides:

  • Type hinting support
  • Autocomplete for node developers
  • Standardized INPUT_TYPES interface

Sources: comfy/comfy_types/README.md:1-50
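A minimal node following the INPUT_TYPES convention might look like the sketch below. It is shown standalone so it can run anywhere; in a real custom node pack the class would subclass ComfyNodeABC from comfy.comfy_types and be registered through the pack's NODE_CLASS_MAPPINGS dict. The node itself (ScaleFloat) is hypothetical:

```python
class ScaleFloat:
    """Illustrative node: multiplies a float by a factor."""

    @classmethod
    def INPUT_TYPES(cls):
        # Each input maps to a (type, options) pair.
        return {
            "required": {
                "value": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}),
                "factor": ("FLOAT", {"default": 2.0}),
            }
        }

    RETURN_TYPES = ("FLOAT",)
    FUNCTION = "scale"       # method invoked by the execution engine
    CATEGORY = "utils"

    def scale(self, value, factor):
        # Outputs are always returned as a tuple matching RETURN_TYPES.
        return (value * factor,)
```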

Workflow Processing

File Operations

Workflows can be loaded from multiple formats:

  • PNG files with embedded workflow data
  • WebP images
  • FLAC audio files
  • JSON workflow files

Dragging a generated PNG onto the webpage automatically extracts the full workflow including seeds.

Sources: README.md:1-50
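Extracting an embedded workflow from a PNG only requires walking its chunk list: after the 8-byte signature, each chunk is a big-endian length, a 4-byte type, the data, and a CRC, and workflow JSON lives in tEXt chunks. A minimal parser (CRC validation omitted for brevity):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def extract_text_chunks(data: bytes) -> dict[str, str]:
    """Parse a PNG byte stream and return all tEXt chunks as {keyword: text}."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    chunks: dict[str, str] = {}
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt data is keyword, NUL separator, then latin-1 text.
            keyword, _, text = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 8 + length + 4  # skip data and CRC
        if ctype == b"IEND":
            break
    return chunks
```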

Dynamic Prompts

The execution engine supports dynamic prompt syntax:

| Syntax | Description |
| --- | --- |
| (text:1.2) | Increase emphasis (1.2x) |
| (text:0.8) | Decrease emphasis (0.8x) |
| {wild\|card\|test} | Random selection |
| \\( | Escape parentheses |
| \\{ | Escape braces |
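The (text:weight) emphasis syntax can be tokenized with a small regex. This sketch splits a prompt into weighted spans; nesting and escaped parentheses are deliberately not handled:

```python
import re

_EMPHASIS = re.compile(r"\((.+?):([0-9.]+)\)")

def parse_emphasis(prompt: str) -> list[tuple[str, float]]:
    """Split a prompt into (text, weight) spans; text outside any
    (text:w) group gets the default weight 1.0."""
    spans: list[tuple[str, float]] = []
    pos = 0
    for m in _EMPHASIS.finditer(prompt):
        if m.start() > pos:
            spans.append((prompt[pos:m.start()], 1.0))
        spans.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        spans.append((prompt[pos:], 1.0))
    return spans
```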

Frontend Integration

The execution engine works with frontend version management to ensure compatibility:

Version String Format

provider/repository@version

Example: Comfy-Org/[email protected]

Version Pattern

^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$

Sources: app/frontend_management.py:1-50

Performance Optimizations

Graph Optimization

The execution engine optimizes performance through:

  1. Dependency Analysis: Identifies minimum required nodes
  2. Caching: Stores intermediate computation results
  3. Incremental Updates: Skips unchanged graph portions
  4. Lazy Evaluation: Only computes when outputs are needed

Parallel Execution

While nodes within the same dependency level may have execution order constraints, the engine is designed to support parallel execution where possible.

Error Handling

The execution engine provides graceful error handling:

  • Invalid paths return appropriate HTTP status codes (400, 404)
  • Missing requirements are logged with installation instructions
  • The system can continue operating even if optional components are unavailable

Sources: app/frontend_management.py:1-50

Sources: [README.md:1-50]()

Graph Management

Related topics: Execution Engine


Overview

Graph Management is a core system in ComfyUI that handles the creation, execution, caching, and manipulation of node-based computational graphs. The system orchestrates how nodes are executed, how workflows are processed, and how subgraphs are managed across the application. ComfyUI's node graph interface enables users to experiment and create complex Stable Diffusion workflows without needing to code, making graph management essential for both the UI layer and the execution engine.

The graph management system encompasses several interconnected components: the execution engine that processes node graphs, subgraph management for reusable workflow components, node replacement for runtime optimizations, and type hinting infrastructure for node development. Only parts of the graph that have an output with all the correct inputs will be executed, and only parts that change from each execution to the next will be re-executed, significantly optimizing performance for iterative workflows.

Core Architecture

graph TD
    A[User Workflow] --> B[Graph Execution Engine]
    B --> C[Node Execution]
    B --> D[Subgraph Manager]
    B --> E[Node Replace Manager]
    C --> F[Graph Utils]
    D --> G[Custom Node Subgraphs]
    D --> H[Blueprint Subgraphs]
    E --> I[Registered Replacements]
    F --> J[Graph Optimization]

Subgraph Management

Purpose and Scope

The Subgraph Manager handles the registration, loading, and lifecycle of reusable workflow components called subgraphs. Subgraphs are self-contained node definitions stored as JSON files that can be imported and used within larger workflows. This system enables code modularity and reuse, allowing custom node developers to package complex node arrangements as single, reusable units.

The manager supports two distinct sources of subgraphs:

| Source | Description | Path Location |
| --- | --- | --- |
| custom_node | Subgraphs bundled with custom node extensions | `<custom_node_dir>/subgraphs/<name>.json` |
| templates | Built-in workflow templates | blueprints/ directory |

Data Models

#### Source Enum

```python
class Source:
    custom_node = "custom_node"
    templates = "templates"
```

#### SubgraphEntry Structure

| Field | Type | Description |
| --- | --- | --- |
| source | str | Source identifier: custom_node or templates |
| path | str | Relative path of the subgraph file |
| name | str | Name of the subgraph file (without extension) |
| info | CustomNodeSubgraphEntryInfo | Additional metadata (node pack name for custom nodes) |
| data | str | Raw JSON content of the subgraph |

Sources: app/subgraph_manager.py:1-45

#### CustomNodeSubgraphEntryInfo

```python
class CustomNodeSubgraphEntryInfo(TypedDict):
    node_pack: str
    """Node pack name."""
```

Caching Strategy

The Subgraph Manager implements a caching mechanism to avoid redundant filesystem operations:

```python
class SubgraphManager:
    def __init__(self):
        self.cached_custom_node_subgraphs: dict[str, SubgraphEntry] | None = None
        self.cached_blueprint_subgraphs: dict[str, SubgraphEntry] | None = None
```

The cache is invalidated when force_reload=True is passed to the retrieval methods, enabling refresh during custom node reload scenarios.

Entry Generation

Each subgraph entry is assigned a unique identifier generated via SHA-256 hash:

```python
def _create_entry(self, file: str, source: str, node_pack: str) -> tuple[str, SubgraphEntry]:
    """Create a subgraph entry from a file path. Expects normalized path (forward slashes)."""
    entry_id = hashlib.sha256(f"{source}{file}".encode()).hexdigest()
    entry: SubgraphEntry = {
        "source": source,
        "name": os.path.splitext(os.path.basename(file))[0],
        "path": file,
        ...
    }
```

Sources: app/subgraph_manager.py:57-70

REST API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /global_subgraphs | GET | Returns all subgraphs, with optional data stripping |
| /global_subgraphs/{id} | GET | Returns a specific subgraph by its SHA-256 ID |

The get_all_subgraphs method merges results from both custom nodes and blueprints:

```python
async def get_all_subgraphs(self, loadedModules, force_reload=False):
    """Get all subgraphs from all sources (custom nodes and blueprints)."""
    custom_node_subgraphs = await self.get_custom_node_subgraphs(loadedModules, force_reload)
    blueprint_subgraphs = await self.get_blueprint_subgraphs(force_reload)
    return {**custom_node_subgraphs, **blueprint_subgraphs}
```

Node Replacement Management

Purpose

The Node Replace Manager registers runtime node substitutions that occur during graph execution. This system enables custom nodes to declare that certain node types should be replaced with alternative implementations, facilitating backward compatibility, optimization, and feature expansion without modifying existing workflows.

Registration Interface

```python
class NodeReplaceManager:
    """Manages node replacement registrations."""

    def __init__(self):
        self._replacements: dict[str, list[NodeReplace]] = {}

    def register(self, node_replace: NodeReplace):
        """Register a node replacement mapping.

        Idempotent: if a replacement with the same (old_node_id, new_node_id)
        is already registered, the duplicate is ignored. This prevents stale
        entries from accumulating when custom nodes are reloaded in the same
        process (e.g. via ComfyUI-Manager).
        """
```

Sources: app/node_replace_manager.py:25-40

Idempotent Registration

The registration process is designed to be idempotent, preventing duplicate entries when custom nodes are reloaded:

```python
existing = self._replacements.setdefault(node_replace.old_node_id, [])
for entry in existing:
    if entry.new_node_id == node_replace.new_node_id:
        logging.debug(
            "Node replacement %s -> %s already registered, ignoring duplicate.",
            ...
        )
```

This design prevents stale entries from accumulating during custom node reloads triggered by ComfyUI-Manager.

Node Type System

IO Types

ComfyUI provides a standardized type system through the IO enum for node input/output definitions:

| Type | Value | Description |
| --- | --- | --- |
| ANY | "*" | Accepts any type |
| NUMBER | "FLOAT,INT" | Numeric values |
| PRIMITIVE | "STRING,FLOAT,INT,BOOLEAN" | Basic data types |

Sources: comfy/comfy_types/README.md

ComfyNodeABC Base Class

The abstract base class provides type-hinting and autocomplete support for node developers:

```python
class ExampleNode(ComfyNodeABC):
    @classmethod
    def INPUT_TYPES(s) -> InputTypeDict:
        return {"required": {}}
```

Graph Execution Model

Execution Optimization

ComfyUI's graph execution follows specific rules that optimize performance:

  1. Complete Input Requirement: Only parts of the graph that have an output with all the correct inputs will be executed.
  2. Incremental Execution: Only parts of the graph that change from each execution to the next will be executed. If you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the part that depends on it will be executed.

This model significantly reduces computational overhead for iterative workflows where users make incremental adjustments.

Workflow Serialization

Workflows can be saved and loaded as JSON files, enabling persistence and sharing of node graph configurations. Dragging a generated PNG on the webpage or loading one will give the full workflow including seeds that were used to create it, maintaining reproducibility.

Node Struct Operations

NodeStruct Definition

```python
class NodeStruct(TypedDict):
    inputs: dict[str, str | int | float | bool | tuple[str, int]]
    class_type: str
    _meta: dict[str, str]
```

Copy Operations

The copy_node_struct function creates modified copies for graph manipulation:

```python
def copy_node_struct(node_struct: NodeStruct, empty_inputs: bool = False) -> NodeStruct:
    new_node_struct = node_struct.copy()
    if empty_inputs:
        new_node_struct["inputs"] = {}
    else:
        new_node_struct["inputs"] = node_struct["inputs"].copy()
    new_node_struct["_meta"] = node_struct["_meta"].copy()
    return new_node_struct
```

Sources: app/node_replace_manager.py:16-25

| Component | File Path | Purpose |
| --- | --- | --- |
| Graph Execution | comfy_execution/graph.py | Core graph execution engine |
| Graph Utilities | comfy_execution/graph_utils.py | Graph manipulation helpers |
| Node Helpers | node_helpers.py | Common node development utilities |
| Node Typing | comfy/comfy_types/node_typing.py | Type definitions for nodes |
| User Manager | app/user_manager.py | User data and file operations |

Best Practices

Node Development

  • Use ComfyNodeABC as the base class for custom nodes to leverage type-hinting
  • Properly define INPUT_TYPES with correct type annotations
  • Register node replacements idempotently to support hot-reloading

Workflow Optimization

  • Structure workflows to minimize dependencies between unchanged sections
  • Use subgraphs for reusable workflow patterns
  • Leverage the incremental execution model by making changes at graph endpoints

Custom Node Packaging

  • Place subgraphs in the designated subgraphs/ directory within custom node packages
  • Use the node pack name in CustomNodeSubgraphEntryInfo for proper namespacing
  • Follow JSON format for subgraph definition files

Sources: [app/subgraph_manager.py:1-45]()

Model Loading and Detection

Related topics: Diffusion Models, Memory Management



Source: https://github.com/Comfy-Org/ComfyUI / Human Manual

Diffusion Models

Related topics: Model Loading and Detection, Text Processing and Encoders


Overview

Diffusion models in ComfyUI are probabilistic generative models that learn to reverse a forward diffusion process. By gradually denoising random noise through a learned reverse process, these models generate high-quality images, videos, and audio from latent representations.

ComfyUI implements a modular architecture supporting multiple diffusion model families:

| Model Family | Domain | Primary File |
| --- | --- | --- |
| Stable Diffusion | Image | comfy/ldm/modules/diffusionmodules/model.py |
| Stable Diffusion XL | Image | comfy/ldm/modules/diffusionmodules/openaimodel.py |
| Flux | Image | comfy/ldm/flux/model.py |
| Wan | Video | comfy/ldm/wan/model.py |
| Hunyuan Video | Video | comfy/ldm/hunyuan_video/model.py |
| CogVideo | Video | comfy/ldm/cogvideo/model.py |

Architecture

Core Diffusion Module Structure

graph TD
    A[Latent Input] --> B[Diffusion Model]
    B --> C[UNet Architecture]
    C --> D[Time Embedding]
    C --> E[Residual Blocks]
    C --> F[Attention Layers]
    D --> G[Denoised Output]
    E --> G
    F --> G
    
    H[Sampler] --> I[Noise Schedule]
    I --> B
    G --> J[VAE Decode]
    J --> K[Final Output]

Supported Model Types

ComfyUI natively supports state-of-the-art open-source diffusion models across multiple domains:

#### Image Generation Models

| Model Type | Description | Documentation Link |
| --- | --- | --- |
| Stable Diffusion 1.5 | Latent diffusion model for image generation | Examples |
| Stable Diffusion XL | Enhanced SD with improved quality | Included in core |
| SDXL Turbo / LCM | Fast convergence models | LCM Examples |
| Stable Diffusion 3 / Flux | MM-DiT architecture for superior quality | Flux Examples |
| Hunyuan DiT | Tencent's diffusion transformer | Included in core |
| Ollin | Custom high-quality diffusion | Available via community |
| Wan | Wan 2.1 and Wan 2.2 video models | Wan Examples |
| HiDream | Advanced image generation | HiDream Examples |

Sources: README.md

#### Video Generation Models

| Model Type | Description | Documentation Link |
| --- | --- | --- |
| Stable Video Diffusion | Frame interpolation and video generation | Video Examples |
| Mochi | High-quality video synthesis | Mochi Examples |
| LTX-Video | Lightweight video diffusion | LTX Examples |
| Hunyuan Video | Tencent's video generation | Hunyuan Examples |
| Wan 2.1/2.2 | Comprehensive video models | Wan Examples |

Sources: README.md

#### Audio Models

| Model Type | Description |
| --- | --- |
| Stable Audio | Audio generation and synthesis |

Sources: README.md

#### Image Editing Models

| Model Type | Description | Link |
| --- | --- | --- |
| Omnigen 2 | Unified image editing | Examples |
| Flux Kontext | In-context image editing | Examples |
| HiDream E1.1 | Advanced editing capabilities | Examples |
| Qwen Image Edit | Multi-modal editing | Examples |

Sources: README.md

Model Loading Architecture

Base Diffusion Model Files

| File | Purpose |
| --- | --- |
| comfy/ldm/modules/diffusionmodules/model.py | Core SD1.5/SD2.x diffusion model implementation |
| comfy/ldm/modules/diffusionmodules/openaimodel.py | SDXL and newer architecture variants |
| comfy/ldm/flux/model.py | Flux/MM-DiT architecture implementation |
| comfy/ldm/wan/model.py | Wan video diffusion model |
| comfy/ldm/hunyuan_video/model.py | Hunyuan video diffusion |
| comfy/ldm/cogvideo/model.py | CogVideo model implementation |

Model Loading Workflow

graph LR
    A[Model Checkpoint] --> B[Model Loader Node]
    B --> C[Load State Dict]
    C --> D[Architecture Detection]
    D --> E{Router}
    E -->|SD 1.5/2.x| F[diffusionmodules/model.py]
    E -->|SDXL| G[diffusionmodules/openaimodel.py]
    E -->|Flux| H[flux/model.py]
    E -->|Video| I[wan/hunyuan/cogvideo/model.py]

Sampling System

Sampler Implementation

The sampling system is implemented in comfy/samplers.py and comfy/sample.py.

| Component | File | Function |
| --- | --- | --- |
| SamplerFactory | comfy/samplers.py | Creates sampler instances |
| KSampler | comfy/samplers.py | Main sampling loop implementation |
| CFGGuider | comfy/samplers.py | Classifier-free guidance implementation |
| Sampler | comfy/sample.py | Orchestrates the sampling process |

Sampling Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| steps | int | Number of denoising steps |
| cfg | float | Classifier-free guidance scale |
| sampler_name | str | Sampler algorithm (e.g., euler, dpmpp_2m) |
| scheduler | str | Noise schedule type |
| denoise | float | Denoising strength (0.0-1.0) |

Available Samplers

ComfyUI supports multiple sampling algorithms:

| Sampler Category | Algorithms |
| --- | --- |
| Euler Family | euler, euler_ancestral, euler_a |
| DPM++ | dpmpp_2m, dpmpp_2m_karras, dpmpp_sde, dpmpp_sde_karras |
| DDIM | ddim |
| UniPC | unipc |
| LCM | lcm (for LCM/SDXL-Turbo models) |

Noise Schedules

| Scheduler | Description |
| --- | --- |
| normal | Standard noise schedule |
| karras | Optimized schedule for better quality |
| exponential | Exponential decay schedule |
| simple | Simplified schedule |
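As an illustration, the karras scheduler interpolates linearly in sigma^(1/rho) space (rho = 7 in the original Karras et al. formulation), which concentrates steps at low noise levels. The default sigma range below is illustrative, not ComfyUI's exact values:

```python
def karras_sigmas(n: int, sigma_min: float = 0.03, sigma_max: float = 14.6,
                  rho: float = 7.0) -> list[float]:
    """Karras noise schedule: sigma_i = (s_max^(1/rho) + t_i * (s_min^(1/rho)
    - s_max^(1/rho)))^rho for t_i evenly spaced in [0, 1]."""
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    sigmas = [
        (max_inv + i / (n - 1) * (min_inv - max_inv)) ** rho
        for i in range(n)
    ]
    return sigmas + [0.0]  # samplers append a final zero sigma
```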

Advanced Features

Textual Inversion

ComfyUI supports textual inversion embeddings for style and concept customization.

Sources: README.md

LoRA Support

| LoRA Type | Description |
| --- | --- |
| Regular LoRA | Standard low-rank adaptation |
| LoCon | LoRA extended to convolutional layers |
| LoHa | Low-rank Hadamard product adaptation |

Sources: README.md
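The weight update behind regular LoRA is the standard low-rank formula W' = W + (alpha / rank) * B @ A. Shown here with pure-Python matrices for clarity; real code operates on torch tensors:

```python
def apply_lora(W, A, B, alpha: float, rank: int):
    """Merge a low-rank update into a weight matrix.

    Shapes: W is d_out x d_in, B is d_out x rank, A is rank x d_in,
    so B @ A has the same shape as W.
    """
    scale = alpha / rank
    rows, cols = len(W), len(W[0])
    delta = [
        [sum(B[i][k] * A[k][j] for k in range(rank)) for j in range(cols)]
        for i in range(rows)
    ]
    return [
        [W[i][j] + scale * delta[i][j] for j in range(cols)]
        for i in range(rows)
    ]
```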

Hypernetworks

Custom hypernetworks can be loaded and applied to modify model behavior.

Sources: README.md

ControlNet and T2I-Adapter

Structural guidance for diffusion models through:

| Type | Description |
| --- | --- |
| ControlNet | Conditioning via additional neural networks |
| T2I-Adapter | Lightweight adapters for structure guidance |

Sources: README.md

Workflow Composition

Node Graph Architecture

graph TD
    A[Load Checkpoint] --> B[CLIP Text Encode]
    B --> C[KSampler]
    A --> D[VAE Encode]
    D --> C
    C --> E[VAE Decode]
    E --> F[Save Image]
    
    G[Positive Prompt] --> B
    H[Negative Prompt] --> B

Example Workflows

| Workflow | Purpose | Link |
| --- | --- | --- |
| txt2img | Text-to-image generation | Examples |
| img2img | Image-to-image transformation | Included in core |
| Hires Fix | Two-pass upscaling | Hires Fix |
| Inpainting | Selective regeneration | Inpaint |
| Area Composition | Multi-region composition | Area Composition |
| Upscale | Super-resolution | Upscale Models |
| Model Merging | Combine model weights | Model Merging |
| GLIGEN | Grounded generation | GLIGEN |

Sources: README.md

Performance Optimization

Latent Preview with TAESD

ComfyUI provides real-time preview capabilities using TAESD (Tiny AutoEncoder for Stable Diffusion):

| Feature | Description |
| --- | --- |
| Low-res Preview | Default fast latent preview |
| TAESD Preview | High-quality previews |
| --preview-method | CLI flag to select the preview method |

To enable TAESD previews:

  1. Download the decoder files from the taesd repository
  2. Place the files in the models/vae_approx directory
  3. Launch with --preview-method taesd

Sources: README.md

GPU Support

| Platform | Installation Command |
| --- | --- |
| NVIDIA (CUDA 12.1) | pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121 |
| NVIDIA (CUDA 12.4) | pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124 |
| NVIDIA (CUDA 12.6) | pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126 |
| AMD (ROCm) | pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1 |
| Intel (XPU) | pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu |
| Apple Silicon | Install PyTorch nightly per the Apple Developer Guide |

Sources: README.md

Memory Efficient Attention

For AMD GPUs with ROCm, experimental memory efficient attention can be enabled:

TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention

For potential speed improvements:

PYTORCH_TUNABLEOP_ENABLED=1 python main.py

Sources: README.md

Execution Model

Partial Graph Execution

ComfyUI's execution engine optimizes diffusion model runs:

Only parts of the graph that have an output with all the correct inputs will be executed.
Only parts of the graph that change from each execution to the next will be executed. If you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the part that depends on it will be re-executed.

Sources: README.md

Execution Flow

graph TD
    A[Submit Workflow] --> B[Analyze Dependencies]
    B --> C[Identify Executable Nodes]
    C --> D[Execute Required Nodes]
    D --> E[Cache Results]
    E --> F[Return Outputs]
    
    G[Submit Same Workflow] --> H{Cached?}
    H -->|Yes| I[Skip Execution]
    H -->|No| J[Execute Changed Nodes]
    I --> F
    J --> K[Update Cache]
    K --> F

API Integration

API Nodes

ComfyUI includes optional API nodes for accessing paid models from external providers through the official Comfy API.

To disable API nodes:

python main.py --disable-api-nodes

Sources: README.md

Offline Operation

ComfyUI works fully offline for core functionality:

Works fully offline: core will never download anything unless you want it to.

Sources: README.md

Release and Versioning

ComfyUI follows a structured release cycle:

| Release Type | Frequency | Description |
| --- | --- | --- |
| Major stable | ~Every 2 weeks | New stable versions (e.g., v0.7.0) |
| Patch | As needed | Backported fixes for stable releases |
| Nightly | Daily | Cutting-edge updates from the master branch |

Commits outside of the stable release tags may be very unstable and break many custom nodes.

Sources: README.md

See Also

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

Text Processing and Encoders

Related topics: Diffusion Models


Overview

Text processing and encoding in ComfyUI provides the mechanism to convert human-readable text prompts into numerical representations (embeddings) that can be consumed by diffusion models. This system supports various model architectures including SD1.x, SDXL, Flux, and modern multimodal models.

Sources: README.md

Architecture

graph TD
    A[User Text Prompt] --> B[Text Encoding Nodes]
    B --> C[CLIPTextEncode]
    B --> D[CLIP Text Encode Hires]
    B --> E[Model-Specific Encoders]
    C --> F[CLIP Models]
    E --> G[Flux Encoder]
    E --> H[T5 Encoder]
    E --> I[Llama Encoder]
    F --> J[Embedding Tensors]
    G --> J
    H --> J
    I --> J
    J --> K[Diffusion Model]

CLIP Models

The comfy/clip_model.py module provides the foundational CLIP model implementation used across different model variants.

Sources: comfy/clip_model.py

SD1 CLIP

The SD1 CLIP implementation (comfy/sd1_clip.py) handles text encoding for Stable Diffusion 1.x models.

Sources: comfy/sd1_clip.py

SDXL CLIP

The SDXL CLIP implementation (comfy/sdxl_clip.py) extends text encoding capabilities for SDXL models with additional prompt handling.

Sources: comfy/sdxl_clip.py

Text Encoders Module

The comfy/text_encoders/ directory contains specialized encoders for modern model architectures.

Sources: comfy/text_encoders/flux.py, comfy/text_encoders/t5.py, comfy/text_encoders/llama.py

Flux Encoder

Handles text encoding for Flux models, typically combining CLIP and T5 encodings.

T5 Encoder

Implements T5-based text encoding for models requiring transformer-based text processing.

Llama Encoder

Provides Llama-based text encoding for advanced text understanding capabilities.

Embeddings System

ComfyUI supports custom embeddings stored in the models/embeddings directory.

Sources: README.md

Using Custom Embeddings

Embeddings can be referenced in the CLIPTextEncode node using the following syntax:

embedding:embedding_filename.pt

The .pt extension can be omitted when specifying embeddings.
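The lookup described above can be illustrated with a small resolver sketch. The `embedding:name` syntax and the optional `.pt` extension come from the documentation; the matching logic below (including the `.safetensors` fallback) is an assumption for illustration, not ComfyUI's actual tokenizer.

```python
import re
from pathlib import Path

# Matches "embedding:name" references in a prompt string.
EMBED_RE = re.compile(r"embedding:([\w.\-]+)")

# Resolve each reference against the embeddings directory; the ".pt"
# suffix may be omitted in the prompt. Returns name -> path (or None).
def resolve_embeddings(prompt: str, embeddings_dir: str) -> dict:
    found = {}
    for name in EMBED_RE.findall(prompt):
        stem = name[:-3] if name.endswith(".pt") else name
        candidates = [Path(embeddings_dir) / f"{stem}.pt",
                      Path(embeddings_dir) / f"{stem}.safetensors"]
        found[name] = next((str(p) for p in candidates if p.is_file()), None)
    return found
```

Unresolvable references map to `None`, which a caller could surface as a warning rather than failing the whole prompt.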

Model Integration

Text encoders are integrated into the broader model system through comfy/sd.py, which coordinates between different model components and their respective encoders.

Sources: comfy/sd.py

Supported Models

| Model Family | Text Encoder(s) | Notes |
| --- | --- | --- |
| SD 1.x | CLIP | Standard text encoding |
| SDXL | CLIP | Dual CLIP support |
| Flux | CLIP + T5 | Combined encoding approach |
| HunyuanDiT | Custom | Model-specific implementation |

Text Encoding Workflow

graph LR
    A1[Positive Prompt] --> B[CLIPTextEncode]
    A2[Negative Prompt] --> C[CLIPTextEncode]
    B --> D[Positive Embeddings]
    C --> E[Negative Embeddings]
    D --> F[KSampler]
    E --> F
    F --> G[Image Generation]

Node Types

CLIPTextEncode

The primary node for encoding text prompts into embeddings.

Input Parameters:

  • text: The text prompt to encode
  • clip: The CLIP model to use for encoding

Output:

  • CONDITIONING: The encoded text representation
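The node interface above can be made concrete with a minimal node class in ComfyUI's custom-node style (`INPUT_TYPES` / `RETURN_TYPES` / `FUNCTION`). The `clip.tokenize` and `clip.encode_from_tokens` calls mirror the pattern used by the built-in CLIPTextEncode node; treat the exact method names as assumptions to verify against your ComfyUI version.

```python
# Minimal sketch of a text-encoding node in ComfyUI's custom-node style.
# The CLIP method names follow the built-in CLIPTextEncode pattern and
# should be verified against the ComfyUI version you target.
class SimpleTextEncode:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "text": ("STRING", {"multiline": True}),
            "clip": ("CLIP",),
        }}

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "encode"
    CATEGORY = "conditioning"

    def encode(self, text, clip):
        tokens = clip.tokenize(text)
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        # ComfyUI conditioning is a list of [tensor, extras] pairs
        return ([[cond, {"pooled_output": pooled}]],)
```

Custom nodes like this are registered through a `NODE_CLASS_MAPPINGS` dict in a custom-node package; the class itself has no ComfyUI imports, which is why the node API stays easy to stub and test.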

Specialized Encoding Nodes

| Node | Purpose | Use Case |
| --- | --- | --- |
| CLIP Text Encode Hires | High-resolution-aware encoding | Multi-pass workflows |
| Model-specific encode nodes | Architecture-specific handling | Flux, SDXL, etc. |

Best Practices

  1. Prompt Formatting: Use proper syntax for weight adjustments (e.g., (text:1.2))
  2. Embedding Loading: Place custom embeddings in models/embeddings
  3. Model Matching: Ensure the text encoder matches the generation model
  4. Batch Processing: Consider CLIP sequence-length limitations

Related modules:

  • Model Management: app/model_manager.py handles loading and caching of text encoder models
  • Type System: comfy/comfy_types/ provides type hints for node development, including IO types for text processing
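The `(text:1.2)` weight syntax mentioned above can be sketched as a small parser. This handles only single-level `(text:weight)` groups; nested or escaped groups, which real prompt parsers also support, are deliberately out of scope here.

```python
import re

# Matches single-level "(text:1.2)" weight groups.
WEIGHT_RE = re.compile(r"\(([^():]+):([\d.]+)\)")

# Split a prompt into (text, weight) pairs; unweighted spans default to 1.0.
def parse_weights(prompt: str) -> list:
    parts, last = [], 0
    for m in WEIGHT_RE.finditer(prompt):
        plain = prompt[last:m.start()].strip(" ,")
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1), float(m.group(2))))
        last = m.end()
    tail = prompt[last:].strip(" ,")
    if tail:
        parts.append((tail, 1.0))
    return parts
```

For example, `"a photo of (a cat:1.2), outdoors"` yields three spans, with only `a cat` weighted above the default.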

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

Memory Management

Related topics: Model Loading and Detection, Execution Engine


ComfyUI implements a smart memory management system that enables efficient execution of large AI models on hardware with limited VRAM. This system is fundamental to ComfyUI's ability to run complex workflows on consumer-grade GPUs.

Overview

The memory management subsystem in ComfyUI handles the lifecycle of model data in GPU and system memory. Its primary objectives include:

  • Automatic model offloading: Dynamically moving models between GPU VRAM and system RAM
  • VRAM optimization: Enabling execution on GPUs with as little as 1GB of VRAM
  • Execution caching: Storing partial execution results to avoid redundant computation
  • Memory cleanup: Properly releasing resources when models are no longer needed

Sources: README.md

Architecture Overview

graph TD
    A[Workflow Execution] --> B[Memory Manager]
    B --> C{VRAM Available?}
    C -->|Yes| D[Load Model to GPU]
    C -->|No| E[Smart Offloading]
    E --> F[Partial GPU Loading]
    F --> G[System RAM Swap]
    D --> H[Execute Nodes]
    G --> H
    H --> I[Cache Results]
    I --> J[Memory Cleanup]
    J --> K[Free VRAM]

Key Memory Management Features

Smart Offloading

ComfyUI can automatically run large models on GPUs with limited VRAM through intelligent offloading strategies. When a model exceeds available VRAM, the system selectively keeps portions of the model in GPU memory while swapping other components to system RAM.
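A simplified version of this placement decision: walk the model's layers in order and keep them on the GPU until a VRAM budget is exhausted, assigning the rest to system RAM. Real offloading also weighs transfer cost and layer access order; this sketch models only the budget split.

```python
# Greedy split of model layers between GPU and CPU under a VRAM budget.
# Sizes are in bytes; a real scheduler would also account for activation
# memory and transfer overhead, which this sketch ignores.
def place_layers(layer_sizes: list, vram_budget: int) -> list:
    placement, used = [], 0
    for size in layer_sizes:
        if used + size <= vram_budget:
            placement.append("gpu")
            used += size
        else:
            placement.append("cpu")  # offloaded, swapped in when needed
    return placement
```

With a budget of 8 units and three 4-unit layers, the first two stay resident and the third is offloaded.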

Low VRAM Support

ComfyUI supports execution on GPUs with as little as 1GB of VRAM. This is achieved through tiered offloading strategies:

| VRAM Level | Strategy |
| --- | --- |
| 1GB+ | Full offloading with sequential layer execution |
| 4GB+ | Partial offloading with larger batch sizes |
| 8GB+ | Minimal offloading; models stay loaded |
| 16GB+ | Multiple models can stay in memory simultaneously |
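The tiers above can be expressed as a simple selection function. The thresholds mirror the table and are illustrative only; ComfyUI's real heuristics also consider model size and flags such as `--lowvram` and `--novram`.

```python
# Map available VRAM (in GB) to an offloading strategy, following the
# tiers in the table above. Illustrative thresholds, not ComfyUI's code.
def offload_strategy(vram_gb: float) -> str:
    if vram_gb >= 16:
        return "keep multiple models resident"
    if vram_gb >= 8:
        return "minimal offloading"
    if vram_gb >= 4:
        return "partial offloading"
    return "full offloading, sequential layers"
```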

Execution Optimization

The system implements intelligent execution optimization where:

  1. Only changed graph segments execute - If you submit the same graph twice, only the first execution runs
  2. Dependency tracking - Only parts of the graph that depend on changed nodes are re-executed
  3. Partial graph execution - Only graph segments with all correct inputs are executed

Sources: README.md

Model Loading Strategies

ComfyUI supports multiple model formats and loading strategies:

Supported Model Formats

| Format | Description | Safety |
| --- | --- | --- |
| .safetensors | Safe tensor format, recommended | ✅ Safe |
| .ckpt | Pickle-based checkpoint files | ⚠️ Can embed arbitrary code |
| .pt / .pth | PyTorch state dicts (pickle-based) | ⚠️ Legacy |

Memory-Efficient Loading

The system implements safe loading for all model formats, preventing arbitrary code execution from malicious model files.
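A minimal version of that policy: route `.safetensors` files through the safe loader and treat pickle-based formats as requiring a restricted load (e.g. `torch.load(..., weights_only=True)` in recent PyTorch). The function below sketches only the routing decision, not the loading itself.

```python
from pathlib import Path

# Pickle-based formats can execute arbitrary code on load unless
# restricted; safetensors stores raw tensors only.
SAFE_SUFFIXES = {".safetensors"}
PICKLE_SUFFIXES = {".ckpt", ".pt", ".pth"}

def loading_policy(path: str) -> str:
    suffix = Path(path).suffix.lower()
    if suffix in SAFE_SUFFIXES:
        return "safetensors"          # no code execution possible
    if suffix in PICKLE_SUFFIXES:
        return "restricted-pickle"    # e.g. torch.load(weights_only=True)
    raise ValueError(f"unrecognized model format: {suffix}")
```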

GPU Memory Options

Command Line Options

ComfyUI provides several command-line options for memory management:

# CPU-only execution (slowest, works without GPU)
python main.py --cpu

# Force specific GPU device
python main.py --device cuda:0

Preview Method Configuration

For latent preview generation, ComfyUI supports different preview methods that vary in memory usage:

| Method | Quality | Memory Usage | Description |
| --- | --- | --- | --- |
| auto | Low | Minimal | Default fast latent preview |
| taesd | High | Low | TAESD decoder for high-quality previews |

To enable high-quality previews:

# Download TAESD decoder files to models/vae_approx/
# Then launch with:
python main.py --preview-method taesd

Sources: README.md

Memory Management Classes

Based on the module structure, the memory management system consists of several key components:

classDiagram
    class MemoryManager {
        +manage_vram()
        +offload_model()
        +load_model()
    }
    class ModelManager {
        +register_model()
        +get_model()
        +unload_model()
    }
    class PinnedMemory {
        +allocate_pinned()
        +transfer_to_device()
        +free_pinned()
    }
    class PixelSpaceConverter {
        +to_latent()
        +to_pixel()
        +convert_tensor()
    }

Module Responsibilities

| Module | Purpose |
| --- | --- |
| memory_management.py | Core VRAM management and model placement logic |
| model_management.py | Model lifecycle, registration, and caching |
| pinned_memory.py | Pinned memory allocation for efficient CPU-GPU transfers |
| pixel_space_convert.py | Conversion between pixel and latent image spaces |

Execution Flow with Memory Management

sequenceDiagram
    participant User
    participant Workflow
    participant MemoryManager
    participant ModelCache
    participant GPU
    participant SystemRAM

    User->>Workflow: Submit Workflow
    Workflow->>MemoryManager: Request Model
    MemoryManager->>ModelCache: Check Cache
    alt Model in Cache
        ModelCache-->>MemoryManager: Return Model Ref
    else Model Not Cached
        MemoryManager->>GPU: Check VRAM
        alt Sufficient VRAM
            GPU-->>MemoryManager: OK
            MemoryManager->>GPU: Load Model
        else Insufficient VRAM
            MemoryManager->>SystemRAM: Offload Parts
            MemoryManager->>GPU: Load Partial Model
        end
    end
    MemoryManager-->>Workflow: Model Ready
    Workflow->>GPU: Execute Nodes
    GPU-->>Workflow: Results

Best Practices

  1. Close unused workflows - Free memory for new models
  2. Use .safetensors format - Safer and often faster loading
  3. Batch similar operations - Reduces model loading/unloading cycles
  4. Monitor VRAM usage - Use system tools to track memory consumption

Configuration Files

ComfyUI supports model path configuration through extra_model_paths.yaml:

# Example extra_model_paths.yaml (shape follows the bundled
# extra_model_paths.yaml.example; verify keys against your ComfyUI version)
a111:
    base_path: /path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora

This allows sharing model directories with other Stable Diffusion installations, reducing duplicate storage.

Sources: README.md


Doramagic Pitfall Log

Doramagic extracted 7 source-linked risk signals. Review them before installing or handing real data to the project.

1. Project risk: Project risk needs validation

  • Severity: medium
  • Finding: Project risk is backed by a source signal: Project risk needs validation. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: identity.distribution | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | repo=comfyui; install=comfy-cli

2. Capability assumption: README/documentation is current enough for a first validation pass.

  • Severity: medium
  • Finding: README/documentation is current enough for a first validation pass.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: capability.assumptions | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | README/documentation is current enough for a first validation pass.

3. Maintenance risk: Maintainer activity is unknown

  • Severity: medium
  • Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | last_activity_observed missing

4. Security or permission risk: no_demo

  • Severity: medium
  • Finding: no_demo
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: downstream_validation.risk_items | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | no_demo; severity=medium

5. Security or permission risk: no_demo

  • Severity: medium
  • Finding: no_demo
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: risks.scoring_risks | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | no_demo; severity=medium

6. Maintenance risk: issue_or_pr_quality=unknown

  • Severity: low
  • Finding: issue_or_pr_quality=unknown.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | issue_or_pr_quality=unknown

7. Maintenance risk: release_recency=unknown

  • Severity: low
  • Finding: release_recency=unknown.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | release_recency=unknown

Source: Doramagic discovery, validation, and Project Pack records

Community Discussion Evidence

Doramagic exposes 12 project-level external discussion links separately from official documentation. These links are review inputs, not standalone proof that the project is production-ready. Open the linked issues or discussions before using ComfyUI with real data or production workflows.

Source: Project Pack community evidence and pitfall evidence