Doramagic Project Pack · Human Manual
ComfyUI
Related topics: Installation Guide, System Architecture
Introduction to ComfyUI
Overview
ComfyUI is a powerful, modular AI creation engine designed for visual professionals who demand precise control over every model, parameter, and output. It provides a node graph-based interface that enables users to generate images, videos, 3D models, audio, and other AI-driven content with granular control over the entire generation pipeline.
Sources: README.md
Key Characteristics
| Characteristic | Description |
|---|---|
| Type | AI Generation Engine |
| Interface | Node Graph / Visual Programming |
| License | Open Source |
| Platforms | Windows, Linux, macOS, Cloud |
| GPU Support | NVIDIA, AMD (ROCm), Intel, Apple Silicon, Ascend, Iluvatar |
ComfyUI natively supports the latest open-source state-of-the-art models and provides API nodes for accessing closed-source models such as Seedance, Hunyuan3D, and others through the online Comfy API.
Sources: README.md
Core Features
Model Support
ComfyUI provides extensive support for various AI model types:
| Model Category | Examples | Documentation Link |
|---|---|---|
| Stable Diffusion | SD 1.x, SD 2.x, SDXL, SD 3.x | Examples |
| ControlNet/T2I-Adapter | Various preprocessors | ControlNet Guide |
| LoRA/LyCORIS | Regular, locon, loha variants | LoRA Guide |
| Upscaling Models | ESRGAN, SwinIR, Swin2SR | Upscale Guide |
| Latent Models | LCM models and Loras | LCM Guide |
Sources: README.md
Advanced Capabilities
- Textual Inversion & Hypernetworks: Advanced embedding techniques for custom styling
- Area Composition: Multi-region generation with precise control
- Inpainting: Both regular and inpainting-specific models supported
- Model Merging: Combine multiple models for unique outputs
- Latent Previews: Real-time latent previews during sampling, with optional TAESD decoding for higher quality
- Workflow Export: Save/load workflows as JSON, embed in PNG/WebP/FLAC metadata
- Offline Operation: Core functionality works completely offline
Sources: README.md
System Architecture
High-Level Architecture
graph TD
subgraph "Frontend Layer"
UI[User Interface]
WS[WebSocket Handler]
end
subgraph "API Layer"
REST[REST API Routes]
INT[Internal Routes]
end
subgraph "Core Execution Engine"
SG[Scheduling Graph]
EX[Execution Engine]
NODE[Node Registry]
end
subgraph "Model Management"
MM[Model Manager]
LM[Loader Manager]
end
subgraph "Backend Services"
UM[User Manager]
FM[Frontend Manager]
end
UI <--> WS
WS <--> REST
REST <--> INT
REST <--> SG
SG <--> EX
EX <--> NODE
MM <--> LM
UM <--> FM
style UI fill:#e1f5fe
style EX fill:#fff3e0
style MM fill:#e8f5e9
Node Type System
ComfyUI uses a typed node system for type-safe workflow construction. The comfy_types module provides abstract base classes and type hints:
classDiagram
class ComfyNodeABC {
<<abstract>>
+INPUT_TYPES() InputTypeDict
+FUNCTION() str
+OUTPUT_NODE() bool
+CATEGORY() str
+RETURN_TYPES() tuple
}
class CheckLazyMixin {
<<mixin>>
}
class IO {
<<enum>>
+ANY: "*"
+NUMBER: "FLOAT,INT"
+PRIMITIVE: "STRING,FLOAT,INT,BOOLEAN"
}
ComfyNodeABC <-- CheckLazyMixin
ComfyNodeABC ..> IO : uses
Sources: comfy/comfy_types/README.md
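To make the contract concrete, here is a minimal sketch of a node class exposing the attributes listed above. The class name and its pass-through logic are illustrative; a real custom node would typically subclass `ComfyNodeABC` and be registered from a `custom_nodes` package, but the class is shown standalone so the shape is visible without ComfyUI installed.

```python
# Illustrative node class following the ComfyNodeABC attribute contract.
# "PassThroughLatent" is a hypothetical name for this sketch.
class PassThroughLatent:
    @classmethod
    def INPUT_TYPES(cls):
        # "required" maps input names to (type, options...) tuples
        return {"required": {"samples": ("LATENT",)}}

    RETURN_TYPES = ("LATENT",)   # one entry per output socket
    FUNCTION = "run"             # name of the method the engine invokes
    CATEGORY = "latent/examples"

    def run(self, samples):
        # node logic goes here; return a tuple matching RETURN_TYPES
        return (samples,)

# Registration mapping ComfyUI scans for in custom node packages:
NODE_CLASS_MAPPINGS = {"PassThroughLatent": PassThroughLatent}
```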
Execution Model
ComfyUI employs a smart execution model that optimizes workflow processing:
graph LR
A[Submit Workflow] --> B{Changed?}
B -->|First Run| C[Execute All Valid Paths]
B -->|Unchanged| D[Skip Execution]
B -->|Partial Change| E[Execute Changed + Dependencies]
C --> F[Output Results]
E --> F
D --> F
Execution Rules:
- Only parts of the graph with all correct inputs will be executed
- Only parts that change between executions are re-run
- Submitting the same graph twice executes only the first instance
Sources: README.md
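These rules can be modeled with a small fingerprint cache (a simplified sketch, not ComfyUI's actual executor): each node is hashed together with the fingerprints of its inputs, and a node runs only when its fingerprint is not already cached, so unchanged subgraphs are skipped and a change re-runs only the changed node and its dependents.

```python
import hashlib
import json

def fingerprint(node_id, graph, memo):
    """Hash a node's parameters together with its upstream fingerprints."""
    if node_id in memo:
        return memo[node_id]
    node = graph[node_id]
    upstream = [fingerprint(dep, graph, memo) for dep in node.get("inputs", [])]
    digest = hashlib.sha256(
        json.dumps([node["params"], upstream], sort_keys=True).encode()
    ).hexdigest()
    memo[node_id] = digest
    return digest

def execute(graph, cache):
    """Run only nodes whose fingerprint is new; return executed node ids."""
    executed, memo = [], {}
    for node_id in graph:
        fp = fingerprint(node_id, graph, memo)
        if cache.get(node_id) != fp:
            executed.append(node_id)   # a real executor would run the node here
            cache[node_id] = fp
    return executed
```

Submitting the same graph twice executes nothing the second time; changing a mid-graph parameter re-runs that node plus everything downstream of it.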
Installation
Supported Platforms
| Platform | GPU Options | Installation Type |
|---|---|---|
| Windows | NVIDIA, AMD, Intel, CPU | Portable Package, Manual Install |
| Linux | NVIDIA, AMD (ROCm), Intel, CPU | Manual Install |
| macOS | Apple Silicon (M1/M2), CPU | Manual Install |
| Cloud | Hosted GPUs | Comfy Cloud |
Sources: README.md
Quick Start Commands
# Windows/Linux Manual Installation
pip install -r requirements.txt
python main.py
# NVIDIA GPU (Stable)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
# NVIDIA GPU (Nightly)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu132
# AMD GPU (ROCm)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm6.1
# Apple Silicon
# Install PyTorch nightly per Apple Developer Guide
pip install -r requirements.txt
ComfyUI-Manager Setup
ComfyUI-Manager provides extension management capabilities:
# Install dependencies
pip install -r manager_requirements.txt
# Enable with flags
python main.py --enable-manager
| Manager Flag | Description |
|---|---|
| --enable-manager | Enable ComfyUI-Manager |
| --enable-manager-legacy-ui | Use legacy manager UI |
| --disable-manager-ui | Keep background features only |
Sources: README.md
User Interface
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Ctrl+Z / Ctrl+Y | Undo/Redo |
| Ctrl+S | Save workflow |
| Ctrl+O | Load workflow |
| Ctrl+A | Select all nodes |
| Alt+C | Collapse/uncollapse selected |
| Ctrl+M | Mute/unmute selected |
| Ctrl+B | Bypass selected (reconnect wires) |
| Delete / Backspace | Delete selected nodes |
| Space + Drag | Pan canvas |
| Ctrl+Click / Shift+Click | Add to selection |
| Ctrl+C / Ctrl+V | Copy/paste nodes |
| Ctrl+Shift+V | Paste with connections |
| Shift+Drag | Move multiple nodes |
| Ctrl+D | Load default graph |
| Alt+Plus / Alt+Minus | Zoom in/out |
| P | Pin/unpin nodes |
| Ctrl+G | Group selected |
| Double-Click | Open node search palette |
Sources: README.md
Preview Methods
ComfyUI supports multiple preview rendering methods:
| Method | Quality | Performance | Setup |
|---|---|---|---|
| auto | Variable | Variable | Default |
| taesd | High | Fast | Download TAESD decoder models |
To enable high-quality previews with TAESD:
1. Download the TAESD decoder files into the models/vae_approx folder.
2. Launch with the preview flag:
python main.py --preview-method taesd
Sources: README.md
API and Integration
API Structure
ComfyUI provides a comprehensive REST API for external integrations:
graph TD
EXT[External Application] -->|HTTP/REST| API[API Server]
API -->|v2/userdata| UM[User Data Management]
API -->|v2/modelinfo| MM[Model Info]
API -->|v2/history| H[Execution History]
EXT -->|WebSocket| WS[WebSocket Connection]
WS -->|Real-time| STATUS[Execution Status]
Internal Routes
All routes under /internal are designated for internal ComfyUI use only. These routes may change at any time without notice and are not intended for external application use.
Sources: api_server/routes/internal/README.md
User Data API
The user data management system provides secure file operations:
| Endpoint | Method | Description |
|---|---|---|
| /v2/userdata | GET | List directory contents |
| /v2/userdata/{path} | POST | Upload file |
| /v2/userdata/{file} | DELETE | Delete file |
| /v2/userdata/{file}/move/{dest} | POST | Move/rename file |
Query Parameters for Listing:
- path: Relative path within the user's data directory
- recurse: Enable recursive directory listing
- full_info: Return detailed file information
- split: Return the path as an array split by /
Sources: app/user_manager.py
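As a usage sketch, a listing URL for this endpoint can be assembled from those query parameters (the host and port are assumptions; 8188 is the conventional ComfyUI default, and `userdata_list_url` is a hypothetical helper name):

```python
from urllib.parse import urlencode

def userdata_list_url(base, path, recurse=False, full_info=False, split=False):
    """Build a /v2/userdata listing URL from the documented query parameters."""
    params = {"path": path}
    # boolean flags are passed as lowercase strings in the query
    for name, value in (("recurse", recurse), ("full_info", full_info), ("split", split)):
        if value:
            params[name] = "true"
    return f"{base}/v2/userdata?{urlencode(params)}"
```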
Model Discovery
The model manager provides intelligent model discovery with metadata extraction:
graph TD
A[Model Path] --> B{Extension Check}
B -->|.safetensors| C[Extract Metadata]
B -->|.preview| D[Add Preview Image]
B -->|Other| E[Standard Add]
C --> F[Parse ssmd_cover_images]
D --> R[Result List]
E --> R
F --> R
The system extracts preview images embedded in SafeTensors metadata under the ssmd_cover_images key.
Sources: app/model_manager.py
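The safetensors format begins with an 8-byte little-endian length prefix followed by a JSON header, whose `__metadata__` object carries keys such as ssmd_cover_images. A minimal standalone reader, independent of ComfyUI's own implementation:

```python
import json
import struct

def read_safetensors_metadata(path):
    """Return the __metadata__ dict from a .safetensors file header."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # 8-byte LE header length
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```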
Frontend Management
Version Control
ComfyUI supports flexible frontend version management:
graph LR
A[Default Frontend] --> B[Specific Version]
A --> C[Latest/Daily]
A --> D[Legacy Frontend]
B -.->|v1.2.2| E[Stable]
C -.->|daily| F[Cutting Edge]
D -.->|legacy| G[Compatibility]
| Version String | Description |
|---|---|
| Comfy-Org/[email protected] | Specific stable version |
| Comfy-Org/ComfyUI_frontend@latest | Latest release |
| Comfy-Org/ComfyUI_frontend@prerelease | Pre-release build |
Version Pattern:
^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$
Sources: app/frontend_management.py
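The pattern can be exercised directly; `parse_frontend_version` is a hypothetical helper name for this sketch, but the regular expression is the one shown above:

```python
import re

VERSION_PATTERN = re.compile(
    r"^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)"
    r"@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$"
)

def parse_frontend_version(spec):
    """Split an owner/repo@version string into its parts, or raise ValueError."""
    m = VERSION_PATTERN.match(spec)
    if not m:
        raise ValueError(f"invalid frontend version: {spec!r}")
    return m.groups()  # (owner, repo, version)
```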
Custom Frontends
Frontends are stored in a configurable directory structure:
CUSTOM_FRONTENDS_ROOT/
├── Comfy-Org_ComfyUI_frontend/
│ ├── v1.2.2/
│ ├── v1.3.0/
│ └── latest/
└── custom_provider_custom_frontend/
└── v2.0.0/
The system supports embedding custom documentation and workflow templates through separate pip packages (comfyui-embedded-docs, comfyui-workflow-templates).
Sources: app/frontend_management.py
Security Features
TLS/SSL Support
ComfyUI supports HTTPS for secure connections:
# Generate self-signed certificate
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
-sha256 -days 3650 -nodes \
-subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"
# Launch with TLS
python main.py --tls-keyfile key.pem --tls-certfile cert.pem
Note: Self-signed certificates are not appropriate for shared or production environments.
Sources: README.md
Manager Security
The --disable-manager-ui flag allows keeping security checks and scheduled installation completion while disabling the manager UI and endpoints.
Sources: README.md
Release Process
ComfyUI follows a structured release cycle:
graph TD
A[Commit to Repository] --> B{Which Branch?}
B -->|Master| C[Weekly Release Candidate]
B -->|Stable Tag| D[Backport Fixes]
C --> E[Major Version v0.X.Y]
D --> F[Patch Version v0.4.X]
E -.->|~2 weeks| G[Next Major]
F -.->|as needed| H[Stable Update]
| Release Type | Frequency | Target |
|---|---|---|
| Major Version | ~2 weeks | Monday (variable) |
| Patch Version | As needed | Stable branch backports |
| Nightly Commits | Ongoing | Master branch (unstable) |
Warning: Commits outside stable release tags may be very unstable and break custom nodes.
Sources: README.md
See Also
- Examples Page - Workflow examples
- ComfyUI-Manager - Custom node management
- Comfy Cloud - Official cloud hosting
- Comfy API Documentation - API nodes guide
- GPU Recommendations - Hardware guide
Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
Related topics: Introduction to ComfyUI
Installation Guide
Overview
This guide covers all supported methods for installing ComfyUI, including local installations on Windows, Linux, and macOS, as well as platform-specific considerations for NVIDIA, AMD, Intel, and Apple Silicon GPUs. ComfyUI is designed to be modular and works fully offline—the core will never download anything unless explicitly requested by the user.
Sources: README.md:1-50
Installation Methods Overview
ComfyUI supports multiple installation approaches to accommodate different user needs and technical expertise levels.
graph TD
A[ComfyUI Installation] --> B[Desktop Application]
A --> C[Windows Portable Package]
A --> D[Manual Installation]
D --> E[Windows]
D --> F[Linux]
D --> G[macOS]
E --> H[NVIDIA GPU]
E --> I[AMD GPU]
E --> J[Intel GPU]
F --> K[NVIDIA GPU]
F --> L[AMD ROCm]
F --> M[Intel XPU]
G --> N[Apple Silicon M1/M2]
Prerequisites
System Requirements
| Component | Minimum | Recommended |
|---|---|---|
| GPU VRAM | 4GB | 8GB+ |
| RAM | 8GB | 16GB+ |
| Disk Space | 10GB | 20GB+ |
| OS | Windows 10, Linux, macOS | Windows 11, Latest Linux/macOS |
GPU Support Matrix
| GPU Vendor | Support Level | Backend |
|---|---|---|
| NVIDIA | Full | CUDA (cu130/cu132) |
| AMD | Full (ROCm) | ROCm |
| Intel | Full (XPU) | oneAPI |
| Apple Silicon | Full | Metal/MPS |
Sources: README.md:200-280
PyTorch Installation
PyTorch is the core dependency required for ComfyUI. The installation command varies by hardware platform.
NVIDIA GPUs
For stable PyTorch with CUDA support:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
For nightly builds with potential performance improvements:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu132
Sources: README.md:180-195
AMD GPUs (ROCm)
For AMD GPUs using ROCm, install the ROCm-compatible PyTorch build:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm6.1
For experimental memory-efficient attention on recent PyTorch with AMD GPUs:
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
For non-officially supported AMD cards, use environment variable overrides:
| GPU Series | Command |
|---|---|
| AMD 6700, 6600 (RDNA2) | HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py |
| AMD 7600 (RDNA3) | HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py |
Additional performance tuning options:
PYTORCH_TUNABLEOP_ENABLED=1 python main.py
Sources: README.md:220-260
Intel GPUs (XPU)
For Intel discrete GPUs and APUs using the XPU backend:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/xpu
For nightly builds with potential improvements:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
Sources: README.md:160-178
Apple Silicon (M1/M2)
- Install the latest PyTorch nightly following Apple's Accelerated PyTorch training on Mac developer guide.
- Follow the manual installation instructions for your operating system.
- Install ComfyUI dependencies as specified in the Dependencies section.
Sources: README.md:290-310
Troubleshooting PyTorch
If you encounter the error "Torch not compiled with CUDA enabled":
pip uninstall torch
Then reinstall using the appropriate command for your hardware from the sections above.
Sources: README.md:196-199
Dependencies Installation
After installing PyTorch, install the core ComfyUI dependencies:
pip install -r requirements.txt
This installs all required Python packages for ComfyUI to function properly. After this step, ComfyUI should be ready to run.
Sources: README.md:286-288
Windows Portable Package
For Windows users seeking a portable, self-contained installation:
- Download the portable standalone build from the releases page.
- Extract the archive to your desired location.
- Run python main.py or the provided executable.
This package includes everything needed to run ComfyUI on NVIDIA GPUs or in CPU-only mode.
Sources: README.md:95-110
Manual Installation
Windows and Linux
graph LR
A[Download/Clone Repository] --> B[Install PyTorch]
B --> C[Install Dependencies]
C --> D[Configure Model Paths]
D --> E[Launch ComfyUI]
#### Step 1: Clone or Download the Repository
git clone https://github.com/Comfy-Org/ComfyUI.git
cd ComfyUI
#### Step 2: Install PyTorch
Follow the PyTorch installation instructions for your GPU in the PyTorch Installation section above.
#### Step 3: Install Dependencies
pip install -r requirements.txt
#### Step 4: Launch
python main.py
Sources: README.md:280-295
Model Path Configuration
ComfyUI supports an optional configuration file to set custom search paths for models, useful if you have models stored in a different location or shared across multiple installations.
Copy the example configuration:
cp extra_model_paths.yaml.example extra_model_paths.yaml
Edit extra_model_paths.yaml to specify your model directories:
# Example extra_model_paths.yaml
models:
checkpoints: /path/to/your/checkpoints
loras: /path/to/your/loras
vae: /path/to/your/vae
Sources: extra_model_paths.yaml.example
ComfyUI-Manager
ComfyUI-Manager is an extension that simplifies installation, updating, and management of custom nodes.
Installation
- Navigate to your ComfyUI installation directory
- Clone the ComfyUI-Manager repository into the custom_nodes folder:
cd custom_nodes
git clone https://github.com/Comfy-Org/ComfyUI-Manager.git
- Install manager dependencies:
pip install -r manager_requirements.txt
Sources: README.md:330-345
Enabling ComfyUI-Manager
Start ComfyUI with the --enable-manager flag:
python main.py --enable-manager
Manager Command Line Options
| Flag | Description |
|---|---|
| --enable-manager | Enable ComfyUI-Manager |
| --enable-manager-legacy-ui | Use the legacy manager UI (requires --enable-manager) |
| --disable-manager-ui | Disable manager UI while keeping background features (requires --enable-manager) |
Sources: README.md:346-365
Desktop Application
For the easiest getting-started experience, download the official Desktop Application:
- Available for Windows and macOS
- Download from comfy.org/download
This method requires no technical configuration and is recommended for new users.
Sources: README.md:55-65
Cloud Deployment
For users without local hardware, ComfyUI is available on Comfy Cloud:
- Official paid cloud version hosted at comfy.org/cloud
- No local hardware required
- Full ComfyUI functionality
Sources: README.md:66-70
Advanced Configuration
Multi-User Setup
For server deployments with multiple users, enable multi-user mode:
python main.py --multi-user
This enables server-side user profile storage instead of browser-based storage.
Sources: app/user_manager.py:25-35
Frontend Version Management
ComfyUI ships its frontend as a separate pip package. To specify a frontend version:
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest
For stable releases:
python main.py --front-end-version Comfy-Org/[email protected]
For legacy frontend:
python main.py --front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
Sources: app/frontend_management.py:40-75
Additional Command Line Arguments
| Argument | Description |
|---|---|
| --preview-method auto | Enable previews with automatic method selection |
| --preview-method taesd | Use TAESD for high-quality previews |
| --tls-keyfile <file> | Path to TLS private key |
| --tls-certfile <file> | Path to TLS certificate |
| --use-pytorch-cross-attention | Use PyTorch cross-attention implementation |
| --disable-api-nodes | Disable optional API nodes |
Sources: README.md:15-45
Post-Installation Verification
After installation, verify your setup by:
- Launching ComfyUI: python main.py
- Opening the web interface (typically http://localhost:8188)
- Running a simple workflow to confirm GPU acceleration is working
If previews are enabled, you should see latent preview updates during image generation, confirming the installation is functioning correctly.
Sources: README.md:10-20
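A small script can automate the web-interface check; the default address is an assumption based on the typical port above, and `comfyui_is_up` is a hypothetical helper name:

```python
import urllib.request

def comfyui_is_up(base_url="http://127.0.0.1:8188", timeout=5.0):
    """Return True if the ComfyUI web interface answers with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # covers connection refused, DNS failure, and timeouts
        return False
```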
Common Issues
| Issue | Solution |
|---|---|
| "Torch not compiled with CUDA enabled" | Reinstall PyTorch with CUDA support |
| Import errors | Run pip install -r requirements.txt |
| Model not found | Configure extra_model_paths.yaml or check model paths |
| Manager installation fails | Ensure manager_requirements.txt dependencies are installed |
Sources: README.md:196-199
Sources: README.md:1-50
Related topics: Server System, Execution Engine, Model Loading and Detection
System Architecture
Overview
ComfyUI is a modular AI creation engine designed with a node-graph architecture that enables complex workflow orchestration for generative AI models. The system architecture follows a client-server model where the backend provides REST API endpoints for workflow execution, model management, and user administration, while the frontend communicates via WebSocket and HTTP protocols to render the visual node editor and manage execution state.
Sources: README.md
High-Level Architecture
graph TD
subgraph Client
Frontend["Web Frontend<br/>(React-based)"]
end
subgraph Server["ComfyUI Server"]
API["REST API Routes"]
WS["WebSocket Handler"]
Execution["Execution Engine"]
UserMgr["User Manager"]
ModelMgr["Model Manager"]
end
subgraph Storage
Models["Model Files"]
Settings["User Settings"]
Cache["File Cache"]
end
Frontend <-->|HTTP/WS| API
Frontend <-->|WS| WS
API <--> UserMgr
API <--> ModelMgr
Execution <--> Models
UserMgr <--> Settings
ModelMgr <--> Cache
Core Components
Execution Engine
The execution engine is the computational core of ComfyUI, responsible for processing node graphs in topological order. It implements intelligent caching where only parts of the graph that have changed between executions are re-processed.
Key Characteristics:
- Only parts of the graph that have an output with all the correct inputs will be executed
- Only parts of the graph that change from each execution to the next will be executed
- If the same graph is submitted twice, only the first will be executed
- If the last part of the graph changes, only that part and its dependents are re-executed
Sources: README.md
User Manager
The UserManager class handles multi-user support and user-specific settings storage.
classDiagram
class UserManager {
+settings: AppSettings
+users: dict
+__init__()
+get_users_file(): str
}
class AppSettings {
+__init__(user_manager)
+get_default_user(): str
}
User Configuration:
| Parameter | Description | Default |
|---|---|---|
| multi_user | Enable multiple user profiles | False |
| User Directory | Location for user-specific data | folder_paths.get_user_directory() |
Initialization Logic:
# Single-user mode (default)
self.users = {"default": "default"}
# Multi-user mode (with --multi-user flag)
if os.path.isfile(self.get_users_file()):
with open(self.get_users_file()) as f:
self.users = json.load(f)
Sources: app/user_manager.py:1-50
Model Manager
The ModelFileManager class provides centralized model file discovery and caching.
graph LR
A[Model Request] --> B[Cache Check]
B -->|Hit| C[Return Cached]
B -->|Miss| D[Scan Directories]
D --> E[Build File List]
E --> F[Cache Result]
F --> C
Cache Data Structure:
| Field | Type | Description |
|---|---|---|
| key | str | Cache identifier |
| value | tuple[list[dict], dict[str, float], float] | Models list, metadata, timestamp |
Model Discovery Features:
- Recursive directory scanning with glob patterns
- Safe file filtering by extension and content type
- Support for safetensors metadata extraction
- Preview image detection (*.preview files)
Sources: app/model_manager.py:1-80
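The mtime-based invalidation implied by the cache value shape can be sketched as follows (class and method names are illustrative, not ComfyUI's actual API):

```python
import os
import time

class FolderScanCache:
    """Sketch: reuse a cached listing until a watched directory's mtime changes."""
    def __init__(self):
        self._cache = {}  # folder -> (file_list, dir_mtimes, scan_time)

    def scan(self, folder, extensions=(".safetensors", ".ckpt")):
        entry = self._cache.get(folder)
        if entry is not None:
            _, mtimes, _ = entry
            # cache hit: no watched directory changed since the last scan
            if all(os.path.getmtime(d) == m for d, m in mtimes.items()):
                return entry[0]
        # cache miss: rescan and record the directory mtime
        files = sorted(
            f for f in os.listdir(folder)
            if os.path.splitext(f)[1] in extensions
        )
        self._cache[folder] = (files, {folder: os.path.getmtime(folder)}, time.time())
        return files
```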
Frontend Management
The FrontendManagement class handles frontend version control and installation verification.
Version Parsing Pattern:
{provider}/{repo}@{version}
Example: Comfy-Org/[email protected]
Validation Regex:
^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$
Package Discovery:
| Package Type | Purpose |
|---|---|
| comfyui-frontend-package | Main frontend assets |
| comfyui-workflow-templates | Workflow template files |
| comfyui-embedded-docs | Embedded documentation |
Sources: app/frontend_management.py:1-100
API Routes Architecture
REST Endpoints
graph TD
R1["GET /v2/userdata"] --> UM[UserManager]
R2["GET /experiment/models"] --> MM[ModelFileManager]
R3["GET /experiment/models/{folder}"] --> MM
File Listing Parameters:
| Parameter | Type | Description |
|---|---|---|
| path | str | Relative path within data directory |
| recurse | bool | Enable recursive directory traversal |
| full_info | bool | Return full file metadata |
| split | bool | Return path as array (split by /) |
Response Format:
class FileInfo(TypedDict):
path: str # Relative file path
size: int # File size in bytes
modified: int # Modification time (milliseconds)
created: int # Creation time (milliseconds)
Sources: app/user_manager.py:60-100
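The response schema maps directly onto `os.stat`; a sketch of building one record with millisecond timestamps (`make_file_info` is a hypothetical helper name):

```python
import os
from typing import TypedDict

class FileInfo(TypedDict):
    path: str       # relative file path
    size: int       # file size in bytes
    modified: int   # modification time (milliseconds)
    created: int    # creation time (milliseconds)

def make_file_info(base_dir, rel_path):
    """Build a FileInfo record for a file under base_dir."""
    st = os.stat(os.path.join(base_dir, rel_path))
    return FileInfo(
        path=rel_path,
        size=st.st_size,
        modified=st.st_mtime_ns // 1_000_000,
        created=st.st_ctime_ns // 1_000_000,
    )
```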
Type System Architecture
ComfyUI implements a comprehensive type hinting system for node development.
classDiagram
class ComfyNodeABC {
<<abstract>>
+INPUT_TYPES: InputTypeDict
}
class IO {
<<enumeration>>
ANY = "*"
NUMBER = "FLOAT,INT"
PRIMITIVE = "STRING,FLOAT,INT,BOOLEAN"
}
ComfyNodeABC --> IO
Built-in IO Types:
| Type | Value | Description |
|---|---|---|
| ANY | "*" | Accepts any input type |
| NUMBER | "FLOAT,INT" | Numeric values |
| PRIMITIVE | "STRING,FLOAT,INT,BOOLEAN" | Basic data types |
Sources: comfy/comfy_types/README.md
Configuration and CLI Arguments
Command Line Options
| Flag | Description |
|---|---|
| --enable-manager | Enable ComfyUI-Manager |
| --enable-manager-legacy-ui | Use legacy manager UI |
| --disable-manager-ui | Disable manager UI (keep background features) |
| --disable-api-nodes | Disable optional API nodes |
| --preview-method {auto,taesd} | Preview generation method |
| --front-end-version | Specify frontend version |
Sources: README.md
Environment Variables
| Variable | Purpose | Example |
|---|---|---|
| HSA_OVERRIDE_GFX_VERSION | AMD GPU compatibility | 10.3.0 for RDNA2 |
| TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL | ROCm memory optimization | 1 |
| PYTORCH_TUNABLEOP_ENABLED | PyTorch tuning | 1 |
Sources: README.md
Data Flow: Workflow Execution
sequenceDiagram
participant Client
participant API
participant Execution
participant Cache
participant Models
Client->>API: Submit Workflow Graph
API->>Execution: Parse Graph
Execution->>Cache: Check Node States
Cache-->>Execution: Cached Results
Execution->>Models: Load Required Models
Models-->>Execution: Model Data
Execution->>Execution: Topological Sort
Execution->>Execution: Execute Changed Nodes
Execution-->>API: Output Results
API-->>Client: WebSocket Update
Node Graph Structure
ComfyUI workflows are represented as directed acyclic graphs (DAGs) where:
- Nodes represent computational units (e.g., model loading, sampling, encoding)
- Edges represent data flow between nodes
- Execution Order is determined by topological sorting based on input dependencies
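The ordering rule can be demonstrated with the standard library's graphlib (Python 3.9+), using the node names from the example workflow:

```python
from graphlib import TopologicalSorter

# Each node maps to the set of nodes it depends on (its inputs).
workflow = {
    "KSampler": {"Model Loader", "CLIP Text Encode", "Empty Latent"},
    "VAE Decode": {"KSampler"},
    "Save Image": {"VAE Decode"},
}
# static_order() yields every node after all of its dependencies.
order = list(TopologicalSorter(workflow).static_order())
```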
graph LR
subgraph Inputs
Model["Model Loader"]
Clip["CLIP Text Encode"]
Latent["Empty Latent"]
end
subgraph Process
Sampler["KSampler"]
end
subgraph Outputs
Decode["VAE Decode"]
Image["Save Image"]
end
Model --> Sampler
Clip --> Sampler
Latent --> Sampler
Sampler --> Decode
Decode --> Image
Release Process Architecture
ComfyUI maintains three interconnected repositories with different release cadences:
| Repository | Branch | Release Cycle | Purpose |
|---|---|---|---|
| ComfyUI Core | master | ~2 weeks | Major stable releases |
| ComfyUI Core | tags | as needed | Patch fixes for stable |
| Frontend | various | weekly | UI updates |
Versioning Scheme:
- Major versions (e.g., v0.7.0) for significant releases
- Minor versions for master branch releases
- Patch versions for backported fixes
Sources: README.md
Security Considerations
Multi-User Mode
When --multi-user is enabled:
- User settings are stored server-side instead of browser local storage
- Each user has isolated data directories
- User settings persist across sessions
File Access Control
The /v2/userdata endpoint implements path validation:
- Prevents directory traversal attacks
- Validates paths are within user's data directory
- Returns appropriate HTTP status codes (400, 404) for invalid requests
Sources: app/user_manager.py:80-120
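A typical traversal guard along these lines (a sketch, not the actual handler code; a real server would map the error to an HTTP 400/403 response):

```python
from pathlib import Path

def resolve_user_path(user_root, requested):
    """Resolve a client-supplied path, rejecting directory traversal."""
    root = Path(user_root).resolve()
    target = (root / requested).resolve()
    if not target.is_relative_to(root):   # Python 3.9+
        raise PermissionError(f"path escapes user directory: {requested}")
    return target
```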
Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
Related topics: System Architecture, Execution Engine
Server System
Overview
The ComfyUI Server System is the core backend infrastructure responsible for handling client connections, executing workflows, managing files, and orchestrating the AI generation pipeline. Built on top of aiohttp, the server provides both REST API endpoints and WebSocket-based real-time communication for seamless interaction between the frontend interface and backend processing engines.
The server acts as the central hub that manages:
- Client connections via WebSocket protocol
- Workflow execution scheduling and queue management
- File operations for models, outputs, and user data
- Frontend delivery and management
- User authentication and multi-user support
Sources: server.py | protocol.py
Architecture Overview
graph TB
subgraph "Client Layer"
Frontend[Frontend UI]
ExternalAPI[External API Clients]
end
subgraph "Server Core"
WSS[WebSocket Server]
REST[REST API Routes]
Auth[Authentication Layer]
end
subgraph "Services Layer"
Exec[Execution Engine]
Terminal[Terminal Service]
FileOps[File Operations]
Queue[Queue Manager]
end
subgraph "Data Layer"
Models[Model Manager]
Users[User Manager]
Settings[App Settings]
end
Frontend --> WSS
ExternalAPI --> REST
WSS --> Auth
REST --> Auth
Auth --> Exec
Exec --> Queue
Exec --> Terminal
FileOps --> Models
FileOps --> Users
FileOps --> Settings
Protocol Layer
WebSocket Protocol
The ComfyUI server uses a custom WebSocket-based protocol for real-time communication between the client and server. This protocol enables:
- Bidirectional messaging - Both client and server can send messages independently
- Execution events - Real-time updates on workflow execution progress
- Prompt submission - Sending workflows for execution
- History tracking - Recording and retrieving execution history
Sources: protocol.py
Message Types
| Message Type | Direction | Purpose |
|---|---|---|
| executing | Server → Client | Notification when a node begins execution |
| executed | Server → Client | Notification when a node completes execution |
| execution_error | Server → Client | Reports errors during workflow execution |
| progress | Server → Client | Progress updates for long-running operations |
| executing_node | Server → Client | Identifies currently executing node |
| prompt | Client → Server | Submit workflow for execution |
| interrupt | Client → Server | Request to interrupt current execution |
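Client-side handling of these messages usually dispatches on the type field; the {"type": ..., "data": ...} envelope used here is an assumption of this sketch, not a guaranteed wire format:

```python
import json

def handle_message(raw, handlers):
    """Dispatch one JSON server message to a handler keyed by its type."""
    msg = json.loads(raw)
    handler = handlers.get(msg.get("type"))
    if handler is None:
        return None  # unknown message types are ignored rather than raised
    return handler(msg.get("data", {}))

# Example handlers for two of the message types above:
handlers = {
    "executing": lambda data: f"running node {data.get('node')}",
    "progress": lambda data: f"{data.get('value')}/{data.get('max')}",
}
```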
Server Core Components
Main Server Entry Point
The server.py file contains the main server initialization and lifecycle management. Key responsibilities include:
- Initializing the aiohttp web application
- Registering routes and middleware
- Setting up WebSocket endpoints
- Managing server lifecycle (start, stop, restart)
# Server initialization pattern
app = web.Application()
server = Server()
server.setup_routes(app)
web.run_app(app, host=host, port=port)
Sources: server.py
API Routes Structure
The server organizes routes into logical namespaces:
| Route Namespace | Purpose |
|---|---|
| /api | Public REST API endpoints |
| /internal | Internal server-to-server communication |
| /v2/userdata | User data management endpoints |
| /experiment | Experimental features |
Internal Routes
Internal routes under /internal are designated for ComfyUI's internal use only and may change without notice. These routes handle:
- System-level operations
- Queue management
- Execution state tracking
- Server configuration
Sources: api_server/routes/internal/internal_routes.py
Services Layer
Terminal Service
The Terminal Service manages pseudo-terminal functionality for executing external processes. This service is crucial for:
- Running Python scripts within workflows
- Executing system commands
- Managing subprocess lifecycle
The service provides:
- PTY (pseudo-terminal) allocation
- Stream multiplexing
- Process lifecycle management
Sources: api_server/services/terminal_service.py
File Operations
The file operations module provides utilities for:
| Operation | Description |
|---|---|
| Directory listing | Recursive and non-recursive file traversal |
| File metadata | Size, creation time, modification time |
| Path validation | Security checks for path traversal |
| User data access | Isolated access to user-specific directories |
# File info structure returned by file operations
from typing import TypedDict

class FileInfo(TypedDict):
    path: str       # Relative path from base directory
    size: int       # File size in bytes
    modified: int   # Modification timestamp (milliseconds)
    created: int    # Creation timestamp (milliseconds)
The list_userdata_v2 endpoint provides structured access to user data directories with proper security constraints.
Sources: api_server/utils/file_operations.py
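Building on the FileInfo structure above, a hypothetical helper can populate it from os.stat, converting timestamps to the integer milliseconds the structure specifies. The helper name and the use of st_ctime for creation time are assumptions for illustration, not ComfyUI's implementation:

```python
import os
from typing import TypedDict

# Repeating the FileInfo shape so this example is self-contained.
class FileInfo(TypedDict):
    path: str
    size: int
    modified: int
    created: int

# Hypothetical helper: fills a FileInfo from filesystem metadata,
# converting float seconds to the integer milliseconds used above.
def file_info(base_dir: str, rel_path: str) -> FileInfo:
    stat = os.stat(os.path.join(base_dir, rel_path))
    return FileInfo(
        path=rel_path,
        size=stat.st_size,
        modified=int(stat.st_mtime * 1000),
        created=int(stat.st_ctime * 1000),
    )
```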
Queue Manager
The queue manager handles workflow scheduling:
- Priority queuing - Higher priority prompts execute first
- Execution caching - Identical graphs skip re-execution
- Partial execution - Only changed portions of graphs execute
Execution behavior notes:
- Only parts of the graph with all correct inputs will be executed
- Only parts that change between executions are re-run
- Submitting the same graph twice results in only the first execution
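The caching behavior above can be approximated by keying each node's result on a stable hash of its class type and resolved inputs, so an unchanged node is a cache hit. This is a sketch of the idea, not the engine's actual cache:

```python
import hashlib
import json

def node_cache_key(class_type: str, inputs: dict) -> str:
    # Stable key: identical (class_type, inputs) pairs hash identically.
    payload = json.dumps({"class_type": class_type, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

cache: dict[str, object] = {}

def execute_node(class_type: str, inputs: dict, run) -> object:
    key = node_cache_key(class_type, inputs)
    if key not in cache:          # only changed nodes are re-run
        cache[key] = run(inputs)
    return cache[key]

# Submitting the same node twice runs it only once.
calls = []
result = execute_node("AddNode", {"a": 1, "b": 2}, lambda i: calls.append(1) or i["a"] + i["b"])
result2 = execute_node("AddNode", {"a": 1, "b": 2}, lambda i: calls.append(1) or i["a"] + i["b"])
print(result, result2, len(calls))
```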
Data Management
User Manager
The UserManager handles multi-user support and user settings:
- User directory management - Isolated storage per user
- Settings persistence - Server-side storage instead of browser localStorage
- Multi-user mode - Enabled via the --multi-user CLI flag
| Setting | Description |
|---|---|
| multi_user | CLI argument to enable multiple user profiles |
| user_directory | Base directory for user-specific data |
| users_file | JSON file storing user configurations |
User data is stored in the user directory with each user having isolated access to their own data.
Sources: app/user_manager.py
Model Manager
The ModelFileManager provides:
- Model discovery - Listing models by type and folder
- Metadata extraction - Reading safetensors headers for preview images
- Preview generation - Supporting preview thumbnails for models
| Feature | Supported Formats |
|---|---|
| Preview Images | PNG, JPG, WebP |
| Model Metadata | safetensors headers |
| Preview Thumbnails | Base64-encoded in safetensors metadata |
The /experiment/models endpoint provides a structured listing of available model types and folders.
Sources: app/model_manager.py
Frontend Management
Frontend management handles the web UI delivery:
- Version management - Supports specific versions, nightly builds, or stable releases
- Custom frontends - Allows loading frontends from external repositories
- Embedded docs - Integration with embedded documentation package
# Example: Using specific frontend version
--front-end-version Comfy-Org/[email protected]
# Using legacy frontend
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
| Frontend Provider | Description |
|---|---|
| PyPI (stable) | Default stable releases |
| GitHub | Cutting-edge daily updates |
| Custom | Repository-specific versions |
Sources: app/frontend_management.py
Security Model
User Data Isolation
The server implements strict user data isolation:
- Each user has a dedicated data directory
- Path traversal attacks are prevented via glob.escape()
- User data endpoints validate paths against allowed directories
- Multi-user mode requires explicit CLI activation
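The path-validation constraint can be sketched with os.path.commonpath: resolve the requested path and confirm it stays under the user's base directory. This illustrates the check, not the server's exact code:

```python
import os

def is_safe_path(base_dir: str, requested: str) -> bool:
    base = os.path.abspath(base_dir)
    target = os.path.abspath(os.path.join(base, requested))
    # Reject any path that resolves outside the user's directory.
    return os.path.commonpath([base, target]) == base

print(is_safe_path("/data/users/alice", "settings.json"))         # True
print(is_safe_path("/data/users/alice", "../bob/settings.json"))  # False
```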
Internal Routes Protection
Routes under /internal are explicitly marked as:
- Not intended for external application use
- Subject to change without notice
- Internal ComfyUI functionality only
Configuration
CLI Arguments
| Argument | Description |
|---|---|
| --enable-manager | Enable ComfyUI-Manager extension |
| --enable-manager-legacy-ui | Use legacy manager UI |
| --disable-manager-ui | Disable manager UI while keeping background features |
| --multi-user | Enable multiple user profiles |
| --front-end-version | Specify frontend version |
| --preview-method | Set preview generation method (auto, taesd) |
| --tls-keyfile | TLS private key file path |
| --tls-certfile | TLS certificate file path |
Environment Variables
| Variable | Purpose |
|---|---|
| TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL | Enable experimental ROCm features |
| PYTORCH_TUNABLEOP_ENABLED | Enable PyTorch tuning for potential speed improvements |
| HSA_OVERRIDE_GFX_VERSION | Override AMD GPU architecture detection |
Execution Flow
sequenceDiagram
participant Client
participant Server
participant Queue
participant Executor
Client->>Server: WebSocket Connect
Server->>Client: Connection Acknowledged
Client->>Server: Submit Prompt (workflow)
Server->>Queue: Add to execution queue
Server->>Client: Queue position acknowledged
loop Execution
Queue->>Executor: Dequeue next task
Executor->>Executor: Execute node(s)
Executor->>Server: Progress updates
Server->>Client: Real-time execution events
alt Node executes successfully
Executor->>Server: Node completed
Server->>Client: "executed" message
else Execution error
Executor->>Server: Error details
Server->>Client: "execution_error" message
end
end
Executor->>Server: All nodes complete
Server->>Client: Execution complete

Summary
The ComfyUI Server System provides a robust, event-driven architecture for AI workflow execution. Built on aiohttp, it combines:
- WebSocket-based real-time communication for interactive execution monitoring
- RESTful API endpoints for external integration
- Service-oriented design for modularity and maintainability
- Strong security boundaries through user isolation and path validation
The server seamlessly integrates with the frontend to deliver a responsive user experience while managing complex AI model execution pipelines in the background.
Sources: server.py | protocol.py
Execution Engine
Related topics: Graph Management, Server System, Memory Management
The Execution Engine is the core component of ComfyUI responsible for processing node-based workflows. It analyzes the dependency graph, determines execution order, and runs only the nodes necessary to produce the requested outputs.
Overview
ComfyUI uses a directed acyclic graph (DAG) model where each node represents an operation and edges represent data dependencies. The execution engine processes this graph efficiently by:
- Executing only nodes with all required inputs available
- Skipping unchanged portions of the graph on re-execution
- Caching intermediate results to avoid redundant computation
Sources: README.md:1-50
Execution Model
Lazy Evaluation Strategy
The execution engine employs lazy evaluation, meaning nodes are only executed when their outputs are actually needed by other nodes or requested by the user.
graph TD
A[User Request] --> B{Output Cached?}
B -->|Yes| C[Return Cached Result]
B -->|No| D[Find All Dependent Nodes]
D --> E[Check Input Availability]
E --> F[Execute Required Nodes]
F --> G[Cache Results]
G --> C

Incremental Execution
One of the most powerful features of the execution engine is its ability to perform incremental execution:
- If the same workflow is submitted twice, only the first execution runs
- If only part of the graph changes, only that part and its downstream dependencies are re-executed
- This dramatically improves performance for iterative workflows
"Only parts of the graph that change from each execution to the next will be executed, if you submit the same graph twice only the first will be executed. If you change the last part of the graph only the part you changed and the part that depends on it will be executed."
Sources: README.md:1-50
Node Execution
Input Validation
Before any node executes, the engine validates that all required inputs are present and correctly typed. Nodes whose required inputs cannot be satisfied are skipped.
Dependency Resolution
The execution engine uses topological sorting to determine the correct order of node execution, ensuring that all input dependencies are satisfied before a node runs.
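Topological ordering can be sketched with Kahn's algorithm, treating each edge as "dependency must run before consumer". This is a simplified model of the resolver, not ComfyUI's implementation:

```python
from collections import deque

def topo_order(deps: dict[str, list[str]]) -> list[str]:
    """deps maps node id -> list of node ids it depends on."""
    indegree = {n: len(d) for n, d in deps.items()}
    consumers: dict[str, list[str]] = {n: [] for n in deps}
    for node, inputs in deps.items():
        for dep in inputs:
            consumers[dep].append(node)
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for consumer in consumers[node]:
            indegree[consumer] -= 1
            if indegree[consumer] == 0:
                ready.append(consumer)
    if len(order) != len(deps):
        raise ValueError("cycle detected: graph is not a DAG")
    return order

# A tiny txt2img-shaped graph: the sampler depends on checkpoint and prompt.
print(topo_order({"checkpoint": [], "prompt": [],
                  "sampler": ["checkpoint", "prompt"], "decode": ["sampler"]}))
```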
Caching System
ComfyUI implements a sophisticated caching mechanism to avoid redundant computation.
Cache Structure
The ModelFileManager class manages caching with the following structure:
self.cache: dict[str, tuple[list[dict], dict[str, float], float]] = {}
Each cache entry contains:
- A list of dictionaries with file information
- A dictionary mapping file paths to modification timestamps
- A float representing cache creation time
Sources: app/model_manager.py:1-50
Cache Operations
| Operation | Method | Description |
|---|---|---|
| Get Cache | get_cache(key, default) | Retrieves cached data by key |
| Set Cache | set_cache(key, value) | Stores data in cache |
| Clear Cache | clear_cache() | Removes all cached entries |
Sources: app/model_manager.py:1-50
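A minimal cache object matching the three operations in the table might look as follows. The stored tuple layout follows the structure shown above, but this class is an illustration, not the ModelFileManager itself:

```python
class SimpleCache:
    # Mirrors the get/set/clear operations in the table above; the value
    # layout (file list, mtime map, creation time) follows the cache
    # structure described for ModelFileManager.
    def __init__(self):
        self.cache: dict[str, tuple[list[dict], dict[str, float], float]] = {}

    def get_cache(self, key, default=None):
        return self.cache.get(key, default)

    def set_cache(self, key, value):
        self.cache[key] = value

    def clear_cache(self):
        self.cache.clear()

c = SimpleCache()
c.set_cache("checkpoints",
            ([{"name": "sd15.safetensors"}], {"sd15.safetensors": 1700000000.0}, 1700000100.0))
print(c.get_cache("checkpoints")[0][0]["name"])
```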
API Endpoints
The execution engine interacts with the following API endpoints for model and file management:
Model Routes
| Endpoint | Method | Purpose |
|---|---|---|
| /experiment/models | GET | List all available model folders |
| /experiment/models/{folder} | GET | List all models in a specific folder |
File Routes
| Endpoint | Method | Purpose |
|---|---|---|
| /files | GET | List files in a directory |
| /v2/userdata | GET | List user data directory contents |
The file listing endpoint supports query parameters:
- path: Relative path within the data directory
- recurse: Enable recursive directory traversal
- full_info: Return detailed file information
- split: Return path segments as array elements
Sources: app/user_manager.py:1-50 Sources: app/model_manager.py:50-100
Node Type System
ComfyUI uses a typed node system defined in comfy/comfy_types/:
Core Types
| Type | Description |
|---|---|
| IO.ANY | Accepts any input type ("*") |
| IO.NUMBER | Numeric values (FLOAT, INT) |
| IO.PRIMITIVE | Basic types (STRING, FLOAT, INT, BOOLEAN) |
Base Class
The ComfyNodeABC abstract base class provides:
- Type hinting support
- Autocomplete for node developers
- Standardized INPUT_TYPES interface
Sources: comfy/comfy_types/README.md:1-50
Workflow Processing
File Operations
Workflows can be loaded from multiple formats:
- PNG files with embedded workflow data
- WebP images
- FLAC audio files
- JSON workflow files
Dragging a generated PNG onto the webpage automatically extracts the full workflow including seeds.
Sources: README.md:1-50
Dynamic Prompts
The execution engine supports dynamic prompt syntax:
| Syntax | Description |
|---|---|
| (text:1.2) | Increase emphasis (1.2x) |
| (text:0.8) | Decrease emphasis (0.8x) |
| {wild\|card\|test} | Random selection |
| \( | Escape parentheses |
| \{ | Escape braces |
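The (text:weight) emphasis syntax can be recognized with a small regex. This is a sketch of the notation only, not ComfyUI's actual prompt parser:

```python
import re

# Matches unescaped "(text:1.2)" spans; escaped \( parentheses are literal.
EMPHASIS = re.compile(r"(?<!\\)\((?P<text>[^():]+):(?P<weight>[0-9.]+)\)")

def parse_emphasis(prompt: str) -> list[tuple[str, float]]:
    return [(m["text"], float(m["weight"])) for m in EMPHASIS.finditer(prompt)]

print(parse_emphasis("a (cozy cabin:1.2) in (fog:0.8)"))
```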
Frontend Integration
The execution engine works with frontend version management to ensure compatibility:
Version String Format
provider/repository@version
Example: Comfy-Org/[email protected]
Version Pattern
^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$
Sources: app/frontend_management.py:1-50
Performance Optimizations
Graph Optimization
The execution engine optimizes performance through:
- Dependency Analysis: Identifies minimum required nodes
- Caching: Stores intermediate computation results
- Incremental Updates: Skips unchanged graph portions
- Lazy Evaluation: Only computes when outputs are needed
Parallel Execution
While nodes within the same dependency level may have execution order constraints, the engine is designed to support parallel execution where possible.
Error Handling
The execution engine provides graceful error handling:
- Invalid paths return appropriate HTTP status codes (400, 404)
- Missing requirements are logged with installation instructions
- The system can continue operating even if optional components are unavailable
Sources: app/frontend_management.py:1-50
Sources: README.md:1-50
Graph Management
Related topics: Execution Engine
Overview
Graph Management is a core system in ComfyUI that handles the creation, execution, caching, and manipulation of node-based computational graphs. The system orchestrates how nodes are executed, how workflows are processed, and how subgraphs are managed across the application. ComfyUI's node graph interface enables users to experiment and create complex Stable Diffusion workflows without needing to code, making graph management essential for both the UI layer and the execution engine.
The graph management system encompasses several interconnected components: the execution engine that processes node graphs, subgraph management for reusable workflow components, node replacement for runtime optimizations, and type hinting infrastructure for node development. Only parts of the graph that have an output with all the correct inputs will be executed, and only parts that change from each execution to the next will be re-executed, significantly optimizing performance for iterative workflows.
Core Architecture
graph TD
A[User Workflow] --> B[Graph Execution Engine]
B --> C[Node Execution]
B --> D[Subgraph Manager]
B --> E[Node Replace Manager]
C --> F[Graph Utils]
D --> G[Custom Node Subgraphs]
D --> H[Blueprint Subgraphs]
E --> I[Registered Replacements]
F --> J[Graph Optimization]

Subgraph Management
Purpose and Scope
The Subgraph Manager handles the registration, loading, and lifecycle of reusable workflow components called subgraphs. Subgraphs are self-contained node definitions stored as JSON files that can be imported and used within larger workflows. This system enables code modularity and reuse, allowing custom node developers to package complex node arrangements as single, reusable units.
The manager supports two distinct sources of subgraphs:
| Source | Description | Path Location |
|---|---|---|
custom_node | Subgraphs bundled with custom node extensions | <custom_node_dir>/subgraphs/<name>.json |
templates | Built-in workflow templates | blueprints/ directory |
Data Models
#### Source Enum
class Source:
    custom_node = "custom_node"
    templates = "templates"
#### SubgraphEntry Structure
| Field | Type | Description |
|---|---|---|
| source | str | Source identifier - custom_node or templates |
| path | str | Relative path of the subgraph file |
| name | str | Name of the subgraph file (without extension) |
| info | CustomNodeSubgraphEntryInfo | Additional metadata (node pack name for custom nodes) |
| data | str | Raw JSON content of the subgraph |
Sources: app/subgraph_manager.py:1-45
#### CustomNodeSubgraphEntryInfo
class CustomNodeSubgraphEntryInfo(TypedDict):
    node_pack: str
    """Node pack name."""
Caching Strategy
The Subgraph Manager implements a caching mechanism to avoid redundant filesystem operations:
class SubgraphManager:
    def __init__(self):
        self.cached_custom_node_subgraphs: dict[str, SubgraphEntry] | None = None
        self.cached_blueprint_subgraphs: dict[str, SubgraphEntry] | None = None
The cache is invalidated when force_reload=True is passed to the retrieval methods, enabling refresh during custom node reload scenarios.
Entry Generation
Each subgraph entry is assigned a unique identifier generated via SHA-256 hash:
def _create_entry(self, file: str, source: str, node_pack: str) -> tuple[str, SubgraphEntry]:
    """Create a subgraph entry from a file path. Expects normalized path (forward slashes)."""
    entry_id = hashlib.sha256(f"{source}{file}".encode()).hexdigest()
    entry: SubgraphEntry = {
        "source": source,
        "name": os.path.splitext(os.path.basename(file))[0],
        "path": file,
        ...
    }
Sources: app/subgraph_manager.py:57-70
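Because the id is just the SHA-256 of the source concatenated with the normalized path, it can be reproduced outside the manager (the example file path is illustrative):

```python
import hashlib

def subgraph_entry_id(source: str, file: str) -> str:
    # Same scheme as _create_entry: sha256 over source + forward-slash path.
    return hashlib.sha256(f"{source}{file}".encode()).hexdigest()

eid = subgraph_entry_id("custom_node", "my_pack/subgraphs/upscale.json")
print(len(eid), eid[:8])  # 64 hex chars, deterministic per (source, path)
```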
REST API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /global_subgraphs | GET | Returns all subgraphs with optional data stripping |
| /global_subgraphs/{id} | GET | Returns a specific subgraph by its SHA-256 ID |
The get_all_subgraphs method merges results from both custom nodes and blueprints:
async def get_all_subgraphs(self, loadedModules, force_reload=False):
    """Get all subgraphs from all sources (custom nodes and blueprints)."""
    custom_node_subgraphs = await self.get_custom_node_subgraphs(loadedModules, force_reload)
    blueprint_subgraphs = await self.get_blueprint_subgraphs(force_reload)
    return {**custom_node_subgraphs, **blueprint_subgraphs}
Node Replacement Management
Purpose
The Node Replace Manager registers runtime node substitutions that occur during graph execution. This system enables custom nodes to declare that certain node types should be replaced with alternative implementations, facilitating backward compatibility, optimization, and feature expansion without modifying existing workflows.
Registration Interface
class NodeReplaceManager:
    """Manages node replacement registrations."""

    def __init__(self):
        self._replacements: dict[str, list[NodeReplace]] = {}

    def register(self, node_replace: NodeReplace):
        """Register a node replacement mapping.

        Idempotent: if a replacement with the same (old_node_id, new_node_id)
        is already registered, the duplicate is ignored. This prevents stale
        entries from accumulating when custom nodes are reloaded in the same
        process (e.g. via ComfyUI-Manager).
        """
Sources: app/node_replace_manager.py:25-40
Idempotent Registration
The registration process is designed to be idempotent, preventing duplicate entries when custom nodes are reloaded:
existing = self._replacements.setdefault(node_replace.old_node_id, [])
for entry in existing:
    if entry.new_node_id == node_replace.new_node_id:
        logging.debug(
            "Node replacement %s -> %s already registered, ignoring duplicate.",
            ...
        )
This design prevents stale entries from accumulating during custom node reloads triggered by ComfyUI-Manager.
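The registration behavior described above can be condensed into a self-contained sketch; the NodeReplace dataclass here is a minimal stand-in for the real type:

```python
from dataclasses import dataclass

@dataclass
class NodeReplace:
    old_node_id: str
    new_node_id: str

class NodeReplaceManager:
    def __init__(self):
        self._replacements: dict[str, list[NodeReplace]] = {}

    def register(self, node_replace: NodeReplace) -> None:
        existing = self._replacements.setdefault(node_replace.old_node_id, [])
        for entry in existing:
            if entry.new_node_id == node_replace.new_node_id:
                return  # duplicate from a custom-node reload: ignore
        existing.append(node_replace)

mgr = NodeReplaceManager()
mgr.register(NodeReplace("OldSampler", "NewSampler"))
mgr.register(NodeReplace("OldSampler", "NewSampler"))  # no-op
print(len(mgr._replacements["OldSampler"]))  # one entry despite two calls
```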
Node Type System
IO Types
ComfyUI provides a standardized type system through the IO enum for node input/output definitions:
| Type | Value | Description |
|---|---|---|
| ANY | "*" | Accepts any type |
| NUMBER | "FLOAT,INT" | Numeric values |
| PRIMITIVE | "STRING,FLOAT,INT,BOOLEAN" | Basic data types |
Sources: comfy/comfy_types/README.md
ComfyNodeABC Base Class
The abstract base class provides type-hinting and autocomplete support for node developers:
class ExampleNode(ComfyNodeABC):
    @classmethod
    def INPUT_TYPES(s) -> InputTypeDict:
        return {"required": {}}
Graph Execution Model
Execution Optimization
ComfyUI's graph execution follows specific rules that optimize performance:
- Complete Input Requirement: Only parts of the graph that have an output with all the correct inputs will be executed.
- Incremental Execution: Only parts of the graph that change from each execution to the next will be executed. If you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the part that depends on it will be executed.
This model significantly reduces computational overhead for iterative workflows where users make incremental adjustments.
Workflow Serialization
Workflows can be saved and loaded as JSON files, enabling persistence and sharing of node graph configurations. Dragging a generated PNG on the webpage or loading one will give the full workflow including seeds that were used to create it, maintaining reproducibility.
Node Struct Operations
NodeStruct Definition
class NodeStruct(TypedDict):
    inputs: dict[str, str | int | float | bool | tuple[str, int]]
    class_type: str
    _meta: dict[str, str]
Copy Operations
The copy_node_struct function creates modified copies for graph manipulation:
def copy_node_struct(node_struct: NodeStruct, empty_inputs: bool = False) -> NodeStruct:
    new_node_struct = node_struct.copy()
    if empty_inputs:
        new_node_struct["inputs"] = {}
    else:
        new_node_struct["inputs"] = node_struct["inputs"].copy()
    new_node_struct["_meta"] = node_struct["_meta"].copy()
    return new_node_struct
Sources: app/node_replace_manager.py:16-25
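A quick check of copy_node_struct's copy semantics: mutating the copy's inputs must not leak into the original. The function is repeated here (with simplified type annotations) so the example is self-contained:

```python
from typing import TypedDict

class NodeStruct(TypedDict):
    inputs: dict
    class_type: str
    _meta: dict

def copy_node_struct(node_struct: NodeStruct, empty_inputs: bool = False) -> NodeStruct:
    new_node_struct = node_struct.copy()
    if empty_inputs:
        new_node_struct["inputs"] = {}
    else:
        new_node_struct["inputs"] = node_struct["inputs"].copy()
    new_node_struct["_meta"] = node_struct["_meta"].copy()
    return new_node_struct

original: NodeStruct = {"inputs": {"seed": 42}, "class_type": "KSampler", "_meta": {"title": "Sampler"}}
clone = copy_node_struct(original)
clone["inputs"]["seed"] = 7   # the copied inputs dict does not leak into the original
print(original["inputs"]["seed"], clone["inputs"]["seed"])
```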
Related Components
| Component | File Path | Purpose |
|---|---|---|
| Graph Execution | comfy_execution/graph.py | Core graph execution engine |
| Graph Utilities | comfy_execution/graph_utils.py | Graph manipulation helpers |
| Node Helpers | node_helpers.py | Common node development utilities |
| Node Typing | comfy/comfy_types/node_typing.py | Type definitions for nodes |
| User Manager | app/user_manager.py | User data and file operations |
Best Practices
Node Development
- Use ComfyNodeABC as the base class for custom nodes to leverage type-hinting
- Properly define INPUT_TYPES with correct type annotations
- Register node replacements idempotently to support hot-reloading
Workflow Optimization
- Structure workflows to minimize dependencies between unchanged sections
- Use subgraphs for reusable workflow patterns
- Leverage the incremental execution model by making changes at graph endpoints
Custom Node Packaging
- Place subgraphs in the designated subgraphs/ directory within custom node packages
- Use the node pack name in CustomNodeSubgraphEntryInfo for proper namespacing
- Follow JSON format for subgraph definition files
Sources: app/subgraph_manager.py:1-45
Model Loading and Detection
Related topics: Diffusion Models, Memory Management
The source files required to document the Model Loading and Detection system are not included in this pack:
- comfy/model_detection.py
- comfy/model_management.py
- comfy/model_patcher.py
- comfy/model_base.py
- comfy/supported_models.py
- comfy/lora.py
- folder_paths.py
The available context covers only README.md, app/user_manager.py, app/model_manager.py (partial), app/frontend_management.py, and comfy/comfy_types/README.md, so accurate technical details, code citations, and architecture diagrams for this system cannot be provided here. This page can be completed once the repository is re-analyzed with the files listed above.
Source: https://github.com/Comfy-Org/ComfyUI / Human Manual
Diffusion Models
Related topics: Model Loading and Detection, Text Processing and Encoders
Overview
Diffusion models in ComfyUI are probabilistic generative models that learn to reverse a forward diffusion process. By gradually denoising random noise through a learned reverse process, these models generate high-quality images, videos, and audio from latent representations.
ComfyUI implements a modular architecture supporting multiple diffusion model families:
| Model Family | Domain | Primary File |
|---|---|---|
| Stable Diffusion | Image | comfy/ldm/modules/diffusionmodules/model.py |
| Stable Diffusion XL | Image | comfy/ldm/modules/diffusionmodules/openaimodel.py |
| Flux | Image | comfy/ldm/flux/model.py |
| Wan | Video | comfy/ldm/wan/model.py |
| Hunyuan Video | Video | comfy/ldm/hunyuan_video/model.py |
| CogVideo | Video | comfy/ldm/cogvideo/model.py |
Architecture
Core Diffusion Module Structure
graph TD
A[Latent Input] --> B[Diffusion Model]
B --> C[UNet Architecture]
C --> D[Time Embedding]
C --> E[Residual Blocks]
C --> F[Attention Layers]
D --> G[Denoised Output]
E --> G
F --> G
H[Sampler] --> I[Noise Schedule]
I --> B
G --> J[VAE Decode]
J --> K[Final Output]

Supported Model Types
ComfyUI natively supports state-of-the-art open-source diffusion models across multiple domains:
#### Image Generation Models
| Model Type | Description | Documentation Link |
|---|---|---|
| Stable Diffusion 1.5 | Latent diffusion model for image generation | Examples |
| Stable Diffusion XL | Enhanced SD with improved quality | Included in core |
| SDXL Turbo / LCM | Fast convergence models | LCM Examples |
| Stable Diffusion 3 / Flux | MM-DiT architecture for superior quality | Flux Examples |
| Hunyuan DiT | Tencent's diffusion transformer | Included in core |
| Ollin | Custom high-quality diffusion | Available via community |
| Wan | Wan 2.1 and Wan 2.2 video models | Wan Examples |
| HiDream | Advanced image generation | HiDream Examples |
Sources: README.md
#### Video Generation Models
| Model Type | Description | Documentation Link |
|---|---|---|
| Stable Video Diffusion | Frame interpolation and video generation | Video Examples |
| Mochi | High-quality video synthesis | Mochi Examples |
| LTX-Video | Lightweight video diffusion | LTX Examples |
| Hunyuan Video | Tencent's video generation | Hunyuan Examples |
| Wan 2.1/2.2 | Comprehensive video models | Wan Examples |
Sources: README.md
#### Audio Models
| Model Type | Description |
|---|---|
| Stable Audio | Audio generation and synthesis |
Sources: README.md
#### Image Editing Models
| Model Type | Description | Link |
|---|---|---|
| Omnigen 2 | Unified image editing | Examples |
| Flux Kontext | In-context image editing | Examples |
| HiDream E1.1 | Advanced editing capabilities | Examples |
| Qwen Image Edit | Multi-modal editing | Examples |
Sources: README.md
Model Loading Architecture
Base Diffusion Model Files
| File | Purpose |
|---|---|
| comfy/ldm/modules/diffusionmodules/model.py | Core SD1.5/SD2.x diffusion model implementation |
| comfy/ldm/modules/diffusionmodules/openaimodel.py | SDXL and newer architecture variants |
| comfy/ldm/flux/model.py | Flux/MM-DiT architecture implementation |
| comfy/ldm/wan/model.py | Wan video diffusion model |
| comfy/ldm/hunyuan_video/model.py | Hunyuan video diffusion |
| comfy/ldm/cogvideo/model.py | CogVideo model implementation |
Model Loading Workflow
graph LR
A[Model Checkpoint] --> B[Model Loader Node]
B --> C[Load State Dict]
C --> D[Architecture Detection]
D --> E{Router}
E -->|SD 1.5/2.x| F[diffusionmodules/model.py]
E -->|SDXL| G[diffusionmodules/openaimodel.py]
E -->|Flux| H[flux/model.py]
E -->|Video| I[wan/hunyuan/cogvideo/model.py]

Sampling System
Sampler Implementation
The sampling system is implemented in comfy/samplers.py and comfy/sample.py.
| Component | File | Function |
|---|---|---|
| SamplerFactory | comfy/samplers.py | Creates sampler instances |
| KSampler | comfy/samplers.py | Main sampling loop implementation |
| CFGGuider | comfy/samplers.py | Classifier-free guidance implementation |
| Sampler | comfy/sample.py | Orchestrates the sampling process |
Sampling Parameters
| Parameter | Type | Description |
|---|---|---|
| steps | int | Number of denoising steps |
| cfg | float | Classifier-free guidance scale |
| sampler_name | str | Sampler algorithm (e.g., euler, dpmpp_2m) |
| scheduler | str | Noise schedule type |
| denoise | float | Denoising strength (0.0-1.0) |
Available Samplers
ComfyUI supports multiple sampling algorithms:
| Sampler Category | Algorithms |
|---|---|
| Euler Family | euler, euler_ancestral, euler_a |
| DPM++ | dpmpp_2m, dpmpp_2m_karras, dpmpp_sde, dpmpp_sde_karras |
| DDIM | ddim |
| UniPC | unipc |
| LCM | lcm (for LCM/SDXL-Turbo models) |
Noise Schedules
| Scheduler | Description |
|---|---|
| normal | Standard noise schedule |
| karras | Optimized schedule for better quality |
| exponential | Exponential decay schedule |
| simple | Simplified schedule |
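The karras scheduler corresponds to the schedule from Karras et al. (2022), which interpolates noise levels in sigma^(1/rho) space so steps concentrate at low noise. A sketch with illustrative sigma bounds (the default values here are assumptions for demonstration, not ComfyUI's):

```python
def karras_sigmas(n: int, sigma_min: float = 0.03, sigma_max: float = 14.6,
                  rho: float = 7.0) -> list[float]:
    # Interpolate linearly in sigma^(1/rho) space, then raise back to
    # the rho power: this front-loads large sigmas and refines small ones.
    ramp = [i / (n - 1) for i in range(n)]
    min_inv, max_inv = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]

sigmas = karras_sigmas(10)
print(round(sigmas[0], 2), round(sigmas[-1], 2))  # starts at sigma_max, ends at sigma_min
```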
Advanced Features
Textual Inversion
ComfyUI supports textual inversion embeddings for style and concept customization.
Sources: README.md
LoRA Support
| LoRA Type | Description |
|---|---|
| Regular LoRA | Standard low-rank adaptation |
| LoCon | LoRA extended to convolutional layers |
| LoHa | Low-rank Hadamard product adaptation |
Sources: README.md
Hypernetworks
Custom hypernetworks can be loaded and applied to modify model behavior.
Sources: README.md
ControlNet and T2I-Adapter
Structural guidance for diffusion models through:
| Type | Description |
|---|---|
| ControlNet | Conditioning via additional neural networks |
| T2I-Adapter | Lightweight adapters for structure guidance |
Sources: README.md
Workflow Composition
Node Graph Architecture
graph TD
A[Load Checkpoint] --> B[CLIP Text Encode]
B --> C[KSampler]
A --> D[VAE Encode]
D --> C
C --> E[VAE Decode]
E --> F[Save Image]
G[Positive Prompt] --> B
H[Negative Prompt] --> B

Example Workflows
| Workflow | Purpose | Link |
|---|---|---|
| txt2img | Text-to-image generation | Examples |
| img2img | Image-to-image transformation | Included in core |
| Hires Fix | Two-pass upscaling | Hires Fix |
| Inpainting | Selective regeneration | Inpaint |
| Area Composition | Multi-region composition | Area Composition |
| Upscale | Super-resolution | Upscale Models |
| Model Merging | Combine model weights | Model Merging |
| GLIGEN | Grounded generation | GLIGEN |
Sources: README.md
Performance Optimization
Latent Preview with TAESD
ComfyUI provides real-time preview capabilities using TAESD (Tiny AutoEncoder for Stable Diffusion):
| Feature | Description |
|---|---|
| Low-res Preview | Default fast latent preview |
| TAESD Preview | High-quality previews |
| --preview-method | CLI flag to select preview method |
To enable TAESD previews:
- Download the decoder files from the taesd repository
- Place the files in the models/vae_approx directory
- Launch ComfyUI with --preview-method taesd
Sources: README.md
GPU Support
| Platform | Installation Command |
|---|---|
| NVIDIA (CUDA 12.1) | pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121 |
| NVIDIA (CUDA 12.4) | pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124 |
| NVIDIA (CUDA 12.6) | pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126 |
| AMD (ROCm) | pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1 |
| Intel (XPU) | pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu |
| Apple Silicon | Install PyTorch nightly per Apple Developer Guide |
Sources: README.md
Memory Efficient Attention
For AMD GPUs with ROCm, experimental memory efficient attention can be enabled:
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
For potential speed improvements:
PYTORCH_TUNABLEOP_ENABLED=1 python main.py
Sources: README.md
Execution Model
Partial Graph Execution
ComfyUI's execution engine optimizes diffusion model runs:
Only parts of the graph that have an output with all the correct inputs will be executed.
Only parts of the graph that change from each execution to the next will be executed. If you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the part that depends on it will be re-executed.
Sources: README.md
Execution Flow
graph TD
A[Submit Workflow] --> B[Analyze Dependencies]
B --> C[Identify Executable Nodes]
C --> D[Execute Required Nodes]
D --> E[Cache Results]
E --> F[Return Outputs]
G[Submit Same Workflow] --> H{Cached?}
H -->|Yes| I[Skip Execution]
H -->|No| J[Execute Changed Nodes]
I --> F
J --> K[Update Cache]
K --> F
API Integration
API Nodes
ComfyUI includes optional API nodes for accessing paid models from external providers through the official Comfy API.
To disable API nodes:
python main.py --disable-api-nodes
Sources: README.md
Offline Operation
ComfyUI works fully offline for core functionality:
Works fully offline: core will never download anything unless you want it to.
Sources: README.md
Release and Versioning
ComfyUI follows a structured release cycle:
| Release Type | Frequency | Description |
|---|---|---|
| Major Stable | ~Every 2 weeks | New stable versions (e.g., v0.7.0) |
| Patch | As needed | Backported fixes for stable releases |
| Nightly | Daily | Cutting-edge updates from master branch |
Commits outside of the stable release tags may be very unstable and break many custom nodes.
Sources: README.md
See Also
Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
Text Processing and Encoders
Related topics: Diffusion Models
Overview
Text processing and encoding in ComfyUI provides the mechanism to convert human-readable text prompts into numerical representations (embeddings) that can be consumed by diffusion models. This system supports various model architectures including SD1.x, SDXL, Flux, and modern multimodal models.
Sources: README.md
Architecture
graph TD
A[User Text Prompt] --> B[Text Encoding Nodes]
B --> C[CLIPTextEncode]
B --> D[CLIP Text Encode Hires]
B --> E[Model-Specific Encoders]
C --> F[CLIP Models]
E --> G[Flux Encoder]
E --> H[T5 Encoder]
E --> I[Llama Encoder]
F --> J[Embedding Tensors]
G --> J
H --> J
I --> J
J --> K[Diffusion Model]
CLIP Models
The comfy/clip_model.py module provides the foundational CLIP model implementation used across different model variants.
Sources: comfy/clip_model.py
SD1 CLIP
The SD1 CLIP implementation (comfy/sd1_clip.py) handles text encoding for Stable Diffusion 1.x models.
Sources: comfy/sd1_clip.py
SDXL CLIP
The SDXL CLIP implementation (comfy/sdxl_clip.py) extends text encoding capabilities for SDXL models with additional prompt handling.
Sources: comfy/sdxl_clip.py
Text Encoders Module
The comfy/text_encoders/ directory contains specialized encoders for modern model architectures.
Sources: comfy/text_encoders/flux.py, comfy/text_encoders/t5.py, comfy/text_encoders/llama.py
Flux Encoder
Handles text encoding for Flux models, typically combining CLIP and T5 encodings.
T5 Encoder
Implements T5-based text encoding for models requiring transformer-based text processing.
Llama Encoder
Provides Llama-based text encoding for advanced text understanding capabilities.
Embeddings System
ComfyUI supports custom embeddings stored in the models/embeddings directory.
Sources: README.md
Using Custom Embeddings
Embeddings can be referenced in the CLIPTextEncode node using the following syntax:
embedding:embedding_filename.pt
The .pt extension can be omitted when specifying embeddings.
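As a concrete example, an embedding saved as my_style.pt (a hypothetical filename) would be referenced inside a prompt string like this:

```python
# "my_style" is a hypothetical embedding filename placed in models/embeddings;
# the .pt extension is optional in the prompt syntax described above.
prompt = "a portrait, embedding:my_style, soft lighting"
```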
Model Integration
Text encoders are integrated into the broader model system through comfy/sd.py, which coordinates between different model components and their respective encoders.
Sources: comfy/sd.py
Supported Models
| Model Family | Text Encoder(s) | Notes |
|---|---|---|
| SD 1.x | CLIP | Standard text encoding |
| SDXL | CLIP | Dual CLIP support |
| Flux | CLIP + T5 | Combined encoding approach |
| HunyuanDiT | Custom | Model-specific implementation |
Text Encoding Workflow
graph LR
A1[Positive Prompt] --> B[CLIPTextEncode]
A2[Negative Prompt] --> C[CLIPTextEncode]
B --> D[Positive Embeddings]
C --> E[Negative Embeddings]
D --> F[KSampler]
E --> F
F --> G[Image Generation]
Node Types
CLIPTextEncode
The primary node for encoding text prompts into embeddings.
Input Parameters:
- text: The text prompt to encode
- clip: The CLIP model to use for encoding
Output:
CONDITIONING: The encoded text representation
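In ComfyUI's API-format workflow JSON, a CLIPTextEncode node wires its clip input to an upstream loader's output. The node ids, checkpoint filename, and the loader's output slot index below are illustrative assumptions, not values from this page:

```python
# Fragment of an API-format workflow (a Python dict standing in for JSON).
# Node ids, the checkpoint name, and output slot 1 for the loader's CLIP
# output are assumptions for illustration.
workflow = {
    "4": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "v1-5-pruned.safetensors"},
    },
    "6": {
        "class_type": "CLIPTextEncode",
        # "clip" references node "4", output slot 1
        "inputs": {"text": "a photo of a cat", "clip": ["4", 1]},
    },
}
```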
Specialized Encoding Nodes
| Node | Purpose | Use Case |
|---|---|---|
| CLIP Text Encode Hires | High-resolution aware encoding | Multi-pass workflows |
| Model-Specific Encode | Architecture-specific handling | Flux, SDXL, etc. |
Best Practices
- Prompt Formatting: Use proper syntax for weight adjustments (e.g., (text:1.2))
- Embedding Loading: Place custom embeddings in models/embeddings
- Model Matching: Ensure the text encoder matches the generation model
- Batch Processing: Consider CLIP sequence length limitations
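A minimal sketch of how the (text:1.2) weighting syntax can be parsed. ComfyUI's actual prompt tokenizer (in comfy/sd1_clip.py) also handles nesting and escaping, which this toy regex deliberately ignores:

```python
import re

# Toy parser: splits a prompt into (text, weight) spans, where top-level
# "(text:weight)" groups carry their weight and everything else gets 1.0.
# Unlike ComfyUI's real parser, this does not handle nesting or escapes.
def parse_weights(prompt):
    out, pos = [], 0
    for m in re.finditer(r"\(([^():]+):([0-9.]+)\)", prompt):
        if m.start() > pos:
            out.append((prompt[pos:m.start()], 1.0))
        out.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        out.append((prompt[pos:], 1.0))
    return out
```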
Related Components
- Model Management: app/model_manager.py handles loading and caching of text encoder models
- Type System: comfy/comfy_types/ provides type hints for node development, including IO types for text processing
Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
Memory Management
Related topics: Model Loading and Detection, Execution Engine
ComfyUI implements a smart memory management system that enables efficient execution of large AI models on hardware with limited VRAM. This system is fundamental to ComfyUI's ability to run complex workflows on consumer-grade GPUs.
Overview
The memory management subsystem in ComfyUI handles the lifecycle of model data in GPU and system memory. Its primary objectives include:
- Automatic model offloading: Dynamically moving models between GPU VRAM and system RAM
- VRAM optimization: Enabling execution on GPUs with as little as 1GB of VRAM
- Execution caching: Storing partial execution results to avoid redundant computation
- Memory cleanup: Properly releasing resources when models are no longer needed
Sources: README.md
Architecture Overview
graph TD
A[Workflow Execution] --> B[Memory Manager]
B --> C{VRAM Available?}
C -->|Yes| D[Load Model to GPU]
C -->|No| E[Smart Offloading]
E --> F[Partial GPU Loading]
F --> G[System RAM Swap]
D --> H[Execute Nodes]
G --> H
H --> I[Cache Results]
I --> J[Memory Cleanup]
J --> K[Free VRAM]
Key Memory Management Features
Smart Offloading
ComfyUI can automatically run large models on GPUs with limited VRAM through intelligent offloading strategies. When a model exceeds available VRAM, the system selectively keeps portions of the model in GPU memory while swapping other components to system RAM.
Low VRAM Support
ComfyUI supports execution on GPUs with as little as 1GB of VRAM. This is achieved through:
| VRAM Level | Strategy |
|---|---|
| 1GB+ | Full offloading with sequential layer execution |
| 4GB+ | Partial offloading with larger batch sizes |
| 8GB+ | Minimal offloading, models stay loaded |
| 16GB+ | Multiple models can stay in memory simultaneously |
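The table can be expressed as a simple threshold lookup. The cutoffs below mirror the table above, not ComfyUI's actual heuristics in comfy/model_management.py, which also weigh model size and launch flags:

```python
# Table-driven sketch of VRAM strategy selection (illustrative thresholds).
def vram_strategy(vram_gb: float) -> str:
    if vram_gb >= 16:
        return "keep multiple models resident"
    if vram_gb >= 8:
        return "minimal offloading, models stay loaded"
    if vram_gb >= 4:
        return "partial offloading, larger batches"
    return "full offloading, sequential layer execution"
```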
Execution Optimization
The system implements intelligent execution optimization where:
- Only changed graph segments execute - If you submit the same graph twice, only the first execution runs
- Dependency tracking - Only parts of the graph that depend on changed nodes are re-executed
- Partial graph execution - Only graph segments with all correct inputs are executed
Sources: README.md
Model Loading Strategies
ComfyUI supports multiple model formats and loading strategies:
Supported Model Formats
| Format | Description | Safety |
|---|---|---|
| .safetensors | Safe tensor format, recommended | ✅ Safe |
| .ckpt | Checkpoint files | ⚠️ Pickle-based |
| .pt / .pth | PyTorch state dicts | ⚠️ Pickle-based |
Memory-Efficient Loading
The system implements safe loading for all model formats, preventing arbitrary code execution from malicious model files.
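Format-dependent loading can be sketched as extension dispatch. The labels returned below stand in for a pickle-free safetensors reader and a restricted PyTorch loader (e.g. torch.load with weights_only=True); the exact mechanism inside ComfyUI may differ:

```python
# Extension-based dispatch sketch. ".safetensors" is a plain tensor container
# with no pickle, so it cannot execute code on load; pickle-based formats
# should be opened with code execution disabled.
def pick_loader(path: str) -> str:
    if path.endswith(".safetensors"):
        return "safetensors"          # pickle-free tensor reader
    if path.endswith((".ckpt", ".pt", ".pth")):
        return "torch_weights_only"   # pickle format, code execution blocked
    raise ValueError(f"unsupported model format: {path}")
```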
GPU Memory Options
Command Line Options
ComfyUI provides several command-line options for memory management:
# CPU-only execution (slowest, works without GPU)
python main.py --cpu
# Force specific GPU device
python main.py --device cuda:0
Preview Method Configuration
For latent preview generation, ComfyUI supports different preview methods that vary in memory usage:
| Method | Quality | Memory Usage | Description |
|---|---|---|---|
| auto | Low | Minimal | Default fast latent preview |
| taesd | High | Low | TAESD decoder for high-quality previews |
To enable high-quality previews:
# Download TAESD decoder files to models/vae_approx/
# Then launch with:
python main.py --preview-method taesd
Sources: README.md
Memory Management Classes
Based on the module structure, the memory management system consists of several key components:
classDiagram
class MemoryManager {
+manage_vram()
+offload_model()
+load_model()
}
class ModelManager {
+register_model()
+get_model()
+unload_model()
}
class PinnedMemory {
+allocate_pinned()
+transfer_to_device()
+free_pinned()
}
class PixelSpaceConverter {
+to_latent()
+to_pixel()
+convert_tensor()
}
Module Responsibilities
| Module | Purpose |
|---|---|
| memory_management.py | Core VRAM management and model placement logic |
| model_management.py | Model lifecycle, registration, and caching |
| pinned_memory.py | Pinned memory allocation for efficient CPU-GPU transfers |
| pixel_space_convert.py | Conversion between pixel and latent image spaces |
Execution Flow with Memory Management
sequenceDiagram
participant User
participant Workflow
participant MemoryManager
participant ModelCache
participant GPU
participant SystemRAM
User->>Workflow: Submit Workflow
Workflow->>MemoryManager: Request Model
MemoryManager->>ModelCache: Check Cache
alt Model in Cache
ModelCache-->>MemoryManager: Return Model Ref
else Model Not Cached
MemoryManager->>GPU: Check VRAM
alt Sufficient VRAM
GPU-->>MemoryManager: OK
MemoryManager->>GPU: Load Model
else Insufficient VRAM
MemoryManager->>SystemRAM: Offload Parts
MemoryManager->>GPU: Load Partial Model
end
end
MemoryManager-->>Workflow: Model Ready
Workflow->>GPU: Execute Nodes
GPU-->>Workflow: Results
Best Practices
- Close unused workflows - Free memory for new models
- Use .safetensors format - Safer and often faster loading
- Batch similar operations - Reduces model loading/unloading cycles
- Monitor VRAM usage - Use system tools to track memory consumption
Configuration Files
ComfyUI supports model path configuration through extra_model_paths.yaml:
# Example based on extra_model_paths.yaml.example shipped in the repository
# root; the paths below are placeholders
a111:
  base_path: /path/to/stable-diffusion-webui/
  checkpoints: models/Stable-diffusion
  loras: models/Lora
This allows sharing model directories with other Stable Diffusion installations, reducing duplicate storage.
Related Documentation
- GPU Requirements - Hardware recommendations
- Model Installation - Setting up models
- Performance Tuning - Optimization tips
Sources: README.md
Doramagic Pitfall Log
Doramagic extracted 7 source-linked risk signals. Review them before installing or handing real data to the project.
1. Project risk: Project risk needs validation
- Severity: medium
- Finding: Project risk is backed by a source signal: Project risk needs validation. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: identity.distribution | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | repo=comfyui; install=comfy-cli
2. Capability assumption: README/documentation is current enough for a first validation pass.
- Severity: medium
- Finding: README/documentation is current enough for a first validation pass.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: capability.assumptions | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | README/documentation is current enough for a first validation pass.
3. Maintenance risk: Maintainer activity is unknown
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | last_activity_observed missing
4. Security or permission risk: no_demo
- Severity: medium
- Finding: no_demo
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: downstream_validation.risk_items | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | no_demo; severity=medium
5. Security or permission risk: no_demo
- Severity: medium
- Finding: no_demo
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: risks.scoring_risks | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | no_demo; severity=medium
6. Maintenance risk: issue_or_pr_quality=unknown
- Severity: low
- Finding: issue_or_pr_quality=unknown.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | issue_or_pr_quality=unknown
7. Maintenance risk: release_recency=unknown
- Severity: low
- Finding: release_recency=unknown.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | release_recency=unknown
Source: Doramagic discovery, validation, and Project Pack records
Community Discussion Evidence
Doramagic exposes project-level community discussion separately from official documentation. Review these links before using ComfyUI with real data or production workflows.
- lora key not loaded anima - github / github_issue
- New Memory Managment is a Disaster for me. OOM on my Lora Trainer WF whe - github / github_issue
- Launch args to run LTX? - github / github_issue
- Missing "CosmosImageToVideoLatent" node - github / github_issue
- Please remove from Manager: ComfyUI-WhisperXX, it nukes the install. - github / github_issue
- Segmentation fault - github / github_issue
- impossible to drag and drop the workflow to the comfyui web interface. - github / github_issue
- RuntimeError: Tensors must have same number of dimensions: got 4 and 3 - github / github_issue
- v0.20.1 - github / github_release
- v0.19.3 - github / github_release
- v0.19.2 - github / github_release
- v0.19.1 - github / github_release
Source: Project Pack community evidence and pitfall evidence