# ComfyUI Project Documentation (https://github.com/Comfy-Org/ComfyUI)

Generated: 2026-05-15 19:52:25 UTC

## Table of Contents

- [Introduction to ComfyUI](#page-introduction)
- [Installation Guide](#page-installation)
- [System Architecture](#page-architecture)
- [Server System](#page-server-system)
- [Execution Engine](#page-execution-engine)
- [Graph Management](#page-graph-management)
- [Model Loading and Detection](#page-model-loading)
- [Diffusion Models](#page-diffusion-models)
- [Text Processing and Encoders](#page-text-processing)
- [Memory Management](#page-memory-management)

<a id='page-introduction'></a>

## Introduction to ComfyUI

### Related Pages

Related topics: [Installation Guide](#page-installation), [System Architecture](#page-architecture)

<details>
<summary>Relevant Source Files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
- [comfy/comfy_types/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md)
- [app/user_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)
- [app/model_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)
- [app/frontend_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)
- [api_server/routes/internal/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/routes/internal/README.md)
</details>


## Overview

ComfyUI is a powerful, modular AI creation engine designed for visual professionals who demand precise control over every model, parameter, and output. It provides a node graph-based interface that enables users to generate images, videos, 3D models, audio, and other AI-driven content with granular control over the entire generation pipeline.

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Key Characteristics

| Characteristic | Description |
|----------------|-------------|
| **Type** | AI Generation Engine |
| **Interface** | Node Graph / Visual Programming |
| **License** | Open Source |
| **Platforms** | Windows, Linux, macOS, Cloud |
| **GPU Support** | NVIDIA, AMD (ROCm), Intel, Apple Silicon, Ascend, Iluvatar |

ComfyUI natively supports the latest open-source state-of-the-art models and provides API nodes for accessing closed-source models such as Seedance, Hunyuan3D, and others through the online Comfy API.

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Core Features

### Model Support

ComfyUI provides extensive support for various AI model types:

| Model Category | Examples | Documentation Link |
|----------------|----------|---------------------|
| **Stable Diffusion** | SD 1.x, SD 2.x, SDXL, SD 3.x | [Examples](https://comfyanonymous.github.io/ComfyUI_examples/) |
| **ControlNet/T2I-Adapter** | Various preprocessors | [ControlNet Guide](https://comfyanonymous.github.io/ComfyUI_examples/controlnet/) |
| **LoRA/LyCORIS** | Regular, locon, loha variants | [LoRA Guide](https://comfyanonymous.github.io/ComfyUI_examples/lora/) |
| **Upscaling Models** | ESRGAN, SwinIR, Swin2SR | [Upscale Guide](https://comfyanonymous.github.io/ComfyUI_examples/upscale_models/) |
| **Latent Consistency Models** | LCM models and LoRAs | [LCM Guide](https://comfyanonymous.github.io/ComfyUI_examples/lcm/) |

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Advanced Capabilities

- **Textual Inversion & Hypernetworks**: Advanced embedding techniques for custom styling
- **Area Composition**: Multi-region generation with precise control
- **Inpainting**: Both regular and inpainting-specific models supported
- **Model Merging**: Combine multiple models for unique outputs
- **Latent Previews**: Real-time latent previews, with optional TAESD decoding for higher quality
- **Workflow Export**: Save/load workflows as JSON, embed in PNG/WebP/FLAC metadata
- **Offline Operation**: Core functionality works completely offline

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## System Architecture

### High-Level Architecture

```mermaid
graph TD
    subgraph "Frontend Layer"
        UI[User Interface]
        WS[WebSocket Handler]
    end
    
    subgraph "API Layer"
        REST[REST API Routes]
        INT[Internal Routes]
    end
    
    subgraph "Core Execution Engine"
        SG[Scheduling Graph]
        EX[Execution Engine]
        NODE[Node Registry]
    end
    
    subgraph "Model Management"
        MM[Model Manager]
        LM[Loader Manager]
    end
    
    subgraph "Backend Services"
        UM[User Manager]
        FM[Frontend Manager]
    end
    
    UI <--> WS
    WS <--> REST
    REST <--> INT
    REST <--> SG
    SG <--> EX
    EX <--> NODE
    MM <--> LM
    UM <--> FM
    
    style UI fill:#e1f5fe
    style EX fill:#fff3e0
    style MM fill:#e8f5e9
```

### Node Type System

ComfyUI uses a typed node system for type-safe workflow construction. The `comfy_types` module provides abstract base classes and type hints:

```mermaid
classDiagram
    class ComfyNodeABC {
        <<abstract>>
        +INPUT_TYPES() InputTypeDict
        +FUNCTION() str
        +OUTPUT_NODE() bool
        +CATEGORY() str
        +RETURN_TYPES() tuple
    }
    
    class CheckLazyMixin {
        <<mixin>>
    }
    
    class IO {
        <<enum>>
        +ANY: "*"
        +NUMBER: "FLOAT,INT"
        +PRIMITIVE: "STRING,FLOAT,INT,BOOLEAN"
    }
    
    ComfyNodeABC <-- CheckLazyMixin
    ComfyNodeABC ..> IO : uses
```

Sources: [comfy/comfy_types/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md)
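To make the contract concrete, here is a minimal hypothetical node following this pattern. The `ScaleFloat` class, its category, and its behavior are invented for illustration; only the attribute names (`INPUT_TYPES`, `RETURN_TYPES`, `FUNCTION`, `CATEGORY`) follow the documented contract:

```python
class ScaleFloat:
    """Hypothetical example node: multiplies a float by a factor."""

    @classmethod
    def INPUT_TYPES(cls):
        # Declares the node's input sockets and widget options.
        return {
            "required": {
                "value": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}),
                "factor": ("FLOAT", {"default": 2.0}),
            }
        }

    RETURN_TYPES = ("FLOAT",)   # one output socket of type FLOAT
    FUNCTION = "scale"          # method invoked by the executor
    CATEGORY = "examples/math"  # menu placement (invented)

    def scale(self, value, factor):
        # Node functions return a tuple matching RETURN_TYPES.
        return (value * factor,)


# Mapping that ComfyUI scans when loading custom node packages.
NODE_CLASS_MAPPINGS = {"ScaleFloat": ScaleFloat}
```
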

### Execution Model

ComfyUI employs a smart execution model that optimizes workflow processing:

```mermaid
graph LR
    A[Submit Workflow] --> B{Changed?}
    B -->|First Run| C[Execute All Valid Paths]
    B -->|Unchanged| D[Skip Execution]
    B -->|Partial Change| E[Execute Changed + Dependencies]
    C --> F[Output Results]
    E --> F
    D --> F
```

**Execution Rules:**
- Only parts of the graph with all correct inputs will be executed
- Only parts that change between executions are re-run
- Submitting the same graph twice executes only the first instance

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
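The change-detection rules above can be sketched as input-hash caching: a node only re-runs when the hash of its type and resolved inputs is not already cached. This is a simplified model for illustration, not ComfyUI's actual executor:

```python
import hashlib
import json


def cache_key(node, input_values):
    # A node's result depends only on its type and resolved inputs,
    # so a hash of both serves as the re-execution check.
    blob = json.dumps({"type": node["class_type"], "in": input_values},
                      sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()


def run_graph(graph, impls, cache):
    """graph: {node_id: {"class_type": str,
                         "inputs": {name: literal | [src_id, out_slot]}}}
    impls: {class_type: callable returning a tuple of outputs}
    cache: dict persisting across submissions."""
    done = {}

    def resolve(nid):
        if nid in done:
            return done[nid]
        node = graph[nid]
        values = {}
        for name, v in node["inputs"].items():
            if isinstance(v, list):      # link: [source_node_id, output_slot]
                values[name] = resolve(v[0])[v[1]]
            else:                        # literal widget value
                values[name] = v
        key = cache_key(node, values)
        if key not in cache:             # only changed nodes re-run
            cache[key] = impls[node["class_type"]](**values)
        done[nid] = cache[key]
        return done[nid]

    for nid in graph:
        resolve(nid)
    return done
```

Submitting the same graph twice with the same `cache` dict executes nothing on the second pass, mirroring the rules listed above.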

## Installation

### Supported Platforms

| Platform | GPU Options | Installation Type |
|----------|-------------|-------------------|
| **Windows** | NVIDIA, AMD, Intel, CPU | Portable Package, Manual Install |
| **Linux** | NVIDIA, AMD (ROCm), Intel, CPU | Manual Install |
| **macOS** | Apple Silicon (M1/M2), CPU | Manual Install |
| **Cloud** | Hosted GPUs (Comfy Cloud) | Browser-based |

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Quick Start Commands

```bash
# Windows/Linux Manual Installation
pip install -r requirements.txt
python main.py

# NVIDIA GPU (Stable)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130

# NVIDIA GPU (Nightly)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu132

# AMD GPU (ROCm)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm6.1

# Apple Silicon
# Install PyTorch nightly per Apple Developer Guide
pip install -r requirements.txt
```

### ComfyUI-Manager Setup

ComfyUI-Manager provides extension management capabilities:

```bash
# Install dependencies
pip install -r manager_requirements.txt

# Enable with flags
python main.py --enable-manager
```

| Manager Flag | Description |
|--------------|-------------|
| `--enable-manager` | Enable ComfyUI-Manager |
| `--enable-manager-legacy-ui` | Use legacy manager UI |
| `--disable-manager-ui` | Keep background features only |

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## User Interface

### Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| `Ctrl+Z` / `Ctrl+Y` | Undo/Redo |
| `Ctrl+S` | Save workflow |
| `Ctrl+O` | Load workflow |
| `Ctrl+A` | Select all nodes |
| `Alt+C` | Collapse/uncollapse selected |
| `Ctrl+M` | Mute/unmute selected |
| `Ctrl+B` | Bypass selected (reconnect wires) |
| `Delete/Backspace` | Delete selected nodes |
| `Space` + Drag | Pan canvas |
| `Ctrl+Click` / `Shift+Click` | Add to selection |
| `Ctrl+C` / `Ctrl+V` | Copy/paste nodes |
| `Ctrl+Shift+V` | Paste with connections |
| `Shift+Drag` | Move multiple nodes |
| `Ctrl+D` | Load default graph |
| `Alt++` / `Alt+-` | Zoom in/out |
| `P` | Pin/unpin nodes |
| `Ctrl+G` | Group selected |
| `Double-Click` | Open node search palette |

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Preview Methods

ComfyUI supports multiple preview rendering methods:

| Method | Quality | Performance | Setup |
|--------|---------|-------------|-------|
| `auto` | Variable | Variable | Default |
| `taesd` | High | Fast | Download TAESD decoder models |

To enable high-quality previews with TAESD:

1. Download decoder files to `models/vae_approx` folder:
   - `taesd_decoder.pth`
   - `taesdxl_decoder.pth`
   - `taesd3_decoder.pth`
   - `taef1_decoder.pth`

2. Launch with preview flag:
   ```bash
   python main.py --preview-method taesd
   ```

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## API and Integration

### API Structure

ComfyUI provides a comprehensive REST API for external integrations:

```mermaid
graph TD
    EXT[External Application] -->|HTTP/REST| API[API Server]
    API -->|v2/userdata| UM[User Data Management]
    API -->|v2/modelinfo| MM[Model Info]
    API -->|v2/history| H[Execution History]
    EXT -->|WebSocket| WS[WebSocket Connection]
    WS -->|Real-time| STATUS[Execution Status]
```
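As an illustration of programmatic access, the sketch below builds a workflow-submission request with only the standard library. The `POST /prompt` endpoint and `client_id` field exist in upstream ComfyUI; the host, port, and the graph payload here are assumptions for the example:

```python
import json
import urllib.request

# Assumed local server address (ComfyUI's default port is 8188).
COMFY_URL = "http://localhost:8188"


def build_prompt_request(graph, client_id):
    """Build the POST /prompt request ComfyUI expects: a JSON body with
    the node graph under "prompt" and a caller-chosen "client_id" used
    to route WebSocket status updates back to this client."""
    body = json.dumps({"prompt": graph, "client_id": client_id}).encode()
    return urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )


def queue_prompt(graph, client_id="doc-example"):
    # Blocking call; the server replies with JSON including a prompt_id.
    with urllib.request.urlopen(build_prompt_request(graph, client_id)) as r:
        return json.loads(r.read())
```

Real-time execution status then arrives on the WebSocket connection (filtered by the same `client_id`), as shown in the diagram above.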

### Internal Routes

All routes under `/internal` are designated for internal ComfyUI use only. These routes may change at any time without notice and are not intended for external application use.

Sources: [api_server/routes/internal/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/routes/internal/README.md)

### User Data API

The user data management system provides secure file operations:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/v2/userdata` | GET | List directory contents |
| `/v2/userdata/{path}` | POST | Upload file |
| `/v2/userdata/{file}` | DELETE | Delete file |
| `/v2/userdata/{file}/move/{dest}` | POST | Move/rename file |

**Query Parameters for Listing:**
- `path`: Relative path within user's data directory
- `recurse`: Enable recursive directory listing
- `full_info`: Return detailed file information
- `split`: Return path as array split by `/`

Sources: [app/user_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)
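For illustration, a small helper that assembles a listing URL from these query parameters. The host, port, and lowercase boolean serialization are assumptions, not guarantees about the server's parsing:

```python
from urllib.parse import urlencode


def userdata_list_url(path, recurse=False, full_info=False, split=False,
                      base="http://localhost:8188"):
    """Build a GET URL for the /v2/userdata listing endpoint described
    above. Booleans are serialized as "true"/"false" strings (an
    assumption about how the query layer interprets them)."""
    params = {
        "path": path,
        "recurse": str(recurse).lower(),
        "full_info": str(full_info).lower(),
        "split": str(split).lower(),
    }
    return f"{base}/v2/userdata?{urlencode(params)}"
```
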

### Model Discovery

The model manager provides intelligent model discovery with metadata extraction:

```mermaid
graph TD
    A[Model Path] --> B{Extension Check}
    B -->|.safetensors| C[Extract Metadata]
    B -->|.preview| D[Add Preview Image]
    B -->|Other| E[Standard Add]
    C --> F[Parse ssmd_cover_images]
    D --> R[Result List]
    E --> R
    F --> R
```

The system extracts preview images embedded in SafeTensors metadata under the `ssmd_cover_images` key.

Sources: [app/model_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)
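A SafeTensors file begins with an 8-byte little-endian length followed by a JSON header whose `__metadata__` key holds string metadata such as `ssmd_cover_images`. That means the metadata can be read without loading any tensor data, as in this minimal sketch (not ComfyUI's actual implementation):

```python
import json
import struct


def read_safetensors_metadata(path):
    """Return the __metadata__ dict from a .safetensors header.

    Layout: 8 bytes little-endian u64 header length, then that many
    bytes of JSON. Keys like "ssmd_cover_images" live under the
    "__metadata__" entry when present.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```
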

## Frontend Management

### Version Control

ComfyUI supports flexible frontend version management:

```mermaid
graph LR
    A[Default Frontend] --> B[Specific Version]
    A --> C[Latest/Daily]
    A --> D[Legacy Frontend]
    
    B -.->|v1.2.2| E[Stable]
    C -.->|daily| F[Cutting Edge]
    D -.->|legacy| G[Compatibility]
```

| Version String | Description |
|----------------|-------------|
| `Comfy-Org/ComfyUI_frontend@v1.2.2` | Specific stable version |
| `Comfy-Org/ComfyUI_frontend@latest` | Latest release |
| `Comfy-Org/ComfyUI_frontend@prerelease` | Pre-release build |

**Version Pattern:**
```
^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$
```

Sources: [app/frontend_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)
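The pattern quoted above can be exercised directly. The `parse_frontend_version` helper is invented for illustration; the regex itself is the one from the document:

```python
import re

# The frontend version pattern: {provider}/{repo}@{version}
VERSION_RE = re.compile(
    r"^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)"
    r"@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$"
)


def parse_frontend_version(spec):
    """Split a version string into (provider, repo, version),
    raising ValueError when it does not match the pattern."""
    m = VERSION_RE.match(spec)
    if not m:
        raise ValueError(f"invalid frontend version: {spec!r}")
    return m.groups()
```
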

### Custom Frontends

Frontends are stored in a configurable directory structure:

```
CUSTOM_FRONTENDS_ROOT/
├── Comfy-Org_ComfyUI_frontend/
│   ├── v1.2.2/
│   ├── v1.3.0/
│   └── latest/
└── custom_provider_custom_frontend/
    └── v2.0.0/
```

The system supports embedding custom documentation and workflow templates through separate pip packages (`comfyui-embedded-docs`, `comfyui-workflow-templates`).

Sources: [app/frontend_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)

## Security Features

### TLS/SSL Support

ComfyUI supports HTTPS for secure connections:

```bash
# Generate self-signed certificate
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
    -sha256 -days 3650 -nodes \
    -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"

# Launch with TLS
python main.py --tls-keyfile key.pem --tls-certfile cert.pem
```

> Note: Self-signed certificates are not appropriate for shared or production environments.

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Manager Security

The `--disable-manager-ui` flag keeps the manager's background features, such as security checks and completion of scheduled installations, while disabling its UI and endpoints.

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Release Process

ComfyUI follows a structured release cycle:

```mermaid
graph TD
    A[Commit to Repository] --> B{Which Branch?}
    B -->|Master| C[Weekly Release Candidate]
    B -->|Stable Tag| D[Backport Fixes]
    C --> E[Major Version v0.X.Y]
    D --> F[Patch Version v0.4.X]
    
    E -.->|~2 weeks| G[Next Major]
    F -.->|as needed| H[Stable Update]
```

| Release Type | Frequency | Target |
|---------------|------------|--------|
| Major Version | ~2 weeks | Monday (variable) |
| Patch Version | As needed | Stable branch backports |
| Nightly Commits | Ongoing | Master branch (unstable) |

> Warning: Commits outside stable release tags may be very unstable and break custom nodes.

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## See Also

- [Examples Page](https://comfyanonymous.github.io/ComfyUI_examples/) - Workflow examples
- [ComfyUI-Manager](https://github.com/Comfy-Org/ComfyUI-Manager) - Custom node management
- [Comfy Cloud](https://www.comfy.org/cloud) - Official cloud hosting
- [Comfy API Documentation](https://docs.comfy.org/tutorials/api-nodes/overview) - API nodes guide
- [GPU Recommendations](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI) - Hardware guide

---

<a id='page-installation'></a>

## Installation Guide

### Related Pages

Related topics: [Introduction to ComfyUI](#page-introduction)

<details>
<summary>Relevant Source Files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
- [requirements.txt](https://github.com/Comfy-Org/ComfyUI/blob/main/requirements.txt)
- [extra_model_paths.yaml.example](https://github.com/Comfy-Org/ComfyUI/blob/main/extra_model_paths.yaml.example)
- [manager_requirements.txt](https://github.com/Comfy-Org/ComfyUI/blob/main/manager_requirements.txt)
- [app/user_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)
- [app/frontend_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)
</details>


## Overview

This guide covers all supported methods for installing ComfyUI, including local installations on Windows, Linux, and macOS, as well as platform-specific considerations for NVIDIA, AMD, Intel, and Apple Silicon GPUs. ComfyUI is designed to be modular and works fully offline—the core will never download anything unless explicitly requested by the user.

Sources: [README.md:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Installation Methods Overview

ComfyUI supports multiple installation approaches to accommodate different user needs and technical expertise levels.

```mermaid
graph TD
    A[ComfyUI Installation] --> B[Desktop Application]
    A --> C[Windows Portable Package]
    A --> D[Manual Installation]
    
    D --> E[Windows]
    D --> F[Linux]
    D --> G[macOS]
    
    E --> H[NVIDIA GPU]
    E --> I[AMD GPU]
    E --> J[Intel GPU]
    
    F --> K[NVIDIA GPU]
    F --> L[AMD ROCm]
    F --> M[Intel XPU]
    
    G --> N[Apple Silicon M1/M2]
```

## Prerequisites

### System Requirements

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| GPU VRAM | 4GB | 8GB+ |
| RAM | 8GB | 16GB+ |
| Disk Space | 10GB | 20GB+ |
| OS | Windows 10, Linux, macOS | Windows 11, Latest Linux/macOS |

### GPU Support Matrix

| GPU Vendor | Support Level | Backend |
|------------|---------------|---------|
| NVIDIA | Full | CUDA (cu130/cu132) |
| AMD | Full (ROCm) | ROCm |
| Intel | Full (XPU) | oneAPI |
| Apple Silicon | Full | Metal/MPS |

Sources: [README.md:200-280](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## PyTorch Installation

PyTorch is the core dependency required for ComfyUI. The installation command varies by hardware platform.

### NVIDIA GPUs

For stable PyTorch with CUDA support:

```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
```

For nightly builds with potential performance improvements:

```bash
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu132
```

Sources: [README.md:180-195](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### AMD GPUs (ROCm)

For AMD GPUs using ROCm, install the ROCm-compatible PyTorch build:

```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm6.1
```

For experimental memory-efficient attention on recent PyTorch with AMD GPUs:

```bash
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
```

For non-officially supported AMD cards, use environment variable overrides:

| GPU Series | Command |
|------------|---------|
| AMD 6700, 6600 (RDNA2) | `HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py` |
| AMD 7600 (RDNA3) | `HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py` |

Additional performance tuning options:

```bash
PYTORCH_TUNABLEOP_ENABLED=1 python main.py
```

Sources: [README.md:220-260](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Intel GPUs (XPU)

For Intel discrete GPUs and APUs using the XPU backend:

```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/xpu
```

For nightly builds with potential improvements:

```bash
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
```

Sources: [README.md:160-178](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Apple Silicon (M1/M2)

1. Install the latest PyTorch nightly following Apple's [Accelerated PyTorch training on Mac](https://developer.apple.com/metal/pytorch/) developer guide.
2. Follow the manual installation instructions for your operating system.
3. Install ComfyUI dependencies as specified in the Dependencies section.

Sources: [README.md:290-310](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Troubleshooting PyTorch

If you encounter the error "Torch not compiled with CUDA enabled":

```bash
pip uninstall torch
```

Then reinstall using the appropriate command for your hardware from the sections above.

Sources: [README.md:196-199](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Dependencies Installation

After installing PyTorch, install the core ComfyUI dependencies:

```bash
pip install -r requirements.txt
```

This installs all required Python packages for ComfyUI to function properly. After this step, ComfyUI should be ready to run.

Sources: [README.md:286-288](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Windows Portable Package

For Windows users seeking a portable, self-contained installation:

1. Download the portable standalone build from the [releases page](https://github.com/comfyanonymous/ComfyUI/releases).
2. Extract the archive to your desired location.
3. Run the included launch script (`run_nvidia_gpu.bat` or `run_cpu.bat`).

This package includes everything needed to run ComfyUI on NVIDIA GPUs or in CPU-only mode.

Sources: [README.md:95-110](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Manual Installation

### Windows and Linux

```mermaid
graph LR
    A[Download/Clone Repository] --> B[Install PyTorch]
    B --> C[Install Dependencies]
    C --> D[Configure Model Paths]
    D --> E[Launch ComfyUI]
```

#### Step 1: Clone or Download the Repository

```bash
git clone https://github.com/Comfy-Org/ComfyUI.git
cd ComfyUI
```

#### Step 2: Install PyTorch

Follow the PyTorch installation instructions for your GPU in the PyTorch Installation section above.

#### Step 3: Install Dependencies

```bash
pip install -r requirements.txt
```

#### Step 4: Launch

```bash
python main.py
```

Sources: [README.md:280-295](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Model Path Configuration

ComfyUI supports an optional configuration file to set custom search paths for models, useful if you have models stored in a different location or shared across multiple installations.

Copy the example configuration:

```bash
cp extra_model_paths.yaml.example extra_model_paths.yaml
```

Edit `extra_model_paths.yaml` to point at your model directories. The file maps a named config section to a `base_path` plus per-folder subpaths:

```yaml
# Example extra_model_paths.yaml
comfyui:
  base_path: /path/to/shared/models/
  checkpoints: checkpoints/
  loras: loras/
  vae: vae/
```

Sources: [extra_model_paths.yaml.example](https://github.com/Comfy-Org/ComfyUI/blob/main/extra_model_paths.yaml.example)

## ComfyUI-Manager

ComfyUI-Manager is an extension that simplifies installation, updating, and management of custom nodes.

### Installation

1. Navigate to your ComfyUI installation directory
2. Clone the ComfyUI-Manager repository into the `custom_nodes` folder:

```bash
cd custom_nodes
git clone https://github.com/Comfy-Org/ComfyUI-Manager.git
```

3. Install manager dependencies:

```bash
pip install -r manager_requirements.txt
```

Sources: [README.md:330-345](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Enabling ComfyUI-Manager

Start ComfyUI with the `--enable-manager` flag:

```bash
python main.py --enable-manager
```

### Manager Command Line Options

| Flag | Description |
|------|-------------|
| `--enable-manager` | Enable ComfyUI-Manager |
| `--enable-manager-legacy-ui` | Use the legacy manager UI (requires `--enable-manager`) |
| `--disable-manager-ui` | Disable manager UI while keeping background features (requires `--enable-manager`) |

Sources: [README.md:346-365](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Desktop Application

For the easiest getting-started experience, download the official Desktop Application:

- Available for Windows and macOS
- Download from [comfy.org/download](https://www.comfy.org/download)

This method requires no technical configuration and is recommended for new users.

Sources: [README.md:55-65](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Cloud Deployment

For users without local hardware, ComfyUI is available on Comfy Cloud:

- Official paid cloud version hosted at [comfy.org/cloud](https://www.comfy.org/cloud)
- No local hardware required
- Full ComfyUI functionality

Sources: [README.md:66-70](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Advanced Configuration

### Multi-User Setup

For server deployments with multiple users, enable multi-user mode:

```bash
python main.py --multi-user
```

This enables server-side user profile storage instead of browser-based storage.

Sources: [app/user_manager.py:25-35](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)

### Frontend Version Management

ComfyUI ships its frontend as a separate pip package. To specify a frontend version:

```bash
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest
```

For stable releases:

```bash
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@v1.2.2
```

For legacy frontend:

```bash
python main.py --front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
```

Sources: [app/frontend_management.py:40-75](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)

### Additional Command Line Arguments

| Argument | Description |
|----------|-------------|
| `--preview-method auto` | Enable previews with automatic method selection |
| `--preview-method taesd` | Use TAESD for high-quality previews |
| `--tls-keyfile <file>` | Path to TLS private key |
| `--tls-certfile <file>` | Path to TLS certificate |
| `--use-pytorch-cross-attention` | Use PyTorch cross-attention implementation |
| `--disable-api-nodes` | Disable optional API nodes |

Sources: [README.md:15-45](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Post-Installation Verification

After installation, verify your setup by:

1. Launching ComfyUI: `python main.py`
2. Opening the web interface (typically `http://localhost:8188`)
3. Running a simple workflow to confirm GPU acceleration is working

If previews are enabled, you should see latent preview updates during image generation, confirming the installation is functioning correctly.

Sources: [README.md:10-20](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Common Issues

| Issue | Solution |
|-------|----------|
| "Torch not compiled with CUDA enabled" | Reinstall PyTorch with CUDA support |
| Import errors | Run `pip install -r requirements.txt` |
| Model not found | Configure `extra_model_paths.yaml` or check model paths |
| Manager installation fails | Ensure `manager_requirements.txt` dependencies are installed |

Sources: [README.md:196-199](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

---

<a id='page-architecture'></a>

## System Architecture

### Related Pages

Related topics: [Server System](#page-server-system), [Execution Engine](#page-execution-engine), [Model Loading and Detection](#page-model-loading)

<details>
<summary>Relevant Source Files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
- [comfy/comfy_types/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md)
- [app/user_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)
- [app/model_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)
- [app/frontend_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)
</details>


## Overview

ComfyUI is a modular AI creation engine designed with a node-graph architecture that enables complex workflow orchestration for generative AI models. The system architecture follows a client-server model where the backend provides REST API endpoints for workflow execution, model management, and user administration, while the frontend communicates via WebSocket and HTTP protocols to render the visual node editor and manage execution state.

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## High-Level Architecture

```mermaid
graph TD
    subgraph Client
        Frontend["Web Frontend<br/>(React-based)"]
    end
    
    subgraph Server["ComfyUI Server"]
        API["REST API Routes"]
        WS["WebSocket Handler"]
        Execution["Execution Engine"]
        UserMgr["User Manager"]
        ModelMgr["Model Manager"]
    end
    
    subgraph Storage
        Models["Model Files"]
        Settings["User Settings"]
        Cache["File Cache"]
    end
    
    Frontend <-->|HTTP/WS| API
    Frontend <-->|WS| WS
    API <--> UserMgr
    API <--> ModelMgr
    Execution <--> Models
    UserMgr <--> Settings
    ModelMgr <--> Cache
```

## Core Components

### Execution Engine

The execution engine is the computational core of ComfyUI, responsible for processing node graphs in topological order. It implements intelligent caching where only parts of the graph that have changed between executions are re-processed.

**Key Characteristics:**
- Only parts of the graph that have an output with all the correct inputs will be executed
- Only parts of the graph that change from each execution to the next will be executed
- If the same graph is submitted twice, only the first will be executed
- If the last part of the graph changes, only that part and its dependents are re-executed

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### User Manager

The `UserManager` class handles multi-user support and user-specific settings storage.

```mermaid
classDiagram
    class UserManager {
        +settings: AppSettings
        +users: dict
        +__init__()
        +get_users_file(): str
    }
    
    class AppSettings {
        +__init__(user_manager)
        +get_default_user(): str
    }
```

**User Configuration:**

| Parameter | Description | Default |
|-----------|-------------|---------|
| `multi_user` | Enable multiple user profiles | `False` |
| User Directory | Location for user-specific data | `folder_paths.get_user_directory()` |

**Initialization Logic:**

```python
# Single-user mode (default)
self.users = {"default": "default"}

# Multi-user mode (with --multi-user flag)
if os.path.isfile(self.get_users_file()):
    with open(self.get_users_file()) as f:
        self.users = json.load(f)
```

Sources: [app/user_manager.py:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)

### Model Manager

The `ModelFileManager` class provides centralized model file discovery and caching.

```mermaid
graph LR
    A[Model Request] --> B[Cache Check]
    B -->|Hit| C[Return Cached]
    B -->|Miss| D[Scan Directories]
    D --> E[Build File List]
    E --> F[Cache Result]
    F --> C
```

**Cache Data Structure:**

| Field | Type | Description |
|-------|------|-------------|
| `key` | `str` | Cache identifier |
| `value` | `tuple[list[dict], dict[str, float], float]` | Models list, metadata, timestamp |

**Model Discovery Features:**
- Recursive directory scanning with glob patterns
- Safe file filtering by extension and content type
- Support for safetensors metadata extraction
- Preview image detection (`*.preview` files)

Sources: [app/model_manager.py:1-80](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)

### Frontend Management

The `FrontendManagement` class handles frontend version control and installation verification.

**Version Parsing Pattern:**

```
{provider}/{repo}@{version}
```

Example: `Comfy-Org/ComfyUI_frontend@v1.2.2`

**Validation Regex:**
```
^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$
```

**Package Discovery:**

| Package Type | Purpose |
|--------------|---------|
| `comfyui-frontend-package` | Main frontend assets |
| `comfyui-workflow-templates` | Workflow template files |
| `comfyui-embedded-docs` | Embedded documentation |

Sources: [app/frontend_management.py:1-100](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)

## API Routes Architecture

### REST Endpoints

```mermaid
graph TD
    R1["GET /v2/userdata"] --> UM[UserManager]
    R2["GET /experiment/models"] --> MM[ModelFileManager]
    R3["GET /experiment/models/{folder}"] --> MM
```

**File Listing Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | `str` | Relative path within data directory |
| `recurse` | `bool` | Enable recursive directory traversal |
| `full_info` | `bool` | Return full file metadata |
| `split` | `bool` | Return path as array (split by `/`) |

**Response Format:**

```python
from typing import TypedDict

class FileInfo(TypedDict):
    path: str      # Relative file path
    size: int      # File size in bytes
    modified: int  # Modification time (milliseconds)
    created: int   # Creation time (milliseconds)
```

Sources: [app/user_manager.py:60-100](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)
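For illustration, here is how such an entry could be built from `os.stat`, with `FileInfo` re-declared so the sketch runs standalone. The millisecond conversion mirrors the field descriptions above; the helper itself is an assumption, not the server's code:

```python
import os
from typing import TypedDict


class FileInfo(TypedDict):
    path: str
    size: int
    modified: int
    created: int


def file_info(root, rel_path):
    """Build a FileInfo entry for a file under root: size in bytes,
    timestamps converted from nanoseconds to milliseconds."""
    st = os.stat(os.path.join(root, rel_path))
    return FileInfo(
        path=rel_path,
        size=st.st_size,
        modified=st.st_mtime_ns // 1_000_000,
        created=st.st_ctime_ns // 1_000_000,
    )
```
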

## Type System Architecture

ComfyUI implements a comprehensive type hinting system for node development.

```mermaid
classDiagram
    class ComfyNodeABC {
        <<abstract>>
        +INPUT_TYPES: InputTypeDict
    }
    
    class IO {
        <<enumeration>>
        ANY = "*"
        NUMBER = "FLOAT,INT"
        PRIMITIVE = "STRING,FLOAT,INT,BOOLEAN"
    }
    
    ComfyNodeABC --> IO
```

**Built-in IO Types:**

| Type | Value | Description |
|------|-------|-------------|
| `ANY` | `"*"` | Accepts any input type |
| `NUMBER` | `"FLOAT,INT"` | Numeric values |
| `PRIMITIVE` | `"STRING,FLOAT,INT,BOOLEAN"` | Basic data types |

Sources: [comfy/comfy_types/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md)

## Configuration and CLI Arguments

### Command Line Options

| Flag | Description |
|------|-------------|
| `--enable-manager` | Enable ComfyUI-Manager |
| `--enable-manager-legacy-ui` | Use legacy manager UI |
| `--disable-manager-ui` | Disable manager UI (keep background features) |
| `--disable-api-nodes` | Disable optional API nodes |
| `--preview-method {auto,taesd}` | Preview generation method |
| `--front-end-version` | Specify frontend version |

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Environment Variables

| Variable | Purpose | Example |
|----------|---------|---------|
| `HSA_OVERRIDE_GFX_VERSION` | AMD GPU compatibility | `10.3.0` for RDNA2 |
| `TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL` | ROCm memory optimization | `1` |
| `PYTORCH_TUNABLEOP_ENABLED` | PyTorch tuning | `1` |

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Data Flow: Workflow Execution

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant Execution
    participant Cache
    participant Models
    
    Client->>API: Submit Workflow Graph
    API->>Execution: Parse Graph
    Execution->>Cache: Check Node States
    Cache-->>Execution: Cached Results
    Execution->>Models: Load Required Models
    Models-->>Execution: Model Data
    Execution->>Execution: Topological Sort
    Execution->>Execution: Execute Changed Nodes
    Execution-->>API: Output Results
    API-->>Client: WebSocket Update
```

## Node Graph Structure

ComfyUI workflows are represented as directed acyclic graphs (DAGs) where:

- **Nodes** represent computational units (e.g., model loading, sampling, encoding)
- **Edges** represent data flow between nodes
- **Execution Order** is determined by topological sorting based on input dependencies

```mermaid
graph LR
    subgraph Inputs
        Model["Model Loader"]
        Clip["CLIP Text Encode"]
        Latent["Empty Latent"]
    end
    
    subgraph Process
        Sampler["KSampler"]
    end
    
    subgraph Outputs
        Decode["VAE Decode"]
        Image["Save Image"]
    end
    
    Model --> Sampler
    Clip --> Sampler
    Latent --> Sampler
    Sampler --> Decode
    Decode --> Image
```
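The execution order for a graph like the one above can be derived with Kahn's algorithm. This is a generic sketch with illustrative node names, not ComfyUI's scheduler:

```python
from collections import deque

def topological_order(edges: dict[str, list[str]]) -> list[str]:
    """Return an execution order for a DAG given as {node: [downstream nodes]}."""
    indegree = {n: 0 for n in edges}
    for outs in edges.values():
        for m in outs:
            indegree[m] = indegree.get(m, 0) + 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in edges.get(n, []):
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(indegree):
        raise ValueError("graph contains a cycle")
    return order

# Mirrors the example graph: loaders feed the sampler, which feeds decode/save.
graph = {
    "ModelLoader": ["KSampler"],
    "CLIPTextEncode": ["KSampler"],
    "EmptyLatent": ["KSampler"],
    "KSampler": ["VAEDecode"],
    "VAEDecode": ["SaveImage"],
}
```

Nodes with no unmet inputs enter the ready queue first, so every node runs only after all of its dependencies.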

## Release Process Architecture

ComfyUI maintains three interconnected repositories with different release cadences:

| Repository | Branch | Release Cycle | Purpose |
|------------|--------|---------------|---------|
| ComfyUI Core | master | ~2 weeks | Major stable releases |
| ComfyUI Core | tags | as needed | Patch fixes for stable |
| Frontend | various | weekly | UI updates |

**Versioning Scheme:**
- Major versions (e.g., v0.7.0) for significant releases
- Minor versions for master branch releases
- Patch versions for backported fixes

Sources: [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Security Considerations

### Multi-User Mode

When `--multi-user` is enabled:
- User settings are stored server-side instead of browser local storage
- Each user has isolated data directories
- User settings persist across sessions

### File Access Control

The `/v2/userdata` endpoint implements path validation:
- Prevents directory traversal attacks
- Validates paths are within user's data directory
- Returns appropriate HTTP status codes (400, 404) for invalid requests

Sources: [app/user_manager.py:80-120](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)

---

<a id='page-server-system'></a>

## Server System

### Related Pages

Related topics: [System Architecture](#page-architecture), [Execution Engine](#page-execution-engine)

<details>
<summary>Relevant Source Files</summary>

The following source files were used to generate this documentation:

- [server.py](https://github.com/Comfy-Org/ComfyUI/blob/main/server.py)
- [protocol.py](https://github.com/Comfy-Org/ComfyUI/blob/main/protocol.py)
- [api_server/routes/internal/internal_routes.py](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/routes/internal/internal_routes.py)
- [api_server/services/terminal_service.py](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/services/terminal_service.py)
- [api_server/utils/file_operations.py](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/utils/file_operations.py)
</details>

# Server System

## Overview

The ComfyUI Server System is the core backend infrastructure responsible for handling client connections, executing workflows, managing files, and orchestrating the AI generation pipeline. Built on top of `aiohttp`, the server provides both REST API endpoints and WebSocket-based real-time communication for seamless interaction between the frontend interface and backend processing engines.

The server acts as the central hub that manages:
- **Client connections** via WebSocket protocol
- **Workflow execution** scheduling and queue management
- **File operations** for models, outputs, and user data
- **Frontend delivery** and management
- **User authentication and multi-user support**

Sources: [server.py](https://github.com/Comfy-Org/ComfyUI/blob/main/server.py) | [protocol.py](https://github.com/Comfy-Org/ComfyUI/blob/main/protocol.py)

## Architecture Overview

```mermaid
graph TB
    subgraph "Client Layer"
        Frontend[Frontend UI]
        ExternalAPI[External API Clients]
    end

    subgraph "Server Core"
        WSS[WebSocket Server]
        REST[REST API Routes]
        Auth[Authentication Layer]
    end

    subgraph "Services Layer"
        Exec[Execution Engine]
        Terminal[Terminal Service]
        FileOps[File Operations]
        Queue[Queue Manager]
    end

    subgraph "Data Layer"
        Models[Model Manager]
        Users[User Manager]
        Settings[App Settings]
    end

    Frontend --> WSS
    ExternalAPI --> REST
    WSS --> Auth
    REST --> Auth
    Auth --> Exec
    Exec --> Queue
    Exec --> Terminal
    FileOps --> Models
    FileOps --> Users
    FileOps --> Settings
```

## Protocol Layer

### WebSocket Protocol

The ComfyUI server uses a custom WebSocket-based protocol for real-time communication between the client and server. This protocol enables:

- **Bidirectional messaging** - Both client and server can send messages independently
- **Execution events** - Real-time updates on workflow execution progress
- **Prompt submission** - Sending workflows for execution
- **History tracking** - Recording and retrieving execution history

Sources: [protocol.py](https://github.com/Comfy-Org/ComfyUI/blob/main/protocol.py)

### Message Types

| Message Type | Direction | Purpose |
|--------------|-----------|---------|
| `executing` | Server → Client | Notification when a node begins execution |
| `executed` | Server → Client | Notification when a node completes execution |
| `execution_error` | Server → Client | Reports errors during workflow execution |
| `progress` | Server → Client | Progress updates for long-running operations |
| `executing_node` | Server → Client | Identifies currently executing node |
| `prompt` | Client → Server | Submit workflow for execution |
| `interrupt` | Client → Server | Request to interrupt current execution |
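As a hypothetical sketch of how such events might be framed and routed (the `{"type", "data"}` envelope and the field names are assumptions, not the documented schema):

```python
import json

def make_event(event_type: str, data: dict) -> str:
    """Serialize a server -> client event; the {"type", "data"} envelope is assumed."""
    return json.dumps({"type": event_type, "data": data})

def dispatch(raw: str, handlers: dict) -> None:
    """Route an incoming event to the handler registered for its message type."""
    msg = json.loads(raw)
    handler = handlers.get(msg["type"])
    if handler is not None:
        handler(msg["data"])
```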

## Server Core Components

### Main Server Entry Point

The `server.py` file contains the main server initialization and lifecycle management. Key responsibilities include:

- Initializing the aiohttp web application
- Registering routes and middleware
- Setting up WebSocket endpoints
- Managing server lifecycle (start, stop, restart)

```python
# Server initialization pattern
app = web.Application()
server = Server()
server.setup_routes(app)
web.run_app(app, host=host, port=port)
```

Sources: [server.py](https://github.com/Comfy-Org/ComfyUI/blob/main/server.py)

### API Routes Structure

The server organizes routes into logical namespaces:

| Route Namespace | Purpose |
|-----------------|---------|
| `/api` | Public REST API endpoints |
| `/internal` | Internal server-to-server communication |
| `/v2/userdata` | User data management endpoints |
| `/experiment` | Experimental features |

### Internal Routes

Internal routes under `/internal` are designated for ComfyUI's internal use only and may change without notice. These routes handle:

- System-level operations
- Queue management
- Execution state tracking
- Server configuration

Sources: [api_server/routes/internal/internal_routes.py](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/routes/internal/internal_routes.py)

## Services Layer

### Terminal Service

The Terminal Service manages pseudo-terminal functionality for executing external processes. This service is crucial for:

- Running Python scripts within workflows
- Executing system commands
- Managing subprocess lifecycle

The service provides:
- PTY (pseudo-terminal) allocation
- Stream multiplexing
- Process lifecycle management

Sources: [api_server/services/terminal_service.py](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/services/terminal_service.py)

### File Operations

The file operations module provides utilities for:

| Operation | Description |
|-----------|-------------|
| Directory listing | Recursive and non-recursive file traversal |
| File metadata | Size, creation time, modification time |
| Path validation | Security checks for path traversal |
| User data access | Isolated access to user-specific directories |

```python
# File info structure returned by file operations
class FileInfo(TypedDict):
    path: str      # Relative path from base directory
    size: int      # File size in bytes
    modified: int  # Modification timestamp (milliseconds)
    created: int   # Creation timestamp (milliseconds)
```

The `list_userdata_v2` endpoint provides structured access to user data directories with proper security constraints.

Sources: [api_server/utils/file_operations.py](https://github.com/Comfy-Org/ComfyUI/blob/main/api_server/utils/file_operations.py)

### Queue Manager

The queue manager handles workflow scheduling:

- **Priority queuing** - Higher priority prompts execute first
- **Execution caching** - Identical graphs skip re-execution
- **Partial execution** - Only changed portions of graphs execute

Execution behavior notes:
- Only parts of the graph with all correct inputs will be executed
- Only parts that change between executions are re-run
- Submitting the same graph twice results in only the first execution
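The caching behavior can be sketched by keying node outputs on a hash of the node type and its resolved inputs, so identical submissions hit the cache. This illustrates the idea; it is not ComfyUI's actual queue implementation:

```python
import hashlib
import json

_cache: dict[str, object] = {}

def node_key(class_type: str, inputs: dict) -> str:
    """Stable key over the node type and its inputs."""
    payload = json.dumps({"class_type": class_type, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_node(class_type: str, inputs: dict, fn):
    """Execute fn only if this (type, inputs) pair has not been computed before."""
    key = node_key(class_type, inputs)
    if key not in _cache:
        _cache[key] = fn(**inputs)
    return _cache[key]
```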

## Data Management

### User Manager

The UserManager handles multi-user support and user settings:

- **User directory management** - Isolated storage per user
- **Settings persistence** - Server-side storage instead of browser localStorage
- **Multi-user mode** - Enabled via `--multi-user` CLI flag

| Setting | Description |
|---------|-------------|
| `multi_user` | CLI argument to enable multiple user profiles |
| `user_directory` | Base directory for user-specific data |
| `users_file` | JSON file storing user configurations |

User data is stored in the user directory with each user having isolated access to their own data.

Sources: [app/user_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py)

### Model Manager

The ModelFileManager provides:

- **Model discovery** - Listing models by type and folder
- **Metadata extraction** - Reading safetensors headers for preview images
- **Preview generation** - Supporting preview thumbnails for models

| Feature | Supported Formats |
|---------|-------------------|
| Preview Images | PNG, JPG, WebP |
| Model Metadata | safetensors headers |
| Preview Thumbnails | Base64-encoded in safetensors metadata |

The `/experiment/models` endpoint provides a structured listing of available model types and folders.

Sources: [app/model_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)

### Frontend Management

Frontend management handles the web UI delivery:

- **Version management** - Supports specific versions, nightly builds, or stable releases
- **Custom frontends** - Allows loading frontends from external repositories
- **Embedded docs** - Integration with embedded documentation package

```bash
# Example: Using specific frontend version
--front-end-version Comfy-Org/ComfyUI_frontend@v1.2.2

# Using legacy frontend
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
```

| Frontend Provider | Description |
|-------------------|-------------|
| PyPI (stable) | Default stable releases |
| GitHub | Cutting-edge daily updates |
| Custom | Repository-specific versions |

Sources: [app/frontend_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)

## Security Model

### User Data Isolation

The server implements strict user data isolation:

- Each user has a dedicated data directory
- Path traversal attacks are prevented via `glob.escape()`
- User data endpoints validate paths against allowed directories
- Multi-user mode requires explicit CLI activation

### Internal Routes Protection

Routes under `/internal` are explicitly marked as:
- Not intended for external application use
- Subject to change without notice
- Internal ComfyUI functionality only

## Configuration

### CLI Arguments

| Argument | Description |
|----------|-------------|
| `--enable-manager` | Enable ComfyUI-Manager extension |
| `--enable-manager-legacy-ui` | Use legacy manager UI |
| `--disable-manager-ui` | Disable manager UI while keeping background features |
| `--multi-user` | Enable multiple user profiles |
| `--front-end-version` | Specify frontend version |
| `--preview-method` | Set preview generation method (auto, taesd) |
| `--tls-keyfile` | TLS private key file path |
| `--tls-certfile` | TLS certificate file path |

### Environment Variables

| Variable | Purpose |
|----------|---------|
| `TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL` | Enable experimental ROCm features |
| `PYTORCH_TUNABLEOP_ENABLED` | Enable PyTorch tuning for potential speed improvements |
| `HSA_OVERRIDE_GFX_VERSION` | Override AMD GPU architecture detection |

## Execution Flow

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant Queue
    participant Executor

    Client->>Server: WebSocket Connect
    Server->>Client: Connection Acknowledged

    Client->>Server: Submit Prompt (workflow)
    Server->>Queue: Add to execution queue
    Server->>Client: Queue position acknowledged

    loop Execution
        Queue->>Executor: Dequeue next task
        Executor->>Executor: Execute node(s)
        Executor->>Server: Progress updates
        Server->>Client: Real-time execution events

        alt Node executes successfully
            Executor->>Server: Node completed
            Server->>Client: "executed" message
        else Execution error
            Executor->>Server: Error details
            Server->>Client: "execution_error" message
        end
    end

    Executor->>Server: All nodes complete
    Server->>Client: Execution complete
```

## Summary

The ComfyUI Server System provides a robust, event-driven architecture for AI workflow execution. Built on aiohttp, it combines:

- **WebSocket-based real-time communication** for interactive execution monitoring
- **RESTful API endpoints** for external integration
- **Service-oriented design** for modularity and maintainability
- **Strong security boundaries** through user isolation and path validation

The server seamlessly integrates with the frontend to deliver a responsive user experience while managing complex AI model execution pipelines in the background.

---

<a id='page-execution-engine'></a>

## Execution Engine

### Related Pages

Related topics: [Graph Management](#page-graph-management), [Server System](#page-server-system), [Memory Management](#page-memory-management)

<details>
<summary>Relevant Source Files</summary>

The following source files were used to generate this documentation:

- [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md) - Contains high-level execution engine behavior notes
- [app/user_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py) - Contains API routes and file processing logic
- [app/model_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py) - Contains model file management and caching
- [app/frontend_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py) - Contains frontend version management and installation logic
- [comfy/comfy_types/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md) - Contains type definitions for node development

</details>

# Execution Engine

The Execution Engine is the core component of ComfyUI responsible for processing node-based workflows. It analyzes the dependency graph, determines execution order, and runs only the nodes necessary to produce the requested outputs.

## Overview

ComfyUI uses a directed acyclic graph (DAG) model where each node represents an operation and edges represent data dependencies. The execution engine processes this graph efficiently by:

- Executing only nodes with all required inputs available
- Skipping unchanged portions of the graph on re-execution
- Caching intermediate results to avoid redundant computation

Sources: [README.md:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Execution Model

### Lazy Evaluation Strategy

The execution engine employs lazy evaluation, meaning nodes are only executed when their outputs are actually needed by other nodes or requested by the user.

```mermaid
graph TD
    A[User Request] --> B{Output Cached?}
    B -->|Yes| C[Return Cached Result]
    B -->|No| D[Find All Dependent Nodes]
    D --> E[Check Input Availability]
    E --> F[Execute Required Nodes]
    F --> G[Cache Results]
    G --> C
```
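The flow above amounts to pull-based, memoized resolution: requesting an output computes only the uncached transitive dependencies. A minimal sketch with hypothetical `deps` and `compute` tables:

```python
def resolve(node: str, deps: dict[str, list[str]], compute: dict, cache: dict):
    """Return node's output, computing uncached dependencies on demand."""
    if node in cache:
        return cache[node]
    args = [resolve(d, deps, compute, cache) for d in deps.get(node, [])]
    cache[node] = compute[node](*args)
    return cache[node]
```

Requesting a downstream node pulls its ancestors in; anything already in `cache` short-circuits the recursion.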

### Incremental Execution

One of the most powerful features of the execution engine is its ability to perform incremental execution:

- If the same workflow is submitted twice, only the first execution runs
- If only part of the graph changes, only that part and its downstream dependencies are re-executed
- This dramatically improves performance for iterative workflows

> "Only parts of the graph that change from each execution to the next will be executed, if you submit the same graph twice only the first will be executed. If you change the last part of the graph only the part you changed and the part that depends on it will be executed."

Sources: [README.md:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Node Execution

### Input Validation

Before any node executes, the engine validates that all required inputs are present and correctly typed. Nodes whose input requirements cannot be satisfied are excluded from execution.

### Dependency Resolution

The execution engine uses topological sorting to determine the correct order of node execution, ensuring that all input dependencies are satisfied before a node runs.

## Caching System

ComfyUI implements a sophisticated caching mechanism to avoid redundant computation.

### Cache Structure

The `ModelFileManager` class manages caching with the following structure:

```python
self.cache: dict[str, tuple[list[dict], dict[str, float], float]] = {}
```

Each cache entry contains:
- A list of dictionaries with file information
- A dictionary mapping file paths to modification timestamps
- A float representing cache creation time

Sources: [app/model_manager.py:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)
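A cache entry of this shape can be validated against the filesystem by comparing the recorded modification times. This helper is an illustrative sketch, not the manager's actual code:

```python
import os

def cache_is_fresh(entry: tuple[list[dict], dict[str, float], float]) -> bool:
    """Check a (file_list, mtimes, created_at) entry against current file state."""
    _file_list, mtimes, _created_at = entry
    for path, recorded_mtime in mtimes.items():
        try:
            if os.path.getmtime(path) != recorded_mtime:
                return False  # file changed since the entry was built
        except OSError:
            return False      # file was removed
    return True
```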

### Cache Operations

| Operation | Method | Description |
|-----------|--------|-------------|
| Get Cache | `get_cache(key, default)` | Retrieves cached data by key |
| Set Cache | `set_cache(key, value)` | Stores data in cache |
| Clear Cache | `clear_cache()` | Removes all cached entries |

Sources: [app/model_manager.py:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)

## API Endpoints

The execution engine interacts with the following API endpoints for model and file management:

### Model Routes

| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/experiment/models` | GET | List all available model folders |
| `/experiment/models/{folder}` | GET | List all models in a specific folder |

### File Routes

| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/files` | GET | List files in a directory |
| `/v2/userdata` | GET | List user data directory contents |

The file listing endpoint supports query parameters:
- `path`: Relative path within the data directory
- `recurse`: Enable recursive directory traversal
- `full_info`: Return detailed file information
- `split`: Return path segments as array elements

Sources: [app/user_manager.py:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/app/user_manager.py) | [app/model_manager.py:50-100](https://github.com/Comfy-Org/ComfyUI/blob/main/app/model_manager.py)
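A client-side sketch of assembling a listing request from these parameters (the lowercase `true`/`false` serialization of the boolean flags is an assumption):

```python
from urllib.parse import urlencode

def userdata_listing_url(base: str, path: str, recurse: bool = False,
                         full_info: bool = False, split: bool = False) -> str:
    """Build a /v2/userdata listing URL from the documented query parameters."""
    params = {
        "path": path,
        "recurse": str(recurse).lower(),
        "full_info": str(full_info).lower(),
        "split": str(split).lower(),
    }
    return f"{base}/v2/userdata?{urlencode(params)}"
```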

## Node Type System

ComfyUI uses a typed node system defined in `comfy/comfy_types/`:

### Core Types

| Type | Description |
|------|-------------|
| `IO.ANY` | Accepts any input type ("*") |
| `IO.NUMBER` | Numeric values (FLOAT, INT) |
| `IO.PRIMITIVE` | Basic types (STRING, FLOAT, INT, BOOLEAN) |

### Base Class

The `ComfyNodeABC` abstract base class provides:
- Type hinting support
- Autocomplete for node developers
- Standardized `INPUT_TYPES` interface

Sources: [comfy/comfy_types/README.md:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md)

## Workflow Processing

### File Operations

Workflows can be loaded from multiple formats:

- PNG files with embedded workflow data
- WebP images
- FLAC audio files
- JSON workflow files

Dragging a generated PNG onto the webpage automatically extracts the full workflow including seeds.

Sources: [README.md:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Dynamic Prompts

The execution engine supports dynamic prompt syntax:

| Syntax | Description |
|--------|-------------|
| `(text:1.2)` | Increase emphasis (1.2x) |
| `(text:0.8)` | Decrease emphasis (0.8x) |
| `{wild\|card\|test}` | Random selection |
| `\\(` | Escape parentheses |
| `\\{` | Escape braces |

## Frontend Integration

The execution engine works with frontend version management to ensure compatibility:

### Version String Format

```
provider/repository@version
```

Example: `Comfy-Org/ComfyUI_frontend@1.2.2`

### Version Pattern

```
^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$
```

Sources: [app/frontend_management.py:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)
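The pattern can be exercised directly with Python's `re` module:

```python
import re

# The version pattern quoted above, compiled for reuse.
VERSION_PATTERN = re.compile(
    r"^([a-zA-Z0-9][a-zA-Z0-9-]{0,38})/([a-zA-Z0-9_.-]+)"
    r"@(v?\d+\.\d+\.\d+[-._a-zA-Z0-9]*|latest|prerelease)$"
)

def parse_frontend_version(spec: str) -> tuple[str, str, str]:
    """Split a provider/repository@version string into its three parts."""
    m = VERSION_PATTERN.match(spec)
    if m is None:
        raise ValueError(f"invalid frontend version string: {spec!r}")
    owner, repo, version = m.groups()
    return owner, repo, version
```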

## Performance Optimizations

### Graph Optimization

The execution engine optimizes performance through:

1. **Dependency Analysis**: Identifies minimum required nodes
2. **Caching**: Stores intermediate computation results
3. **Incremental Updates**: Skips unchanged graph portions
4. **Lazy Evaluation**: Only computes when outputs are needed

### Parallel Execution

While nodes within the same dependency level may have execution order constraints, the engine is designed to support parallel execution where possible.

## Error Handling

The execution engine provides graceful error handling:

- Invalid paths return appropriate HTTP status codes (400, 404)
- Missing requirements are logged with installation instructions
- The system can continue operating even if optional components are unavailable

Sources: [app/frontend_management.py:1-50](https://github.com/Comfy-Org/ComfyUI/blob/main/app/frontend_management.py)

---

<a id='page-graph-management'></a>

## Graph Management

### Related Pages

Related topics: [Execution Engine](#page-execution-engine)

<details>
<summary>Relevant Source Files</summary>

The following source files were used to generate this documentation:

- [app/subgraph_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/subgraph_manager.py)
- [app/node_replace_manager.py](https://github.com/Comfy-Org/ComfyUI/blob/main/app/node_replace_manager.py)
- [comfy/comfy_types/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md)
- [README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)
</details>

# Graph Management

## Overview

Graph Management is a core system in ComfyUI that handles the creation, execution, caching, and manipulation of node-based computational graphs. The system orchestrates how nodes are executed, how workflows are processed, and how subgraphs are managed across the application. ComfyUI's node graph interface enables users to experiment and create complex Stable Diffusion workflows without needing to code, making graph management essential for both the UI layer and the execution engine.

The graph management system encompasses several interconnected components: the execution engine that processes node graphs, subgraph management for reusable workflow components, node replacement for runtime optimizations, and type hinting infrastructure for node development. Only parts of the graph that have an output with all the correct inputs will be executed, and only parts that change from each execution to the next will be re-executed, significantly optimizing performance for iterative workflows.

## Core Architecture

```mermaid
graph TD
    A[User Workflow] --> B[Graph Execution Engine]
    B --> C[Node Execution]
    B --> D[Subgraph Manager]
    B --> E[Node Replace Manager]
    C --> F[Graph Utils]
    D --> G[Custom Node Subgraphs]
    D --> H[Blueprint Subgraphs]
    E --> I[Registered Replacements]
    F --> J[Graph Optimization]
```

## Subgraph Management

### Purpose and Scope

The Subgraph Manager handles the registration, loading, and lifecycle of reusable workflow components called subgraphs. Subgraphs are self-contained node definitions stored as JSON files that can be imported and used within larger workflows. This system enables code modularity and reuse, allowing custom node developers to package complex node arrangements as single, reusable units.

The manager supports two distinct sources of subgraphs:

| Source | Description | Path Location |
|--------|-------------|---------------|
| `custom_node` | Subgraphs bundled with custom node extensions | `<custom_node_dir>/subgraphs/<name>.json` |
| `templates` | Built-in workflow templates | `blueprints/` directory |

### Data Models

#### Source Enum

```python
class Source:
    custom_node = "custom_node"
    templates = "templates"
```

#### SubgraphEntry Structure

| Field | Type | Description |
|-------|------|-------------|
| `source` | `str` | Source identifier - custom_node or templates |
| `path` | `str` | Relative path of the subgraph file |
| `name` | `str` | Name of the subgraph file (without extension) |
| `info` | `CustomNodeSubgraphEntryInfo` | Additional metadata (node pack name for custom nodes) |
| `data` | `str` | Raw JSON content of the subgraph |

Sources: [app/subgraph_manager.py:1-45](https://github.com/Comfy-Org/ComfyUI/blob/main/app/subgraph_manager.py)

#### CustomNodeSubgraphEntryInfo

```python
class CustomNodeSubgraphEntryInfo(TypedDict):
    node_pack: str
    """Node pack name."""
```

### Caching Strategy

The Subgraph Manager implements a caching mechanism to avoid redundant filesystem operations:

```python
class SubgraphManager:
    def __init__(self):
        self.cached_custom_node_subgraphs: dict[str, SubgraphEntry] | None = None
        self.cached_blueprint_subgraphs: dict[str, SubgraphEntry] | None = None
```

The cache is invalidated when `force_reload=True` is passed to the retrieval methods, enabling refresh during custom node reload scenarios.

### Entry Generation

Each subgraph entry is assigned a unique identifier generated via SHA-256 hash:

```python
def _create_entry(self, file: str, source: str, node_pack: str) -> tuple[str, SubgraphEntry]:
    """Create a subgraph entry from a file path. Expects normalized path (forward slashes)."""
    entry_id = hashlib.sha256(f"{source}{file}".encode()).hexdigest()
    entry: SubgraphEntry = {
        "source": source,
        "name": os.path.splitext(os.path.basename(file))[0],
        "path": file,
        ...
    }
```

Sources: [app/subgraph_manager.py:57-70](https://github.com/Comfy-Org/ComfyUI/blob/main/app/subgraph_manager.py)
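The ID scheme is deterministic, so the same file from the same source always maps to the same 64-character hex ID, while the same path under a different source gets a different ID (the file path below is illustrative):

```python
import hashlib

def subgraph_entry_id(source: str, file: str) -> str:
    """Mirror the ID scheme above: SHA-256 over source + normalized path."""
    return hashlib.sha256(f"{source}{file}".encode()).hexdigest()
```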

### REST API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/global_subgraphs` | GET | Returns all subgraphs with optional data stripping |
| `/global_subgraphs/{id}` | GET | Returns a specific subgraph by its SHA-256 ID |

The `get_all_subgraphs` method merges results from both custom nodes and blueprints:

```python
async def get_all_subgraphs(self, loadedModules, force_reload=False):
    """Get all subgraphs from all sources (custom nodes and blueprints)."""
    custom_node_subgraphs = await self.get_custom_node_subgraphs(loadedModules, force_reload)
    blueprint_subgraphs = await self.get_blueprint_subgraphs(force_reload)
    return {**custom_node_subgraphs, **blueprint_subgraphs}
```
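Note the merge order: with `{**custom_node_subgraphs, **blueprint_subgraphs}`, a blueprint entry would override a custom-node entry on an ID collision, since later keys win in a dict merge (in practice the SHA-256 IDs make collisions unlikely):

```python
# Toy entries keyed by ID, illustrating dict-merge precedence.
custom = {"id1": {"source": "custom_node"}, "id2": {"source": "custom_node"}}
blueprints = {"id2": {"source": "templates"}}
merged = {**custom, **blueprints}
```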

## Node Replacement Management

### Purpose

The Node Replace Manager registers runtime node substitutions that occur during graph execution. This system enables custom nodes to declare that certain node types should be replaced with alternative implementations, facilitating backward compatibility, optimization, and feature expansion without modifying existing workflows.

### Registration Interface

```python
class NodeReplaceManager:
    """Manages node replacement registrations."""

    def __init__(self):
        self._replacements: dict[str, list[NodeReplace]] = {}

    def register(self, node_replace: NodeReplace):
        """Register a node replacement mapping.

        Idempotent: if a replacement with the same (old_node_id, new_node_id)
        is already registered, the duplicate is ignored. This prevents stale
        entries from accumulating when custom nodes are reloaded in the same
        process (e.g. via ComfyUI-Manager).
        """
```

Sources: [app/node_replace_manager.py:25-40](https://github.com/Comfy-Org/ComfyUI/blob/main/app/node_replace_manager.py)

### Idempotent Registration

The registration process is designed to be idempotent, preventing duplicate entries when custom nodes are reloaded:

```python
existing = self._replacements.setdefault(node_replace.old_node_id, [])
for entry in existing:
    if entry.new_node_id == node_replace.new_node_id:
        logging.debug(
            "Node replacement %s -> %s already registered, ignoring duplicate.",
            ...
        )
```

This design prevents stale entries from accumulating during custom node reloads triggered by ComfyUI-Manager.

## Node Type System

### IO Types

ComfyUI provides a standardized type system through the `IO` enum for node input/output definitions:

| Type | Value | Description |
|------|-------|-------------|
| `ANY` | `"*"` | Accepts any type |
| `NUMBER` | `"FLOAT,INT"` | Numeric values |
| `PRIMITIVE` | `"STRING,FLOAT,INT,BOOLEAN"` | Basic data types |

Sources: [comfy/comfy_types/README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/comfy_types/README.md)

### ComfyNodeABC Base Class

The abstract base class provides type-hinting and autocomplete support for node developers:

```python
class ExampleNode(ComfyNodeABC):
    @classmethod
    def INPUT_TYPES(s) -> InputTypeDict:
        return {"required": {}}
```

## Graph Execution Model

### Execution Optimization

ComfyUI's graph execution follows specific rules that optimize performance:

1. **Complete Input Requirement**: Only parts of the graph that have an output with all the correct inputs will be executed.

2. **Incremental Execution**: Only parts of the graph that change from each execution to the next will be executed. If you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the part that depends on it will be executed.

This model significantly reduces computational overhead for iterative workflows where users make incremental adjustments.

### Workflow Serialization

Workflows can be saved and loaded as JSON files, enabling persistence and sharing of node graph configurations. Dragging a generated PNG on the webpage or loading one will give the full workflow including seeds that were used to create it, maintaining reproducibility.

## Node Struct Operations

### NodeStruct Definition

```python
class NodeStruct(TypedDict):
    inputs: dict[str, str | int | float | bool | tuple[str, int]]
    class_type: str
    _meta: dict[str, str]
```

### Copy Operations

The `copy_node_struct` function creates modified copies for graph manipulation:

```python
def copy_node_struct(node_struct: NodeStruct, empty_inputs: bool = False) -> NodeStruct:
    new_node_struct = node_struct.copy()
    if empty_inputs:
        new_node_struct["inputs"] = {}
    else:
        new_node_struct["inputs"] = node_struct["inputs"].copy()
    new_node_struct["_meta"] = node_struct["_meta"].copy()
    return new_node_struct
```

Sources: [app/node_replace_manager.py:16-25](https://github.com/Comfy-Org/ComfyUI/blob/main/app/node_replace_manager.py)
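Because the helper clones the `inputs` and `_meta` dicts, mutating the copy does not affect the original. A runnable illustration using plain dicts:

```python
def copy_node_struct(node_struct: dict, empty_inputs: bool = False) -> dict:
    # Same logic as above, with plain dicts for a self-contained illustration.
    new_node_struct = node_struct.copy()
    if empty_inputs:
        new_node_struct["inputs"] = {}
    else:
        new_node_struct["inputs"] = node_struct["inputs"].copy()
    new_node_struct["_meta"] = node_struct["_meta"].copy()
    return new_node_struct

original = {
    "inputs": {"seed": 42, "model": ("4", 0)},
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"},
}
clone = copy_node_struct(original)
clone["inputs"]["seed"] = 7  # does not touch original["inputs"]
```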

## Related Components

| Component | File Path | Purpose |
|-----------|-----------|---------|
| Graph Execution | `comfy_execution/graph.py` | Core graph execution engine |
| Graph Utilities | `comfy_execution/graph_utils.py` | Graph manipulation helpers |
| Node Helpers | `node_helpers.py` | Common node development utilities |
| Node Typing | `comfy/comfy_types/node_typing.py` | Type definitions for nodes |
| User Manager | `app/user_manager.py` | User data and file operations |

## Best Practices

### Node Development

- Use `ComfyNodeABC` as the base class for custom nodes to leverage type-hinting
- Properly define `INPUT_TYPES` with correct type annotations
- Register node replacements idempotently to support hot-reloading

### Workflow Optimization

- Structure workflows to minimize dependencies between unchanged sections
- Use subgraphs for reusable workflow patterns
- Leverage the incremental execution model by making changes at graph endpoints

### Custom Node Packaging

- Place subgraphs in the designated `subgraphs/` directory within custom node packages
- Use the node pack name in `CustomNodeSubgraphEntryInfo` for proper namespacing
- Follow JSON format for subgraph definition files

---

<a id='page-model-loading'></a>

## Model Loading and Detection

### 相关页面

相关主题：[Diffusion Models](#page-diffusion-models), [Memory Management](#page-memory-management)

The source files required to generate this wiki page were not available. The provided context does not contain the following files, which are essential for documenting the Model Loading and Detection system:

**Required files that are missing:**

- `comfy/model_detection.py`
- `comfy/model_management.py`
- `comfy/model_patcher.py`
- `comfy/model_base.py`
- `comfy/supported_models.py`
- `comfy/lora.py`
- `folder_paths.py`

The context only includes:

- `README.md` (general documentation)
- `app/user_manager.py` (user data management)
- `app/model_manager.py` (model file manager - partial)
- `app/frontend_management.py` (frontend version management)
- `comfy/comfy_types/README.md` (type hinting documentation)

Without access to the actual source files for model loading and detection, this page cannot provide accurate technical details, code citations, or architectural diagrams.

To generate this page, the repository would need to be re-analyzed with the specific files listed above included in the context.

---

<a id='page-diffusion-models'></a>

## Diffusion Models

### 相关页面

相关主题：[Model Loading and Detection](#page-model-loading), [Text Processing and Encoders](#page-text-processing)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [comfy/ldm/modules/diffusionmodules/model.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/ldm/modules/diffusionmodules/model.py)
- [comfy/ldm/modules/diffusionmodules/openaimodel.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/ldm/modules/diffusionmodules/openaimodel.py)
- [comfy/ldm/flux/model.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/ldm/flux/model.py)
- [comfy/ldm/wan/model.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/ldm/wan/model.py)
- [comfy/ldm/hunyuan_video/model.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/ldm/hunyuan_video/model.py)
- [comfy/ldm/cogvideo/model.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/ldm/cogvideo/model.py)
- [comfy/samplers.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/samplers.py)
- [comfy/sample.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/sample.py)
</details>

# Diffusion Models

## Overview

Diffusion models in ComfyUI are probabilistic generative models that learn to reverse a forward diffusion process. By gradually denoising random noise through a learned reverse process, these models generate high-quality images, videos, and audio from latent representations.

ComfyUI implements a modular architecture supporting multiple diffusion model families:

| Model Family | Domain | Primary File |
|--------------|--------|--------------|
| Stable Diffusion 1.x/2.x and SDXL (UNet) | Image | `comfy/ldm/modules/diffusionmodules/openaimodel.py` |
| SD-family VAE encoder/decoder blocks | Image | `comfy/ldm/modules/diffusionmodules/model.py` |
| Flux | Image | `comfy/ldm/flux/model.py` |
| Wan | Video | `comfy/ldm/wan/model.py` |
| Hunyuan Video | Video | `comfy/ldm/hunyuan_video/model.py` |
| CogVideo | Video | `comfy/ldm/cogvideo/model.py` |

## Architecture

### Core Diffusion Module Structure

```mermaid
graph TD
    A[Latent Input] --> B[Diffusion Model]
    B --> C[UNet Architecture]
    C --> D[Time Embedding]
    C --> E[Residual Blocks]
    C --> F[Attention Layers]
    D --> G[Denoised Output]
    E --> G
    F --> G
    
    H[Sampler] --> I[Noise Schedule]
    I --> B
    G --> J[VAE Decode]
    J --> K[Final Output]
```
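The denoise loop in the diagram can be reduced to a toy Euler sampling loop: at each step the model predicts the clean sample and `x` moves along the sigma schedule. The `model` below is a stand-in function used only for illustration; real diffusion models are large neural networks, and ComfyUI's samplers are more elaborate.

```python
# Toy Euler sampling loop illustrating the reverse diffusion process.

def euler_sample(model, x, sigmas):
    """model(x, sigma) -> predicted fully-denoised sample (x0 prediction)."""
    for i in range(len(sigmas) - 1):
        denoised = model(x, sigmas[i])
        d = (x - denoised) / sigmas[i]            # derivative dx/dsigma
        x = x + d * (sigmas[i + 1] - sigmas[i])   # Euler step to the next sigma
    return x

# A toy "model" whose true clean sample is 3.0: it predicts x0 exactly,
# so the loop converges to 3.0 as sigma reaches zero.
result = euler_sample(lambda x, sigma: 3.0, x=10.0, sigmas=[1.0, 0.5, 0.0])
```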

### Supported Model Types

ComfyUI natively supports state-of-the-art open-source diffusion models across multiple domains:

#### Image Generation Models

| Model Type | Description | Documentation Link |
|------------|-------------|-------------------|
| Stable Diffusion 1.5 | Latent diffusion model for image generation | [Examples](https://comfyanonymous.github.io/ComfyUI_examples/) |
| Stable Diffusion XL | Enhanced SD with improved quality | Included in core |
| SDXL Turbo / LCM | Fast convergence models | [LCM Examples](https://comfyanonymous.github.io/ComfyUI_examples/lcm/) |
| Stable Diffusion 3 / Flux | MM-DiT architecture for superior quality | [Flux Examples](https://comfyanonymous.github.io/ComfyUI_examples/flux/) |
| Hunyuan DiT | Tencent's diffusion transformer | Included in core |
| Qwen Image | Multi-modal image generation | [Qwen Image Examples](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/) |
| HiDream | Advanced image generation | [HiDream Examples](https://comfyanonymous.github.io/ComfyUI_examples/hidream/) |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

#### Video Generation Models

| Model Type | Description | Documentation Link |
|------------|-------------|-------------------|
| Stable Video Diffusion | Frame interpolation and video generation | [Video Examples](https://comfyanonymous.github.io/ComfyUI_examples/video/) |
| Mochi | High-quality video synthesis | [Mochi Examples](https://comfyanonymous.github.io/ComfyUI_examples/mochi/) |
| LTX-Video | Lightweight video diffusion | [LTX Examples](https://comfyanonymous.github.io/ComfyUI_examples/ltxv/) |
| Hunyuan Video | Tencent's video generation | [Hunyuan Examples](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/) |
| Wan 2.1/2.2 | Comprehensive video models | [Wan Examples](https://comfyanonymous.github.io/ComfyUI_examples/wan/) |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

#### Audio Models

| Model Type | Description |
|------------|-------------|
| Stable Audio | Audio generation and synthesis |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

#### Image Editing Models

| Model Type | Description | Link |
|------------|-------------|------|
| Omnigen 2 | Unified image editing | [Examples](https://comfyanonymous.github.io/ComfyUI_examples/omnigen/) |
| Flux Kontext | In-context image editing | [Examples](https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-kontext-image-editing-model) |
| HiDream E1.1 | Advanced editing capabilities | [Examples](https://comfyanonymous.github.io/ComfyUI_examples/hidream/#hidream-e11) |
| Qwen Image Edit | Multi-modal editing | [Examples](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/#edit-model) |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Model Loading Architecture

### Base Diffusion Model Files

| File | Purpose |
|------|---------|
| `comfy/ldm/modules/diffusionmodules/model.py` | VAE encoder/decoder blocks shared by SD-family models |
| `comfy/ldm/modules/diffusionmodules/openaimodel.py` | UNet implementation for SD1.x/2.x and SDXL |
| `comfy/ldm/flux/model.py` | Flux/MM-DiT architecture implementation |
| `comfy/ldm/wan/model.py` | Wan video diffusion model |
| `comfy/ldm/hunyuan_video/model.py` | Hunyuan video diffusion |
| `comfy/ldm/cogvideo/model.py` | CogVideo model implementation |

### Model Loading Workflow

```mermaid
graph LR
    A[Model Checkpoint] --> B[Model Loader Node]
    B --> C[Load State Dict]
    C --> D[Architecture Detection]
    D --> E{Router}
    E -->|SD 1.5/2.x/SDXL| F[diffusionmodules/openaimodel.py]
    E -->|Flux| G[flux/model.py]
    E -->|Video| H[wan/hunyuan/cogvideo model.py]
```

## Sampling System

### Sampler Implementation

The sampling system is implemented in `comfy/samplers.py` and `comfy/sample.py`.

| Component | File | Function |
|-----------|------|----------|
| KSampler | `comfy/samplers.py` | Main sampling loop implementation |
| CFGGuider | `comfy/samplers.py` | Classifier-free guidance implementation |
| sample() | `comfy/sample.py` | Orchestrates the sampling process |

### Sampling Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| steps | int | Number of denoising steps |
| cfg | float | Classifier-free guidance scale |
| sampler_name | str | Sampler algorithm (e.g., euler, dpmpp_2m) |
| scheduler | str | Noise schedule type |
| denoise | float | Denoising strength (0.0-1.0) |
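In the API-format workflow JSON, these parameters appear as inputs on a `KSampler` node; link values are `[source_node_id, output_index]` pairs. The node ids and linked nodes below are arbitrary placeholders:

```python
import json

# Sketch of a KSampler node as it appears in an API-format workflow payload.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 8566257,
            "steps": 20,
            "cfg": 8.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 1.0,
            "model": ["4", 0],         # link to a checkpoint loader node
            "positive": ["6", 0],      # link to positive CLIPTextEncode
            "negative": ["7", 0],      # link to negative CLIPTextEncode
            "latent_image": ["5", 0],  # link to an empty latent node
        },
    }
}
payload = json.dumps({"prompt": ksampler_node})
```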

### Available Samplers

ComfyUI supports multiple sampling algorithms:

| Sampler Category | Algorithms |
|------------------|------------|
| Euler Family | euler, euler_ancestral |
| DPM++ | dpmpp_2m, dpmpp_sde, dpmpp_2m_sde (pair with the `karras` scheduler for Karras spacing) |
| DDIM | ddim |
| UniPC | uni_pc, uni_pc_bh2 |
| LCM | lcm (for LCM/SDXL-Turbo models) |

### Noise Schedules

| Scheduler | Description |
|-----------|-------------|
| normal | Standard noise schedule |
| karras | Optimized schedule for better quality |
| exponential | Exponential decay schedule |
| simple | Simplified schedule |
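The `karras` scheduler follows the spacing from Karras et al. (2022): sigma values are spaced uniformly in sigma^(1/rho), which concentrates steps at low noise levels. A small sketch (the default sigma range here is illustrative; real values depend on the model):

```python
# Karras-style sigma schedule: uniform spacing in sigma^(1/rho), rho=7 by
# convention. Samplers append a final zero sigma at the end.

def karras_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    ramp = [i / (n - 1) for i in range(n)]
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    sigmas = [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]
    return sigmas + [0.0]
```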

## Advanced Features

### Textual Inversion

ComfyUI supports textual inversion embeddings for style and concept customization.

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### LoRA Support

| LoRA Type | Description |
|-----------|-------------|
| Regular LoRA | Standard low-rank adaptation |
| LoCon | Low-rank adaptation extended to convolutional layers |
| LoHa | Low-rank Hadamard product adaptation |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Hypernetworks

Custom hypernetworks can be loaded and applied to modify model behavior.

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### ControlNet and T2I-Adapter

Structural guidance for diffusion models through:

| Type | Description |
|------|-------------|
| ControlNet | Conditioning via additional neural networks |
| T2I-Adapter | Lightweight adapters for structure guidance |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Workflow Composition

### Node Graph Architecture

```mermaid
graph TD
    A[Load Checkpoint] --> B[CLIP Text Encode]
    B --> C[KSampler]
    A --> D[VAE Encode]
    D --> C
    C --> E[VAE Decode]
    E --> F[Save Image]
    
    G[Positive Prompt] --> B
    H[Negative Prompt] --> B
```

### Example Workflows

| Workflow | Purpose | Link |
|----------|---------|------|
| txt2img | Text-to-image generation | [Examples](https://comfyanonymous.github.io/ComfyUI_examples/) |
| img2img | Image-to-image transformation | Included in core |
| Hires Fix | Two-pass upscaling | [Hires Fix](https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/) |
| Inpainting | Selective regeneration | [Inpaint](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/) |
| Area Composition | Multi-region composition | [Area Composition](https://comfyanonymous.github.io/ComfyUI_examples/area_composition/) |
| Upscale | Super-resolution | [Upscale Models](https://comfyanonymous.github.io/ComfyUI_examples/upscale_models/) |
| Model Merging | Combine model weights | [Model Merging](https://comfyanonymous.github.io/ComfyUI_examples/model_merging/) |
| GLIGEN | Grounded generation | [GLIGEN](https://comfyanonymous.github.io/ComfyUI_examples/gligen/) |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Performance Optimization

### Latent Preview with TAESD

ComfyUI provides real-time preview capabilities using TAESD (Tiny AutoEncoder for Stable Diffusion):

| Feature | Description |
|---------|-------------|
| Low-res Preview | Default fast latent preview |
| TAESD Preview | High-quality previews |
| --preview-method | CLI flag to select preview method |

To enable TAESD previews:

1. Download decoder files from [taesd repository](https://github.com/madebyollin/taesd/):
   - `taesd_decoder.pth`
   - `taesdxl_decoder.pth`
   - `taesd3_decoder.pth`
   - `taef1_decoder.pth`

2. Place files in `models/vae_approx` directory

3. Launch with `--preview-method taesd`

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### GPU Support

| Platform | Installation Command |
|----------|---------------------|
| NVIDIA (CUDA 12.1) | `pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121` |
| NVIDIA (CUDA 12.4) | `pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124` |
| NVIDIA (CUDA 12.6) | `pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126` |
| AMD (ROCm) | `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1` |
| Intel (XPU) | `pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu` |
| Apple Silicon | Install PyTorch nightly per [Apple Developer Guide](https://developer.apple.com/metal/pytorch/) |

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Memory Efficient Attention

For AMD GPUs with ROCm, experimental memory efficient attention can be enabled:

```bash
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
```

For potential speed improvements:

```bash
PYTORCH_TUNABLEOP_ENABLED=1 python main.py
```

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Execution Model

### Partial Graph Execution

ComfyUI's execution engine optimizes diffusion model runs:

> Only parts of the graph that have an output with all the correct inputs will be executed.
> Only parts of the graph that change from each execution to the next will be executed. If you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the part that depends on it will be re-executed.

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Execution Flow

```mermaid
graph TD
    A[Submit Workflow] --> B[Analyze Dependencies]
    B --> C[Identify Executable Nodes]
    C --> D[Execute Required Nodes]
    D --> E[Cache Results]
    E --> F[Return Outputs]
    
    G[Submit Same Workflow] --> H{Cached?}
    H -->|Yes| I[Skip Execution]
    H -->|No| J[Execute Changed Nodes]
    I --> F
    J --> K[Update Cache]
    K --> F
```

## API Integration

### API Nodes

ComfyUI includes optional API nodes for accessing paid models from external providers through the official [Comfy API](https://docs.comfy.org/tutorials/api-nodes/overview).

To disable API nodes:

```bash
python main.py --disable-api-nodes
```

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Offline Operation

ComfyUI works fully offline for core functionality:

> Works fully offline: core will never download anything unless you want it to.

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Release and Versioning

ComfyUI follows a structured release cycle:

| Release Type | Frequency | Description |
|--------------|-----------|-------------|
| Major Stable | ~Every 2 weeks | New stable versions (e.g., v0.7.0) |
| Patch | As needed | Backported fixes for stable releases |
| Nightly | Daily | Cutting-edge updates from master branch |

> Commits outside of the stable release tags may be very unstable and break many custom nodes.

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## See Also

- [ComfyUI Examples Repository](https://comfyanonymous.github.io/ComfyUI_examples/)
- [ComfyUI Documentation](https://docs.comfy.org/)
- [ComfyUI API Documentation](https://docs.comfy.org/tutorials/api-nodes/overview)
- [Comfy Cloud](https://www.comfy.org/cloud)

---

<a id='page-text-processing'></a>

## Text Processing and Encoders

### 相关页面

相关主题：[Diffusion Models](#page-diffusion-models)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [comfy/clip_model.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/clip_model.py)
- [comfy/sd1_clip.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/sd1_clip.py)
- [comfy/sdxl_clip.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/sdxl_clip.py)
- [comfy/text_encoders/flux.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/text_encoders/flux.py)
- [comfy/text_encoders/t5.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/text_encoders/t5.py)
- [comfy/text_encoders/llama.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/text_encoders/llama.py)
- [comfy/sd.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/sd.py)
</details>

# Text Processing and Encoders

## Overview

Text processing and encoding in ComfyUI provides the mechanism to convert human-readable text prompts into numerical representations (embeddings) that can be consumed by diffusion models. This system supports various model architectures including SD1.x, SDXL, Flux, and modern multimodal models.

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Architecture

```mermaid
graph TD
    A[User Text Prompt] --> B[Text Encoding Nodes]
    B --> C[CLIPTextEncode]
    B --> D[CLIP Text Encode Hires]
    B --> E[Model-Specific Encoders]
    C --> F[CLIP Models]
    E --> G[Flux Encoder]
    E --> H[T5 Encoder]
    E --> I[Llama Encoder]
    F --> J[Embedding Tensors]
    G --> J
    H --> J
    I --> J
    J --> K[Diffusion Model]
```

## CLIP Models

The `comfy/clip_model.py` module provides the foundational CLIP model implementation used across different model variants.

资料来源：[comfy/clip_model.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/clip_model.py)

### SD1 CLIP

The SD1 CLIP implementation (`comfy/sd1_clip.py`) handles text encoding for Stable Diffusion 1.x models.

资料来源：[comfy/sd1_clip.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/sd1_clip.py)

### SDXL CLIP

The SDXL CLIP implementation (`comfy/sdxl_clip.py`) extends text encoding capabilities for SDXL models with additional prompt handling.

资料来源：[comfy/sdxl_clip.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/sdxl_clip.py)

## Text Encoders Module

The `comfy/text_encoders/` directory contains specialized encoders for modern model architectures.

资料来源：[comfy/text_encoders/flux.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/text_encoders/flux.py), [comfy/text_encoders/t5.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/text_encoders/t5.py), [comfy/text_encoders/llama.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/text_encoders/llama.py)

### Flux Encoder

Handles text encoding for Flux models, typically combining CLIP and T5 encodings.

### T5 Encoder

Implements T5-based text encoding for models requiring transformer-based text processing.

### Llama Encoder

Provides Llama-based text encoding for advanced text understanding capabilities.

## Embeddings System

ComfyUI supports custom embeddings stored in the `models/embeddings` directory.

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

### Using Custom Embeddings

Embeddings can be referenced in the CLIPTextEncode node using the following syntax:

```
embedding:embedding_filename.pt
```

The `.pt` extension can be omitted when specifying embeddings.
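A hedged sketch of how such a reference could be resolved against the embeddings directory; the extension list and lookup order are assumptions for illustration, not ComfyUI's actual code:

```python
# Resolve an "embedding:name" token against a set of available embedding
# filenames, trying the bare name first and then common extensions.

def resolve_embedding(token, available):
    name = token.removeprefix("embedding:")
    for candidate in (name, name + ".pt", name + ".safetensors"):
        if candidate in available:
            return candidate
    return None

files = {"bad_hands.pt", "style.safetensors"}
assert resolve_embedding("embedding:bad_hands", files) == "bad_hands.pt"
assert resolve_embedding("embedding:style", files) == "style.safetensors"
assert resolve_embedding("embedding:missing", files) is None
```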

## Model Integration

Text encoders are integrated into the broader model system through `comfy/sd.py`, which coordinates between different model components and their respective encoders.

资料来源：[comfy/sd.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/sd.py)

## Supported Models

| Model Family | Text Encoder(s) | Notes |
|--------------|-----------------|-------|
| SD 1.x | CLIP | Standard text encoding |
| SDXL | CLIP | Dual CLIP support |
| Flux | CLIP + T5 | Combined encoding approach |
| HunyuanDiT | Custom | Model-specific implementation |

## Text Encoding Workflow

```mermaid
graph LR
    A1[Positive Prompt] --> B[CLIPTextEncode]
    A2[Negative Prompt] --> C[CLIPTextEncode]
    B --> D[Positive Embeddings]
    C --> E[Negative Embeddings]
    D --> F[KSampler]
    E --> F
    F --> G[Image Generation]
```

## Node Types

### CLIPTextEncode

The primary node for encoding text prompts into embeddings.

**Input Parameters:**
- `text`: The text prompt to encode
- `clip`: The CLIP model to use for encoding

**Output:**
- `CONDITIONING`: The encoded text representation

### Specialized Encoding Nodes

| Node | Purpose | Use Case |
|------|---------|----------|
| CLIP Text Encode Hires | High-resolution aware encoding | Multi-pass workflows |
| Model-Specific Encode | Architecture-specific handling | Flux, SDXL, etc. |

## Best Practices

1. **Prompt Formatting**: Use proper syntax for weight adjustments (e.g., `(text:1.2)`)
2. **Embedding Loading**: Place custom embeddings in `models/embeddings`
3. **Model Matching**: Ensure text encoder matches the generation model
4. **Batch Processing**: Consider CLIP sequence length limitations
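The weighting syntax from item 1 can be parsed with a simplified tokenizer sketch; the real prompt parser also handles nested parentheses and escapes, which this illustration omits:

```python
import re

# Split a prompt into (fragment, weight) pairs, treating "(text:1.2)" as a
# weighted fragment and everything else as weight 1.0.

def parse_weights(prompt):
    parts = []
    pattern = re.compile(r"\(([^():]+):([0-9.]+)\)")
    pos = 0
    for m in pattern.finditer(prompt):
        if m.start() > pos:
            parts.append((prompt[pos:m.start()], 1.0))
        parts.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        parts.append((prompt[pos:], 1.0))
    return parts
```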

## Related Components

- **Model Management**: `app/model_manager.py` exposes the model file listing and preview endpoints used by the frontend
- **Type System**: `comfy/comfy_types/` provides type hints for node development including IO types for text processing

---

<a id='page-memory-management'></a>

## Memory Management

### 相关页面

相关主题：[Model Loading and Detection](#page-model-loading), [Execution Engine](#page-execution-engine)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [comfy/memory_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/memory_management.py) *(not found in the current retrieval context)*
- [comfy/model_management.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/model_management.py) *(not found in the current retrieval context)*
- [comfy/pinned_memory.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/pinned_memory.py) *(not found in the current retrieval context)*
- [comfy/pixel_space_convert.py](https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/pixel_space_convert.py) *(not found in the current retrieval context)*

**Note**: The retrieval context for this page did not include the core memory-management source files listed above. The content below is based mainly on publicly documented information in README.md and inference from the available context; consult the actual source files for implementation details.
</details>

# Memory Management

ComfyUI implements a sophisticated **smart memory management system** that enables efficient execution of large AI models on hardware with limited VRAM. This system is fundamental to ComfyUI's ability to run complex workflows on consumer-grade GPUs.

## Overview

The memory management subsystem in ComfyUI handles the lifecycle of model data in GPU and system memory. Its primary objectives include:

- **Automatic model offloading**: Dynamically moving models between GPU VRAM and system RAM
- **VRAM optimization**: Enabling execution on GPUs with as little as 1GB of VRAM
- **Execution caching**: Storing partial execution results to avoid redundant computation
- **Memory cleanup**: Properly releasing resources when models are no longer needed

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Architecture Overview

```mermaid
graph TD
    A[Workflow Execution] --> B[Memory Manager]
    B --> C{VRAM Available?}
    C -->|Yes| D[Load Model to GPU]
    C -->|No| E[Smart Offloading]
    E --> F[Partial GPU Loading]
    F --> G[System RAM Swap]
    D --> H[Execute Nodes]
    G --> H
    H --> I[Cache Results]
    I --> J[Memory Cleanup]
    J --> K[Free VRAM]
```

## Key Memory Management Features

### Smart Offloading

ComfyUI can automatically run large models on GPUs with limited VRAM through intelligent offloading strategies. When a model exceeds available VRAM, the system selectively keeps portions of the model in GPU memory while swapping other components to system RAM.

### Low VRAM Support

ComfyUI supports execution on GPUs with as little as **1GB of VRAM**. The table below sketches indicative strategies by VRAM level:

| VRAM Level | Strategy |
|------------|----------|
| 1GB+ | Full offloading with sequential layer execution |
| 4GB+ | Partial offloading with larger batch sizes |
| 8GB+ | Minimal offloading, models stay loaded |
| 16GB+ | Multiple models can stay in memory simultaneously |
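The table can be read as a simple strategy selector. The thresholds and strategy names below restate this page's summary for illustration; they are not a ComfyUI API:

```python
# Map detected VRAM (in GB) to an indicative offload strategy, mirroring the
# table above. Illustrative only.

def pick_strategy(vram_gb):
    if vram_gb >= 16:
        return "keep_multiple_models"
    if vram_gb >= 8:
        return "minimal_offload"
    if vram_gb >= 4:
        return "partial_offload"
    return "sequential_offload"

assert pick_strategy(24) == "keep_multiple_models"
assert pick_strategy(6) == "partial_offload"
assert pick_strategy(2) == "sequential_offload"
```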

### Execution Optimization

The system implements intelligent execution optimization where:

1. **Only changed graph segments execute** - If you submit the same graph twice, only the first execution runs
2. **Dependency tracking** - Only parts of the graph that depend on changed nodes are re-executed
3. **Partial graph execution** - Only graph segments with all correct inputs are executed

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Model Loading Strategies

ComfyUI supports multiple model formats and loading strategies:

### Supported Model Formats

| Format | Description | Safety |
|--------|-------------|--------|
| `.safetensors` | Safe tensor format, recommended | ✅ No arbitrary code execution |
| `.ckpt` | Pickle-based checkpoint files | ⚠️ Loaded with safe unpickling |
| `.pt` / `.pth` | PyTorch state dicts | ⚠️ Pickle-based, legacy |

### Memory-Efficient Loading

The system implements safe loading for all model formats, preventing arbitrary code execution from malicious model files.
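The `.safetensors` case illustrates why safe loading is possible: all metadata lives in a JSON header that can be inspected without unpickling anything. A minimal header reader following the published format (8-byte little-endian header length, then that many bytes of JSON):

```python
import json
import struct

# Read only the JSON header of a safetensors file: tensor names, dtypes,
# shapes, and byte offsets, with no code execution involved.

def read_safetensors_header(data):
    (header_len,) = struct.unpack("<Q", data[:8])
    return json.loads(data[8:8 + header_len])

# Build a minimal in-memory file: one float32 tensor of shape [2] (8 bytes).
header = json.dumps({
    "weight": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}
}).encode()
blob = struct.pack("<Q", len(header)) + header + b"\x00" * 8
info = read_safetensors_header(blob)
```

Real loaders additionally validate the offsets before touching the tensor data; this sketch only shows the header step.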

## GPU Memory Options

### Command Line Options

ComfyUI provides several command-line options for memory management:

```bash
# CPU-only execution (slowest, works without a GPU)
python main.py --cpu

# Pin execution to a specific GPU
python main.py --cuda-device 0

# Force aggressive offloading on low-VRAM GPUs
python main.py --lowvram
```

### Preview Method Configuration

For latent preview generation, ComfyUI supports different preview methods that vary in memory usage:

| Method | Quality | Memory Usage | Description |
|--------|---------|--------------|-------------|
| `auto` | Low | Minimal | Default fast latent preview |
| `taesd` | High | Low | TAESD decoder for high-quality previews |

To enable high-quality previews:

```bash
# Download TAESD decoder files to models/vae_approx/
# Then launch with:
python main.py --preview-method taesd
```

资料来源：[README.md](https://github.com/Comfy-Org/ComfyUI/blob/main/README.md)

## Memory Management Classes

Based on the module structure, a plausible decomposition of the memory management system is sketched below (illustrative, since the corresponding source files were not available):

```mermaid
classDiagram
    class MemoryManager {
        +manage_vram()
        +offload_model()
        +load_model()
    }
    class ModelManager {
        +register_model()
        +get_model()
        +unload_model()
    }
    class PinnedMemory {
        +allocate_pinned()
        +transfer_to_device()
        +free_pinned()
    }
    class PixelSpaceConverter {
        +to_latent()
        +to_pixel()
        +convert_tensor()
    }
```

### Module Responsibilities

| Module | Purpose |
|--------|---------|
| `memory_management.py` | Core VRAM management and model placement logic |
| `model_management.py` | Model lifecycle, registration, and caching |
| `pinned_memory.py` | Pinned memory allocation for efficient CPU-GPU transfers |
| `pixel_space_convert.py` | Conversion between pixel and latent image spaces |

## Execution Flow with Memory Management

```mermaid
sequenceDiagram
    participant User
    participant Workflow
    participant MemoryManager
    participant ModelCache
    participant GPU
    participant SystemRAM

    User->>Workflow: Submit Workflow
    Workflow->>MemoryManager: Request Model
    MemoryManager->>ModelCache: Check Cache
    alt Model in Cache
        ModelCache-->>MemoryManager: Return Model Ref
    else Model Not Cached
        MemoryManager->>GPU: Check VRAM
        alt Sufficient VRAM
            GPU-->>MemoryManager: OK
            MemoryManager->>GPU: Load Model
        else Insufficient VRAM
            MemoryManager->>SystemRAM: Offload Parts
            MemoryManager->>GPU: Load Partial Model
        end
    end
    MemoryManager-->>Workflow: Model Ready
    Workflow->>GPU: Execute Nodes
    GPU-->>Workflow: Results
```

## Best Practices

1. **Close unused workflows** - Free memory for new models
2. **Use `.safetensors` format** - Safer and often faster loading
3. **Batch similar operations** - Reduces model loading/unloading cycles
4. **Monitor VRAM usage** - Use system tools to track memory consumption

## Configuration Files

ComfyUI supports model path configuration through `extra_model_paths.yaml`:

```yaml
# Example extra_model_paths.yaml entry; the format follows
# extra_model_paths.yaml.example in the repository root.
a111:
    base_path: /path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    loras: models/Lora
```

This allows sharing model directories with other Stable Diffusion installations, reducing duplicate storage.
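A sketch of how entries from such a file could be merged into a search-path registry, modeled loosely on what `folder_paths` does; the function and registry names here are illustrative, not ComfyUI's API:

```python
import os

# Append extra search paths for each model folder type, resolving relative
# entries against the configured base path.

def add_extra_paths(registry, base_path, entries):
    for folder_type, rel in entries.items():
        registry.setdefault(folder_type, []).append(os.path.join(base_path, rel))
    return registry

registry = {"checkpoints": ["models/checkpoints"]}
add_extra_paths(registry, "/shared/sd", {"checkpoints": "ckpt", "loras": "loras"})
```

Existing default paths stay first in each list, so shared directories act as additional fallbacks rather than replacements.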

## Related Documentation

- [GPU Requirements](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI) - Hardware recommendations
- [Model Installation](README.md#dependencies) - Setting up models

---


## Doramagic Pitfall Log

Project: Comfy-Org/ComfyUI

Summary: 7 potential pitfalls found, 0 of which are high/blocking; highest priority: identity pitfall - repository name and install name do not match.

## 1. Identity Pitfall · Repository name and install name do not match

- Severity: medium
- Evidence strength: runtime_trace
- Finding: the repository name `comfyui` does not fully match the install entry point `comfy-cli`.
- User impact: users who search for the package by repository name, or for the repository by package name, can easily end up at the wrong entry point.
- Suggested check: confirm the package-name mapping on npm/PyPI/GitHub and in the official README.
- Repro command: `pip install comfy-cli`
- Guard action: the page must show both the repo name and the real install entry point so users do not search for the wrong package.
- Evidence: identity.distribution | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | repo=comfyui; install=comfy-cli

## 2. Capability Pitfall · Capability judgment relies on an assumption

- Severity: medium
- Evidence strength: source_linked
- Finding: README/documentation is current enough for a first validation pass.
- User impact: if the assumption does not hold, users do not get the promised capability.
- Suggested check: turn the assumption into a downstream verification checklist.
- Guard action: assumptions must be converted into verification items; they cannot be stated as fact before verification results exist.
- Evidence: capability.assumptions | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | README/documentation is current enough for a first validation pass.

## 3. Maintenance Pitfall · Maintenance activity unknown

- Severity: medium
- Evidence strength: source_linked
- Finding: last_activity_observed is not recorded.
- User impact: new, abandoned, and active projects get mixed together, lowering trust in recommendations.
- Suggested check: add GitHub signals for recent commits, releases, and issue/PR responsiveness.
- Guard action: while maintenance activity is unknown, recommendation strength must not be marked high-trust.
- Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | last_activity_observed missing

## 4. Security/Permission Pitfall · Downstream validation found a risk item

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: downstream review has already been requested; the page must not downplay this.
- Suggested check: enter the security/permission governance review queue.
- Guard action: while downstream risks exist, the review/recommendation downgrade must be kept in place.
- Evidence: downstream_validation.risk_items | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | no_demo; severity=medium

## 5. Security/Permission Pitfall · Scoring risk present

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: the risk affects whether the project is suitable for ordinary users to install.
- Suggested check: write the risk into the boundary card and confirm whether manual review is needed.
- Guard action: scoring risks must be recorded on the boundary card, not kept only as an internal score.
- Evidence: risks.scoring_risks | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | no_demo; severity=medium

## 6. Maintenance Pitfall · Issue/PR responsiveness unknown

- Severity: low
- Evidence strength: source_linked
- Finding: issue_or_pr_quality=unknown.
- User impact: users cannot tell whether anyone will respond when they run into problems.
- Suggested check: sample recent issues/PRs to see whether they sit unhandled for long periods.
- Guard action: while issue/PR responsiveness is unknown, a maintenance-risk warning is required.
- Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | issue_or_pr_quality=unknown

## 7. Maintenance Pitfall · Release cadence unclear

- Severity: low
- Evidence strength: source_linked
- Finding: release_recency=unknown.
- User impact: install commands and documentation may lag behind the code, raising the chance of users hitting problems.
- Suggested check: confirm whether the latest release/tag matches the README install commands.
- Guard action: while the release cadence is unknown or stale, install instructions must note possible drift.
- Evidence: evidence.maintainer_signals | github_repo:589831718 | https://github.com/Comfy-Org/ComfyUI | release_recency=unknown

<!-- canonical_name: Comfy-Org/ComfyUI; human_manual_source: deepwiki_human_wiki -->
