# vllm - Doramagic AI Context Pack

> Positioning: a pre-install experience and judgment asset. It gives the host AI a good starting point, but it does not mean the target project has been installed, executed, or verified.

## Sufficiency Principle

- **Sufficiency, not compression**: an AI Context Pack should be complete enough for the host AI to understand the project's value, capability boundaries, entry points, risks, and evidence sources before starting work; it may be organized in layers, but the shortest possible summary is not the goal.
- **Compression policy**: compress only noise and duplicated content, never context that affects judgment or the quality of the work ahead.

## How the Host AI Should Use This

You are reading the AI Context Pack that Doramagic compiled for vllm. Treat it as pre-work context: help the user understand who it suits, what it can do, how to get started, what must be verified after installation, and where the risks are. Do not claim that you have installed, run, or executed the target project.

## Claim Consumption Rules

- **Source of facts**: Repo Evidence + the Claim/Evidence Graph; the Human Wiki only provides salience, terminology, and narrative structure.
- **Minimum status for a fact**: `supported`
- `supported`: may be used as a project fact, but answers must cite the claim_id and the evidence path.
- `weak`: may only be used as a low-confidence lead; the user must be asked to verify further.
- `inferred`: may only be used for risk warnings or open questions; never package it as a project fact.
- `unverified`: must not be used as a fact; state explicitly that the evidence is insufficient.
- `contradicted`: the conflicting sources must be shown; do not pick one version on the user's behalf.
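
As a minimal illustration of these rules (the exact wording below is an assumption; only the claim ID and paths appear in this pack), a compliant answer fragment for a `supported` fact might look like:

```text
Fact: the project documents a command-line launch/install flow.
Basis: clm_0001 (supported, 0.86), evidence: AGENTS.md, docs/getting_started/quickstart.md
Caveat: host-AI compatibility is unverified; flag it for post-install checks.
```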

## Who It Suits Best

- **AI researchers or research-oriented agent builders**: the README is explicitly organized around research, experimentation, or paper workflows. Evidence: `README.md` Claim: `clm_0002` supported 0.86

## What It Can Do

- **Command-line launch or install flow** (requires post-install verification): the project docs contain executable commands; real use requires running them in a local or host environment. Evidence: `AGENTS.md`, `docs/getting_started/quickstart.md` Claim: `clm_0001` supported 0.86

## How to Get Started

- `pip install --upgrade uv` Evidence: `docs/getting_started/quickstart.md` Claim: `clm_0003` unverified 0.25
- `curl http://localhost:8000/v1/models` Evidence: `docs/getting_started/quickstart.md` Claim: `clm_0004` unverified 0.25
- `curl http://localhost:8000/v1/completions \` Evidence: `docs/getting_started/quickstart.md` Claim: `clm_0005` unverified 0.25
- `curl http://localhost:8000/v1/chat/completions \` Evidence: `docs/getting_started/quickstart.md` Claim: `clm_0006` unverified 0.25
- `curl -LsSf https://astral.sh/uv/install.sh | sh` Evidence: `AGENTS.md` Claim: `clm_0007` supported 0.86 (a consolidated sketch of these commands follows below)
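
A minimal end-to-end sketch of the leads above, assuming the quickstart works as documented. The model name and JSON body are illustrative assumptions, not verified facts from this pack, and the flow as a whole stays `unverified` until run in an isolated environment.

```bash
# Hedged sketch, not a verified runbook. The model name is an assumption.
curl -LsSf https://astral.sh/uv/install.sh | sh   # install uv (clm_0007)
uv venv .vllm-trial && source .vllm-trial/bin/activate
uv pip install vllm

# Serve a model (blocks this shell; use a second terminal in practice).
vllm serve Qwen/Qwen2.5-1.5B-Instruct

# From another shell: list models, then send a completion request.
curl http://localhost:8000/v1/models
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-1.5B-Instruct", "prompt": "Hello", "max_tokens": 16}'
```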

## Judgment Card Before Continuing

- **Current recommendation**: start with a role-matching trial
- **Why**: this project reads more like a role library; the core risks are picking the wrong role or mistaking role copy for execution capability. Try role matching with Prompt Preview first, then decide whether to import into a sandbox.

### 30-Second Judgment

- **What to do now**: start with a role-matching trial
- **Minimum safe next step**: try role matching with Prompt Preview first; import in isolation only once satisfied
- **Do not trust yet**: role quality and task fit cannot be taken at face value.
- **Continuing will touch**: role-selection bias, command execution, host AI configuration

### What You Can Trust Now

- **Audience lead: AI researchers or research-oriented agent builders** (supported): backed by a supported claim or project evidence, though this still does not equal real post-install results. Evidence: `README.md` Claim: `clm_0002` supported 0.86
- **Capability exists: command-line launch or install flow** (supported): you can trust that the project contains leads for this capability; whether it fits your specific task still needs a trial or post-install verification. Evidence: `AGENTS.md`, `docs/getting_started/quickstart.md` Claim: `clm_0001` supported 0.86
- **Quick Start / install-command leads exist** (supported): you can trust that launch or install entry points appear in the project docs; do not run them in your primary environment on that basis alone. Evidence: `AGENTS.md` Claim: `clm_0007` supported 0.86

### What You Cannot Trust Yet

- **Role quality and task fit cannot be taken at face value.** (unverified): a role library proves there are many roles, not that each role suits your specific task, nor that any role produces high-quality results.
- **Role copy must not be mistaken for real execution capability.** (unverified): before installation you can only judge whether role descriptions match the task profile, not prove that it can complete tasks inside the host AI.
- **Real output quality cannot be trusted before installation.** (unverified): Prompt Preview can only show how it guides; it cannot prove result quality in a real project.
- **Host AI version compatibility cannot be trusted before installation.** (unverified): loading rules and version differences across hosts such as Claude, Cursor, Codex, and Gemini must be verified in a real environment.
- **"It will not pollute existing host AI behavior" cannot be taken on faith.** (inferred): Skill, plugin, and AGENTS/CLAUDE/GEMINI instructions may change the host AI's default behavior. Evidence: `AGENTS.md`, `CLAUDE.md`
- **Safe rollback cannot be assumed.** (unverified): unless the project explicitly provides uninstall and recovery instructions, verify in an isolated environment first.
- **After a real install, is it compatible with the user's current host AI version?** (unverified): compatibility can only be verified in the actual host environment.
- **Does the project's output quality satisfy the user's specific task?** (unverified): a pre-install preview can only show flow and boundaries; it cannot replace real evaluation.

### What Continuing Will Touch

- **Role-selection bias**: the user's judgment about which expert role should handle the task. Reason: picking the wrong role makes the AI answer from the wrong professional lens, wasting time or misleading decisions.
- **Command execution**: package managers, network downloads, local plugin directories, project config, or the user's home directory. Reason: the very first command can already change the environment; decide whether it is worth running first. Evidence: `AGENTS.md`, `docs/getting_started/quickstart.md`
- **Host AI configuration**: plugin, Skill, or rule-loading configuration for hosts such as Claude/Codex/Cursor/Gemini/OpenCode. Reason: host configuration changes how the AI works afterwards and may conflict with the user's existing rules. Evidence: `AGENTS.md`, `CLAUDE.md`
- **Local environment or project files**: install results, plugin caches, project config, or local dependency directories. Reason: before installation neither the write scope nor the rollback path can be proven; verify in isolation. Evidence: `AGENTS.md`, `docs/getting_started/quickstart.md`
- **Host AI context**: the AI Context Pack, Prompt Preview, Skill routing, risk rules, and project facts. Reason: imported context shapes the host AI's later judgments; unverified items must never be packaged as facts.

### Minimum Safe Next Steps

- **Run Prompt Preview first**: validate the task profile and role match with an interactive trial before importing the whole role library. (Applies to: any project, especially when output quality is unknown.)
- **Trial-install only in an isolated directory or test account**: keep install commands from polluting the primary host AI, real projects, or the user's home directory; a sketch follows after this list. (Applies to: when there are leads involving command execution, plugin configuration, or local writes.)
- **Back up the host AI configuration first**: Skill, plugin, and rule files can change the default behavior of Claude/Cursor/Codex. (Applies to: when a plugin manifest, Skill, or host rule entry point exists.)
- **Verify one minimal task after installation**: confirm loading, compatibility, output quality, and rollback before deciding on deeper adoption. (Applies to: when moving from trial into a real workflow.)
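
A minimal isolation sketch for the trial-install item above. The directory names and the use of `uv` are assumptions aligned with this pack's quickstart leads, not verified instructions; note that `uv` also writes to its own package cache outside this directory.

```bash
# Keep the whole trial inside one throwaway directory and venv.
mkdir -p ~/sandbox/vllm-trial && cd ~/sandbox/vllm-trial
uv venv .venv && source .venv/bin/activate
uv pip install vllm   # installs into .venv (uv keeps a cache in its default dir)

# Exit path: everything is removable with one delete.
# deactivate && rm -rf ~/sandbox/vllm-trial
```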

### Exit Paths

- **Preserve the pre-install state**: record the original host configuration and project state so you can later judge whether recovery is possible; a backup sketch follows after this list.
- **Be ready to remove host plugin / Skill / rule entry points**: if behavior turns anomalous after a trial install, you can restore the host AI to its pre-trial state.
- **Keep the original role-selection record**: if output drifts off-topic, return to the task-profiling stage and re-select a role instead of pushing on with the wrong one.
- **Log install commands and write paths**: without explicit uninstall instructions, you at least need to know which directories or configs require manual cleanup.
- **If there is no rollback path, do not enter the primary environment**: an irreversible step is a blocker before continuing, not something to push past on trust or luck.
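
A small sketch for the first item. Which directories a given host AI actually uses for configuration is not established by this pack's evidence; the paths below are hypothetical placeholders you must replace per host.

```bash
# Snapshot candidate host-AI config locations before a trial install.
# HOST_CONFIG_DIRS is hypothetical; substitute your host's real paths.
HOST_CONFIG_DIRS="$HOME/.claude $HOME/.cursor"
STAMP=$(date +%Y%m%d-%H%M%S)
tar czf "host-config-backup-$STAMP.tar.gz" $HOST_CONFIG_DIRS 2>/dev/null
echo "wrote host-config-backup-$STAMP.tar.gz"
```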

## Preview-Only Scope

- Explain who the project suits and what it can do
- Demonstrate typical conversation flows based on the project docs
- Help the user judge whether it is worth installing or researching further

## What Must Be Verified After Installation

- Actually installing Skills, plugins, or a CLI
- Executing scripts, modifying local files, or accessing external services
- Verifying real output quality, performance, and compatibility

## Boundary and Risk Card

- **Mistaking the pre-install preview for a real run**: users may overestimate how much configuration, permission, and compatibility verification the project has already completed. Handling: clearly distinguish prompt_preview_can_do from runtime_required. Claim: `clm_0008` inferred 0.45
- **Command execution modifies the local environment**: install commands may write to the user's home directory, host plugin directories, or project config. Handling: run them in an isolated environment or test account first. Evidence: `AGENTS.md`, `docs/getting_started/quickstart.md` Claim: `clm_0009` supported 0.86
- **To confirm**: after a real install, is it compatible with the user's current host AI version? Reason: compatibility can only be verified in an actual host environment.
- **To confirm**: does the project's output quality satisfy the user's specific task? Reason: a pre-install preview only shows flow and boundaries; it cannot replace real evaluation.
- **To confirm**: do the install commands require network access, special permissions, or global writes? Reason: this shapes install risk in both enterprise and personal environments.

## Pre-Work Context

### Load Order

- Read how_to_use.host_ai_instruction first to establish the boundaries of this pre-install judgment asset.
- Read claim_graph_summary to confirm that facts come from the Claim/Evidence Graph, not from the Human Wiki narrative.
- Then read intended_users, capabilities, and quick_start_candidates to judge whether the user is a match.
- For concrete tasks, consult role_skill_index first, then evidence_index.
- When real installation, file modification, network access, performance, or compatibility comes up, switch to risk_card and boundaries.runtime_required.

### Task Routing

- **Command-line launch or install flow**: first state that this is a post-install-verified capability, then give a pre-install checklist (a checklist sketch follows below). Boundary: must be verified by a real install or run. Evidence: `AGENTS.md`, `docs/getting_started/quickstart.md` Claim: `clm_0001` supported 0.86
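
A hedged sketch of what that pre-install checklist might probe. These are generic, read-only checks assumed relevant for a GPU serving stack; none of them is a requirement stated by this pack's evidence.

```bash
# Read-only environment probes; nothing here installs or writes config.
python3 --version                                   # interpreter present?
nvidia-smi || echo "no NVIDIA GPU visible"          # driver/GPU check (CUDA path only)
df -h "$HOME" | tail -n 1                           # disk headroom for model weights
curl -sI https://pypi.org >/dev/null && echo "network OK" || echo "network blocked"
```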

### Context Scale

- Total files: 4834
- Important-file coverage: 40/4834
- Evidence index entries: 80
- Role / Skill entries: 79

### Handling Insufficient Evidence

- **missing_evidence**: state that evidence is insufficient and ask the user for target files, README passages, or post-install verification records; do not fill in facts. (An illustrative response template follows this list.)
- **out_of_scope_request**: state that the task is outside the evidence scope of this AI Context Pack, and suggest the user consult the Human Manual first or verify after a real install.
- **runtime_request**: provide the pre-install checklist and command sources, but do not execute commands for the user or claim they were executed.
- **source_conflict**: present the conflicting sources side by side, mark them as pending verification, and do not force a choice.
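
An illustrative response shape for the missing_evidence case; the wording is an assumption, not a format this pack prescribes:

```text
I cannot state this as a project fact: no supported claim covers it.
Helpful inputs: the target file path, the relevant README passage,
or a post-install verification record. Until provided, this stays
labeled [evidence insufficient] rather than being filled in.
```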

## Prompt Recipes

### Fit Assessment

- Goal: judge whether this project fits the user's current task.
- Expected output: a fit verdict, key reasons, evidence citations, what can be previewed before install, what must be verified after install, and next-step suggestions.

```text
Based on the vllm AI Context Pack, first ask me 3 essential questions, then judge whether it fits my task. The answer must cover: who it suits, what it can do, what it cannot do, whether it is worth installing, and where the evidence comes from. All project facts must cite evidence_refs, source_paths, or claim_id.
```

### Pre-Install Experience

- Goal: let the user feel the core workflow before installing, without packaging the preview as real capability or marketing promises.
- Expected output: an experience script with boundary labels, a post-install verification checklist, and a cautious recommendation; no real-run promises or hard-sell wording.

```text
Treat vllm as a pre-install experience asset, not an installed tool or a real runtime environment.

Output exactly four sections:
1. First ask me 3 essential questions.
2. Give an "experience script": use the three labels [previewable pre-install], [must verify post-install], and [evidence insufficient] to show how it might guide a workflow.
3. Give a post-install verification checklist: list which capabilities can only be confirmed after a real install, real host loading, and a real project run.
4. Give a cautious recommendation: only "worth further research / trial install", "gather more information before judging", or "not recommended to continue"; never endorse the project on its behalf.

Hard boundaries:
- Do not claim to have installed, run, executed tests, modified files, or produced real results.
- Do not use promissory wording such as "auto-adapts", "guaranteed to pass", "perfect fit", or "strongly recommend installing".
- When describing post-install behavior, use conditionals such as "if installation succeeds and the host loads the Skill correctly, it might...".
- The experience script may only contain sample lines / hypothetical flow: use "might ask / might suggest / might show", never "written / generated / passed / running / generating".
- Prompt Preview does not hand out install commands; if the user wants a trial install, only point them to the Quick Start and Risk Card first, and to verification in an isolated environment.
- All project facts must come from supported claims, evidence_refs, or source_paths; inferred/unverified items may only appear as risks or open questions.
```

### Role / Skill Selection

- Goal: pick the best-matching assets among the project's roles or Skills.
- Expected output: a list of candidate roles or Skills, each with applicable scenarios, evidence paths, risk boundaries, and whether post-install verification is required.

```text
Read role_skill_index and recommend the 3-5 roles or Skills most relevant to my target task. For each recommendation, state the applicable scenario, likely output, risk boundary, and evidence_refs.
```

### Risk Pre-Check

- Goal: identify environment, permission, rule-conflict, and quality risks before installing or adopting.
- Expected output: a checklist covering environment, permissions, dependencies, licensing, host conflicts, quality risks, and unknowns.

```text
Based on risk_card, boundaries, and quick_start_candidates, give me a pre-install risk checklist. Do not execute commands for me; only explain what I should check, why it matters, and what a failure would affect.
```

### Host AI Kickoff Instruction

- Goal: turn the project context into a host AI instruction for use before a conversation starts.
- Expected output: a pre-work instruction with clear boundaries and explicit evidence citations, ready to paste into a host AI.

```text
Based on the vllm AI Context Pack, generate a pre-work instruction I can paste into a host AI. The instruction must honor not_runtime=true and must not claim that the project has been installed, run, or produced real results.
```


## Role / Skill Index

- 79 role / Skill / project-doc entries indexed in total.

- **Welcome to vLLM** (project_doc): light/dark vLLM logo image markup only, no prose. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/README.md`
- **Summary** (project_doc): API documentation for vLLM's configuration classes. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/api/README.md`
- **Benchmark Suites** (project_doc): vLLM provides comprehensive benchmarking tools for performance testing and evaluation: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/benchmarking/README.md`
- **vLLM CLI Guide** (project_doc): The vllm command-line tool is used to run and manage vLLM models. You can start by viewing the help message with: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/README.md`
- **Configuration Options** (project_doc): This section lists the most common options for running vLLM. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/configuration/README.md`
- **Contributing to vLLM** (project_doc): Thank you for your interest in contributing to vLLM! Our community is open to everyone and welcomes all kinds of contributions, no matter how small or large. There are several ways you can contribute to the project: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/contributing/README.md`
- **Summary** (project_doc): !!! important Many decoder language models can now be automatically loaded using the Transformers modeling backend ../../models/supported models.md transformers without having to implement them in vLLM. See if vllm serve works first! Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/contributing/model/README.md`
- **Examples** (project_doc): vLLM's examples are organized into the following categories: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/examples/README.md`
- **Features** (project_doc): The tables below show mutually exclusive features and the support on some hardware. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/features/README.md`
- **Quantization** (project_doc): Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/features/quantization/README.md`
- **Speculative Decoding** (project_doc): This document shows how to use Speculative Decoding https://arxiv.org/pdf/2302.01318 with vLLM to reduce inter-token latency under medium-to-low QPS query per second , memory-bound workloads. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/features/speculative_decoding/README.md`
- **Installation** (project_doc): vLLM supports the following hardware platforms: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/getting_started/installation/README.md`
- **Pooling Models** (project_doc): !!! note We currently support pooling models primarily for convenience. This is not guaranteed to provide any performance improvements over using Hugging Face Transformers or Sentence Transformers directly. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/models/pooling_models/README.md`
- **Weight Transfer** (project_doc): vLLM provides a pluggable weight transfer system for synchronizing model weights from a training process to the inference engine during reinforcement learning RL workflows. This is essential for RLHF, GRPO, and other online RL methods where the policy model is iteratively updated during training and the updated weights must be reflected in the inference engine for rollout generation. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/training/weight_transfer/README.md`
- **Using vLLM** (project_doc): First, vLLM must be installed ../getting started/installation/README.md for your chosen device in either a Python or Docker environment. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/usage/README.md`
- **Agent Instructions for vLLM** (project_doc): These instructions apply to all AI-assisted contributions to vllm-project/vllm . Breaching these guidelines can result in automatic banning. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `AGENTS.md`
- **Claude** (project_doc): Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `CLAUDE.md`
- **About** (project_doc): Easy, fast, and cheap LLM serving for everyone Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `README.md`
- **Benchmarks** (project_doc): This directory used to contain vLLM's benchmark scripts and utilities for performance testing and evaluation. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `benchmarks/README.md`
- **vLLM benchmark suite** (project_doc): This directory contains a benchmarking suite for developers to run locally and gain clarity on whether their PR improves/degrades vllm's performance. vLLM also maintains a continuous performance benchmark under perf.vllm.ai https://perf.vllm.ai/ , hosted under PyTorch CI HUD. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `.buildkite/performance-benchmarks/README.md`
- **vLLM Attention Benchmarking Suite** (project_doc): Fast, flexible benchmarking for vLLM attention and MLA backends with an extended batch specification grammar. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `benchmarks/attention_benchmarks/README.md`
- **Automated vLLM Server Parameter Tuning** (project_doc): Automated vLLM Server Parameter Tuning Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `benchmarks/auto_tune/README.md`
- **DeepSeek DeepGEMM Kernels Benchmark** (project_doc): DeepSeek DeepGEMM Kernels Benchmark Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `benchmarks/kernels/deepgemm/README.md`
- **Benchmark KV Cache Offloading with Multi-Turn Conversations** (project_doc): Benchmark KV Cache Offloading with Multi-Turn Conversations Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `benchmarks/multi_turn/README.md`
- **Machete Mixed Precision Cutlass-Based GEMM** (project_doc): Machete Mixed Precision Cutlass-Based GEMM Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `csrc/quantization/machete/Readme.md`
- **Offline Inference** (project_doc): The LLM class provides the primary Python interface for doing offline inference, which is interacting with a model without using a separate model inference server. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/basic/offline_inference/README.md`
- **Helm Charts** (project_doc): This directory contains a Helm chart for deploying the vllm application. The chart includes configurations for deployment, autoscaling, resource management, and more. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/deployment/chart-helm/README.md`
- **Disaggregated Encoder** (project_doc): These example scripts that demonstrate the disaggregated encoder EPD features of vLLM. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/disaggregated/disaggregated_encoder/README.md`
- **Disaggregated Serving** (project_doc): This example contains scripts that demonstrate the disaggregated serving features of vLLM. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/disaggregated/disaggregated_serving/README.md`
- **Disaggregated Prefill V1** (project_doc): This example contains scripts that demonstrate disaggregated prefill in the offline setting of vLLM. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/disaggregated/example_connector/README.md`
- **KV Load Failure Recovery Test** (project_doc): This example builds upon the example connector example in examples/disaggregated . Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/disaggregated/kv_load_failure_recovery_offline/README.md`
- **LMCache Examples** (project_doc): This folder demonstrates how to use LMCache for disaggregated prefilling, CPU offloading and KV cache sharing. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/disaggregated/lmcache/README.md`
- **Custom Logits Processors** (project_doc): This directory contains examples demonstrating how to use custom logits processors with vLLM's offline inference API. Logits processors allow you to modify the model's output distribution before sampling, enabling controlled generation behaviors like token masking, constrained decoding, and custom sampling strategies. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/features/logits_processor/README.md`
- **Offline Inference with the OpenAI Batch file format** (project_doc): Offline Inference with the OpenAI Batch file format Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/features/openai_batch/README.md`
- **Structured Outputs** (project_doc): This script demonstrates various structured output capabilities of vLLM's OpenAI-compatible server. It can run individual constraint type or all of them. It supports both streaming responses and concurrent non-streaming requests. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/features/structured_outputs/README.md`
- **Qwen2.5-Omni Offline Inference Examples** (project_doc): Qwen2.5-Omni Offline Inference Examples Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/generate/multimodal/qwen2_5_omni/README.md`
- **Monitoring Dashboards** (project_doc): This directory contains monitoring dashboard configurations for vLLM, providing comprehensive observability for your vLLM deployments. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/observability/dashboards/README.md`
- **Grafana Dashboards for vLLM Monitoring** (project_doc): Grafana Dashboards for vLLM Monitoring Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/observability/dashboards/grafana/README.md`
- **Perses Dashboards for vLLM Monitoring** (project_doc): Perses Dashboards for vLLM Monitoring Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/observability/dashboards/perses/README.md`
- **Setup OpenTelemetry POC** (project_doc): Note: The core OpenTelemetry packages opentelemetry-sdk , opentelemetry-api , opentelemetry-exporter-otlp , opentelemetry-semantic-conventions-ai are bundled with vLLM. Manual installation is not required. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/observability/opentelemetry/README.md`
- **Prometheus and Grafana** (project_doc): This is a simple example that shows you how to connect vLLM metric logging to the Prometheus/Grafana stack. For this example, we launch Prometheus and Grafana via Docker. You can checkout other methods through Prometheus https://prometheus.io/ and Grafana https://grafana.com/ websites. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/observability/prometheus_grafana/README.md`
- **Long Text Embedding with Chunked Processing** (project_doc): Long Text Embedding with Chunked Processing Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `examples/pooling/embed/openai_embedding_long_text/README.md`
- **compile test folder structure** (project_doc): - compile/test .py : various unit tests meant for testing particular code path/features. Future tests are most likely added here. New test files added here will be included in CI automatically - compile/fullgraph/ : full model tests, including all tests previously in compile/piecewise. These tests do not target particular features. New test files added here will be included in CI automatically - compile/distributed/… Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `tests/compile/README.md`
- **GPQA Evaluation using GPT-OSS** (project_doc): This directory contains GPQA evaluation tests using the GPT-OSS evaluation package and vLLM server. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `tests/evals/gpt_oss/README.md`
- **GSM8K Accuracy Evaluation** (project_doc): This directory contains a replacement for the lm-eval-harness GSM8K evaluation, using an isolated GSM8K script and vLLM server for better performance and control. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `tests/evals/gsm8k/README.md`
- **MRCR Long-Context Accuracy Evaluation** (project_doc): MRCR Long-Context Accuracy Evaluation Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `tests/evals/mrcr/README.md`
- **EPD Correctness Test** (project_doc): This test verifies that EPD Encoder-Prefill-Decode disaggregation produces identical outputs to a baseline single instance. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `tests/v1/ec_connector/integration/README.md`
- **Expert parallel kernels** (project_doc): Large-scale cluster-level expert parallel, as described in the DeepSeek-V3 Technical Report http://arxiv.org/abs/2412.19437 , is an efficient way to deploy sparse MoE models with many experts. However, such deployment requires many components beyond a normal Python package, including system package support and system driver support. It is impossible to bundle all these components into a Python package. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `tools/ep_kernels/README.md`
- **gputrc2graph.py** (project_doc): This script processes NVIDIA Nsight Systems nsys GPU trace files .nsys-rep with -t cuda tracing enabled, and generates kernel-level summaries and visualizations of GPU and non-GPU time. It is useful for profiling and analyzing nsys profile output. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `tools/profiler/nsys_profile_tools/README.md`
- **Distributed KV cache transfer** (project_doc): This folder implements distributed KV cache transfer across vLLM instances. Currently the main use case is for disaggregated prefilling. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `vllm/distributed/kv_transfer/README.md`
- **Quantization Kernel Config** (project_doc): Use scripts under benchmarks/kernels/ to generate these config files. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `vllm/model_executor/layers/quantization/utils/configs/README.md`
- **Experimental Model Runner V2** (project_doc): This directory contains the new model runner which is under active development. Ping Woosuk Kwon https://github.com/WoosukKwon for any changes. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `vllm/v1/worker/gpu/README.md`
- **Contributing to vLLM** (project_doc): You may find information about contributing to vLLM on docs.vllm.ai https://docs.vllm.ai/en/latest/contributing . Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `CONTRIBUTING.md`
- **Benchmark CLI** (project_doc): This section guides you through running benchmark tests with the extensive datasets supported on vLLM. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/benchmarking/cli.md`
- **Performance Dashboard** (project_doc): The performance dashboard is used to confirm whether new changes improve/degrade performance under various workloads. It is updated by triggering benchmark runs on every commit with both the perf-benchmarks and ready labels, and when a PR is merged into vLLM. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/benchmarking/dashboard.md`
- **Parameter Sweeps** (project_doc): vllm bench sweep is a suite of commands designed to run benchmarks across multiple configurations and compare them by visualizing the results. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/benchmarking/sweeps.md`
- **vllm bench latency** (project_doc): --8<-- "docs/generated/argparse/bench latency.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/latency.md`
- **vllm bench mm-processor** (project_doc): vllm bench mm-processor profiles the multimodal input processor pipeline of vision-language models. It measures per-stage latency from the HuggingFace processor through to the encoder forward pass, helping you identify preprocessing bottlenecks and understand how different image resolutions or item counts affect end-to-end request time. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/mm_processor.md`
- **vllm bench serve** (project_doc): --8<-- "docs/generated/argparse/bench serve.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/serve.md`
- **vllm bench sweep plot** (project_doc): --8<-- "docs/generated/argparse/bench sweep plot.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/sweep/plot.md`
- **vllm bench sweep plot pareto** (project_doc): --8<-- "docs/generated/argparse/bench sweep plot pareto.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/sweep/plot_pareto.md`
- **vllm bench sweep serve** (project_doc): --8<-- "docs/generated/argparse/bench sweep serve.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/sweep/serve.md`
- **vllm bench sweep serve workload** (project_doc): --8<-- "docs/generated/argparse/bench sweep serve workload.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/sweep/serve_workload.md`
- **vllm bench throughput** (project_doc): --8<-- "docs/generated/argparse/bench throughput.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/bench/throughput.md`
- **vllm chat** (project_doc): --8<-- "docs/generated/argparse/chat.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/chat.md`
- **vllm complete** (project_doc): --8<-- "docs/generated/argparse/complete.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/complete.md`
- **Json Tip.Inc** (project_doc): When passing JSON CLI arguments, the following sets of arguments are equivalent: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/json_tip.inc.md`
- **vllm run-batch** (project_doc): --8<-- "docs/generated/argparse/run-batch.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/run-batch.md`
- **vllm serve** (project_doc): --8<-- "docs/generated/argparse/serve.inc.md" Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/cli/serve.md`
- **Contact Us** (project_doc): Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/community/contact_us.md`
- **Meetups** (project_doc): We host regular meetups around the world. We will share the project updates from the vLLM team and have guest speakers from the industry to share their experience and insights. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/community/meetups.md`
- **Sponsors** (project_doc): vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support! Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/community/sponsors.md`
- **Conserving Memory** (project_doc): Large models might cause your machine to run out of memory OOM . Here are some options that help alleviate this problem. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/configuration/conserving_memory.md`
- **Engine Arguments** (project_doc): Engine arguments control the behavior of the vLLM engine. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/configuration/engine_args.md`
- **Environment Variables** (project_doc): vLLM uses the following environment variables to configure the system: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/configuration/env_vars.md`
- **Model Resolution** (project_doc): vLLM loads HuggingFace-compatible models by inspecting the architectures field in config.json of the model repository and finding the corresponding implementation that is registered to vLLM. Nevertheless, our model resolution may fail for the following reasons: Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/configuration/model_resolution.md`
- **Optimization and Tuning** (project_doc): This guide covers optimization strategies and performance tuning for vLLM V1. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/configuration/optimization.md`
- **Server Arguments** (project_doc): The vllm serve command is used to launch the OpenAI-compatible server. Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/configuration/serve_args.md`
- **CI Failures** (project_doc): What should I do when a CI job fails on my PR, but I don't think my PR caused the failure? Activation hint: consult when the user needs to understand project structure, installation, or boundaries. Evidence: `docs/contributing/ci/failures.md`

## Evidence Index

- 80 evidence entries indexed in total.

- **Welcome to vLLM** (documentation): light/dark vLLM logo image markup only, no prose. Evidence: `docs/README.md`
- **Summary** (documentation): API documentation for vLLM's configuration classes. Evidence: `docs/api/README.md`
- **Benchmark Suites** (documentation): vLLM provides comprehensive benchmarking tools for performance testing and evaluation: Evidence: `docs/benchmarking/README.md`
- **vLLM CLI Guide** (documentation): The vllm command-line tool is used to run and manage vLLM models. You can start by viewing the help message with: Evidence: `docs/cli/README.md`
- **Configuration Options** (documentation): This section lists the most common options for running vLLM. Evidence: `docs/configuration/README.md`
- **Contributing to vLLM** (documentation): Thank you for your interest in contributing to vLLM! Our community is open to everyone and welcomes all kinds of contributions, no matter how small or large. There are several ways you can contribute to the project: Evidence: `docs/contributing/README.md`
- **Summary** (documentation): !!! important Many decoder language models can now be automatically loaded using the Transformers modeling backend ../../models/supported models.md transformers without having to implement them in vLLM. See if vllm serve works first! Evidence: `docs/contributing/model/README.md`
- **Examples** (documentation): vLLM's examples are organized into the following categories: Evidence: `docs/examples/README.md`
- **Features** (documentation): The tables below show mutually exclusive features and the support on some hardware. Evidence: `docs/features/README.md`
- **Quantization** (documentation): Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices. Evidence: `docs/features/quantization/README.md`
- **Speculative Decoding** (documentation): This document shows how to use Speculative Decoding https://arxiv.org/pdf/2302.01318 with vLLM to reduce inter-token latency under medium-to-low QPS query per second , memory-bound workloads. Evidence: `docs/features/speculative_decoding/README.md`
- **Installation** (documentation): vLLM supports the following hardware platforms: Evidence: `docs/getting_started/installation/README.md`
- **Pooling Models** (documentation): !!! note We currently support pooling models primarily for convenience. This is not guaranteed to provide any performance improvements over using Hugging Face Transformers or Sentence Transformers directly. Evidence: `docs/models/pooling_models/README.md`
- **Weight Transfer** (documentation): vLLM provides a pluggable weight transfer system for synchronizing model weights from a training process to the inference engine during reinforcement learning RL workflows. This is essential for RLHF, GRPO, and other online RL methods where the policy model is iteratively updated during training and the updated weights must be reflected in the inference engine for rollout generation. Evidence: `docs/training/weight_transfer/README.md`
- **Using vLLM** (documentation): First, vLLM must be installed ../getting started/installation/README.md for your chosen device in either a Python or Docker environment. Evidence: `docs/usage/README.md`
- **Agent Instructions for vLLM** (documentation): These instructions apply to all AI-assisted contributions to vllm-project/vllm . Breaching these guidelines can result in automatic banning. Evidence: `AGENTS.md`
- **Claude** (documentation): @AGENTS.md Evidence: `CLAUDE.md`
- **About** (documentation): Easy, fast, and cheap LLM serving for everyone Evidence: `README.md`
- **Benchmarks** (documentation): This directory used to contain vLLM's benchmark scripts and utilities for performance testing and evaluation. Evidence: `benchmarks/README.md`
- **vLLM benchmark suite** (documentation): This directory contains a benchmarking suite for developers to run locally and gain clarity on whether their PR improves/degrades vllm's performance. vLLM also maintains a continuous performance benchmark under perf.vllm.ai https://perf.vllm.ai/ , hosted under PyTorch CI HUD. Evidence: `.buildkite/performance-benchmarks/README.md`
- **vLLM Attention Benchmarking Suite** (documentation): Fast, flexible benchmarking for vLLM attention and MLA backends with an extended batch specification grammar. Evidence: `benchmarks/attention_benchmarks/README.md`
- **Automated vLLM Server Parameter Tuning** (documentation): Automated vLLM Server Parameter Tuning Evidence: `benchmarks/auto_tune/README.md`
- **DeepSeek DeepGEMM Kernels Benchmark** (documentation): DeepSeek DeepGEMM Kernels Benchmark Evidence: `benchmarks/kernels/deepgemm/README.md`
- **Benchmark KV Cache Offloading with Multi-Turn Conversations** (documentation): Benchmark KV Cache Offloading with Multi-Turn Conversations Evidence: `benchmarks/multi_turn/README.md`
- **Machete Mixed Precision Cutlass-Based GEMM** (documentation): Machete Mixed Precision Cutlass-Based GEMM Evidence: `csrc/quantization/machete/Readme.md`
- **Offline Inference** (documentation): The LLM class provides the primary Python interface for doing offline inference, which is interacting with a model without using a separate model inference server. Evidence: `examples/basic/offline_inference/README.md`
- **Helm Charts** (documentation): This directory contains a Helm chart for deploying the vllm application. The chart includes configurations for deployment, autoscaling, resource management, and more. Evidence: `examples/deployment/chart-helm/README.md`
- **Disaggregated Encoder** (documentation): These example scripts that demonstrate the disaggregated encoder EPD features of vLLM. Evidence: `examples/disaggregated/disaggregated_encoder/README.md`
- **Disaggregated Serving** (documentation): This example contains scripts that demonstrate the disaggregated serving features of vLLM. Evidence: `examples/disaggregated/disaggregated_serving/README.md`
- **Disaggregated Prefill V1** (documentation): This example contains scripts that demonstrate disaggregated prefill in the offline setting of vLLM. Evidence: `examples/disaggregated/example_connector/README.md`
- **KV Load Failure Recovery Test** (documentation): This example builds upon the example connector example in examples/disaggregated . Evidence: `examples/disaggregated/kv_load_failure_recovery_offline/README.md`
- **LMCache Examples** (documentation): This folder demonstrates how to use LMCache for disaggregated prefilling, CPU offloading and KV cache sharing. Evidence: `examples/disaggregated/lmcache/README.md`
- **Custom Logits Processors** (documentation): This directory contains examples demonstrating how to use custom logits processors with vLLM's offline inference API. Logits processors allow you to modify the model's output distribution before sampling, enabling controlled generation behaviors like token masking, constrained decoding, and custom sampling strategies. Evidence: `examples/features/logits_processor/README.md`
- **Offline Inference with the OpenAI Batch file format** (documentation): Offline Inference with the OpenAI Batch file format Evidence: `examples/features/openai_batch/README.md`
- **Structured Outputs** (documentation): This script demonstrates various structured output capabilities of vLLM's OpenAI-compatible server. It can run individual constraint type or all of them. It supports both streaming responses and concurrent non-streaming requests. Evidence: `examples/features/structured_outputs/README.md`
- **Qwen2.5-Omni Offline Inference Examples** (documentation): Qwen2.5-Omni Offline Inference Examples Evidence: `examples/generate/multimodal/qwen2_5_omni/README.md`
- **Monitoring Dashboards** (documentation): This directory contains monitoring dashboard configurations for vLLM, providing comprehensive observability for your vLLM deployments. Evidence: `examples/observability/dashboards/README.md`
- **Grafana Dashboards for vLLM Monitoring** (documentation): Grafana Dashboards for vLLM Monitoring Evidence: `examples/observability/dashboards/grafana/README.md`
- **Perses Dashboards for vLLM Monitoring** (documentation): Perses Dashboards for vLLM Monitoring Evidence: `examples/observability/dashboards/perses/README.md`
- **Setup OpenTelemetry POC** (documentation): Note: The core OpenTelemetry packages opentelemetry-sdk , opentelemetry-api , opentelemetry-exporter-otlp , opentelemetry-semantic-conventions-ai are bundled with vLLM. Manual installation is not required. Evidence: `examples/observability/opentelemetry/README.md`
- **Prometheus and Grafana** (documentation): This is a simple example that shows you how to connect vLLM metric logging to the Prometheus/Grafana stack. For this example, we launch Prometheus and Grafana via Docker. You can checkout other methods through Prometheus https://prometheus.io/ and Grafana https://grafana.com/ websites. Evidence: `examples/observability/prometheus_grafana/README.md`
- **Long Text Embedding with Chunked Processing** (documentation): Long Text Embedding with Chunked Processing Evidence: `examples/pooling/embed/openai_embedding_long_text/README.md`
- **compile test folder structure** (documentation): - compile/test .py : various unit tests meant for testing particular code path/features. Future tests are most likely added here. New test files added here will be included in CI automatically - compile/fullgraph/ : full model tests, including all tests previously in compile/piecewise. These tests do not target particular features. New test files added here will be included in CI automatically - compile/distributed/ : tests that require multiple GPUs. New test files added here will NOT be included in CI automatically as these tests generally need to be manually configured to run in runners with particular number/type of GPUs. Evidence: `tests/compile/README.md`
- **GPQA Evaluation using GPT-OSS** (documentation): This directory contains GPQA evaluation tests using the GPT-OSS evaluation package and vLLM server. Evidence: `tests/evals/gpt_oss/README.md`
- **GSM8K Accuracy Evaluation** (documentation): This directory contains a replacement for the lm-eval-harness GSM8K evaluation, using an isolated GSM8K script and vLLM server for better performance and control. Evidence: `tests/evals/gsm8k/README.md`
- **MRCR Long-Context Accuracy Evaluation** (documentation): MRCR Long-Context Accuracy Evaluation Evidence: `tests/evals/mrcr/README.md`
- **EPD Correctness Test** (documentation): This test verifies that EPD Encoder-Prefill-Decode disaggregation produces identical outputs to a baseline single instance. Evidence: `tests/v1/ec_connector/integration/README.md`
- **Expert parallel kernels** (documentation): Large-scale cluster-level expert parallel, as described in the DeepSeek-V3 Technical Report http://arxiv.org/abs/2412.19437 , is an efficient way to deploy sparse MoE models with many experts. However, such deployment requires many components beyond a normal Python package, including system package support and system driver support. It is impossible to bundle all these components into a Python package. Evidence: `tools/ep_kernels/README.md`
- **gputrc2graph.py** (documentation): This script processes NVIDIA Nsight Systems nsys GPU trace files .nsys-rep with -t cuda tracing enabled, and generates kernel-level summaries and visualizations of GPU and non-GPU time. It is useful for profiling and analyzing nsys profile output. Evidence: `tools/profiler/nsys_profile_tools/README.md`
- **Distributed KV cache transfer** (documentation): This folder implements distributed KV cache transfer across vLLM instances. Currently the main use case is for disaggregated prefilling. Evidence: `vllm/distributed/kv_transfer/README.md`
- **Quantization Kernel Config** (documentation): Use scripts under benchmarks/kernels/ to generate these config files. Evidence: `vllm/model_executor/layers/quantization/utils/configs/README.md`
- **Experimental Model Runner V2** (documentation): This directory contains the new model runner which is under active development. Ping Woosuk Kwon https://github.com/WoosukKwon for any changes. Evidence: `vllm/v1/worker/gpu/README.md`
- **Contributing to vLLM** (documentation): You may find information about contributing to vLLM on docs.vllm.ai https://docs.vllm.ai/en/latest/contributing . Evidence: `CONTRIBUTING.md`
- **License** (source_file): Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ Evidence: `LICENSE`
- **Benchmark CLI** (documentation): This section guides you through running benchmark tests with the extensive datasets supported on vLLM. Evidence: `docs/benchmarking/cli.md`
- **Performance Dashboard** (documentation): The performance dashboard is used to confirm whether new changes improve/degrade performance under various workloads. It is updated by triggering benchmark runs on every commit with both the perf-benchmarks and ready labels, and when a PR is merged into vLLM. Evidence: `docs/benchmarking/dashboard.md`
- **Parameter Sweeps** (documentation): vllm bench sweep is a suite of commands designed to run benchmarks across multiple configurations and compare them by visualizing the results. Evidence: `docs/benchmarking/sweeps.md`
- **vllm bench latency** (documentation): --8<-- "docs/generated/argparse/bench latency.inc.md" Evidence: `docs/cli/bench/latency.md`
- **vllm bench mm-processor** (documentation): vllm bench mm-processor profiles the multimodal input processor pipeline of vision-language models. It measures per-stage latency from the HuggingFace processor through to the encoder forward pass, helping you identify preprocessing bottlenecks and understand how different image resolutions or item counts affect end-to-end request time. Evidence: `docs/cli/bench/mm_processor.md`
- **vllm bench serve** (documentation): --8<-- "docs/generated/argparse/bench serve.inc.md" Evidence: `docs/cli/bench/serve.md`
- The remaining 20 evidence entries are in `AI_CONTEXT_PACK.json` or `EVIDENCE_INDEX.json`.

## Rules the Host AI Must Follow

- **Treat this asset as pre-work context, not a runtime environment.** The AI Context Pack contains only evidence-backed project understanding, not any executable state of the target project. Evidence: `docs/README.md`, `docs/api/README.md`, `docs/benchmarking/README.md`
- **When answering users, separate previewable content from content that can only be verified after installation.** The consumer value of a pre-install experience comes from reducing mis-installs and misjudgments, not from posing as a real run. Evidence: `docs/README.md`, `docs/api/README.md`, `docs/benchmarking/README.md`

## Questions the User Should Answer Before Starting

- Which host AI or local environment do you plan to use it in?
- Do you just want to experience the workflow first, or are you preparing a real install?
- Do you care most about install cost, output quality, or conflicts with existing rules?

## Acceptance Criteria

- Every capability statement can be traced back to a file path in evidence_refs.
- AI_CONTEXT_PACK.md does not package the preview as a real run.
- Within 3 minutes, a user can grasp who it suits, what it can do, how to start, and the risk boundaries.

---

## Doramagic Context Augmentation

The content below reinforces the main Repomix/AI Context Pack body. The Human Manual only provides a reading skeleton; the pitfall log is converted into working constraints the host AI must follow.

## Human Manual Skeleton

Usage rule: this is only a reading route and a salience signal for the project, not a factual authority. Concrete facts must still come from repo evidence / the Claim Graph.

Hard rules for the host AI:
- Do not treat page titles, section order, summaries, or importance as evidence of project facts.
- When explaining the Human Manual skeleton, state explicitly that it is only a reading route / salience signal.
- Capability, installation, compatibility, runtime-status, and risk judgments must cite repo evidence, source paths, or the Claim Graph.

- **vLLM Overview**: importance `high`
  - source_paths: README.md, pyproject.toml, vllm/__init__.py, vllm/version.py
- **Getting Started**: importance `high`
  - source_paths: docs/getting_started/installation/README.md, docs/getting_started/quickstart.md, requirements/common.txt, requirements/cuda.txt, setup.py
- **Core Engine Architecture**: importance `high`
  - source_paths: vllm/engine/llm_engine.py, vllm/engine/async_llm_engine.py, vllm/v1/engine/core.py, vllm/v1/engine/async_llm.py, vllm/v1/engine/llm_engine.py
- **Model Executor and Worker Architecture**: importance `high`
  - source_paths: vllm/v1/worker/gpu_model_runner.py, vllm/v1/worker/worker_base.py, vllm/v1/worker/gpu_worker.py, vllm/model_executor/model_loader/__init__.py, vllm/model_executor/models/__init__.py
- **Scheduling and Request Processing**: importance `high`
  - source_paths: vllm/v1/core/sched/scheduler.py, vllm/v1/core/sched/request_queue.py, vllm/v1/core/sched/async_scheduler.py, vllm/v1/request.py, vllm/config/scheduler.py
- **PagedAttention and KV Cache Management**: importance `high`
  - source_paths: csrc/attention/paged_attention_v1.cu, csrc/attention/paged_attention_v2.cu, vllm/v1/core/kv_cache_manager.py, vllm/v1/core/block_pool.py, docs/design/paged_attention.md
- **Attention Backends and Kernels**: importance `medium`
  - source_paths: vllm/v1/attention/backends/flash_attn.py, vllm/v1/attention/backends/flashinfer.py, vllm/v1/attention/backends/mla/flashmla.py, vllm/v1/attention/backends/registry.py, docs/design/attention_backends.md
- **Quantization Support**: importance `medium`
  - source_paths: vllm/model_executor/layers/quantization/fp8.py, vllm/model_executor/layers/quantization/base_config.py, vllm/model_executor/layers/quantization/gguf.py, docs/features/quantization/README.md, csrc/quantization

## Repo Inspection Evidence

- repo_clone_verified: true
- repo_inspection_verified: true
- repo_commit: `bd9dbe60601c986b50260f299fe279d057d7d89f`
- inspected_files: `pyproject.toml`, `README.md`, `docs/.nav.yml`, `docs/README.md`, `docs/configuration/serve_args.md`, `docs/configuration/optimization.md`, `docs/configuration/conserving_memory.md`, `docs/configuration/model_resolution.md`, `docs/configuration/README.md`, `docs/configuration/env_vars.md`, `docs/configuration/engine_args.md`, `docs/cli/run-batch.md`, `docs/cli/chat.md`, `docs/cli/complete.md`, `docs/cli/.nav.yml`, `docs/cli/json_tip.inc.md`, `docs/cli/.meta.yml`, `docs/cli/README.md`, `docs/cli/serve.md`, `docs/api/README.md`

Hard rules for the host AI:
- Without repo_clone_verified=true, do not claim the source code has been read.
- Without repo_inspection_verified=true, do not write judgments from README/docs/package files as facts.
- Without quick_start_verified=true, do not claim the Quick Start has been run successfully. (A commit-verification sketch follows below.)
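
A small sketch, under the rules above, of how a reader could confirm they are inspecting the same snapshot this pack recorded. The clone URL comes from the evidence entries in this document; everything else is standard git.

```bash
# Confirm the recorded inspection commit exists in a fresh clone.
git clone https://github.com/vllm-project/vllm && cd vllm
git cat-file -e bd9dbe60601c986b50260f299fe279d057d7d89f \
  && echo "inspected commit present" \
  || echo "commit not found; the pack may predate a history change"
```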

## Doramagic Pitfall Constraints

These rules come from project-specific pitfalls found during Doramagic discovery, verification, or compilation. The host AI must treat them as working constraints, not ordinary explanatory text.

### Constraint 1: Source evidence: [Bug]: Qwen3.5-397B-NVFP4 Disagg accuracy gsm8k collapses with async scheduling

- Trigger: GitHub community evidence shows an install-related issue pending verification for this project: [Bug]: Qwen3.5-397B-NVFP4 Disagg accuracy gsm8k collapses with async scheduling
- Host AI rule: the source issue is still open; the Pack Agent must re-check whether it still affects the current version.
- Why it matters: it may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_1a71634c530044a68b9160080d55de0a | https://github.com/vllm-project/vllm/issues/42182 | the source discussion mentions python-related conditions; re-check before install/trial.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 2: Source evidence: [Bug]: vLLM v1 with prefix caching: first request differs from subsequent identical requests at temperature=0

- Trigger: GitHub community evidence shows an install-related issue pending verification for this project: [Bug]: vLLM v1 with prefix caching: first request differs from subsequent identical requests at temperature=0
- Host AI rule: the source issue is still open; the Pack Agent must re-check whether it still affects the current version.
- Why it matters: it may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_58327949a4524ed082bd189b53f713a1 | https://github.com/vllm-project/vllm/issues/40896 | the source discussion mentions python-related conditions; re-check before install/trial.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 3: Source evidence: [Usage]: How to proactively clear CPU-resident memory left behind by unloaded LoRA adapters after calling `/v1/unload_l…

- Trigger: GitHub community evidence shows an install-related issue pending verification for this project: [Usage]: How to proactively clear CPU-resident memory left behind by unloaded LoRA adapters after calling `/v1/unload_lora_adapter`?
- Host AI rule: the source issue is still open; the Pack Agent must re-check whether it still affects the current version.
- Why it matters: it may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_fb1461834fe34049bd05182574d3e5e5 | https://github.com/vllm-project/vllm/issues/42207 | the source discussion mentions docker-related conditions; re-check before install/trial.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 4: Source evidence: v0.18.1

- Trigger: GitHub community evidence shows an install-related issue pending verification for this project: v0.18.1
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Why it matters: it may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_317a03f9de4e459f9be42064c7318b2c | https://github.com/vllm-project/vllm/releases/tag/v0.18.1 | the source discussion mentions python-related conditions; re-check before install/trial.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 5: Source evidence: [Feature]: Qwen3.5-Moe LoRA Support (experts)

- Trigger: GitHub community evidence shows a capability-understanding issue pending verification for this project: [Feature]: Qwen3.5-Moe LoRA Support (experts)
- Host AI rule: the source issue is still open; the Pack Agent must re-check whether it still affects the current version.
- Why it matters: it may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_2d068d43c6654f3cab6b48bf98dad116 | https://github.com/vllm-project/vllm/issues/40005 | usage conditions pending verification, surfaced by source type github_issue.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 6: Capability judgment rests on assumptions

- Trigger: README/documentation is current enough for a first validation pass.
- Host AI rule: convert the assumptions into a downstream verification checklist.
- Why it matters: if the assumptions do not hold, the user does not get the promised capability.
- Evidence: capability.assumptions | github_repo:599547518 | https://github.com/vllm-project/vllm | README/documentation is current enough for a first validation pass.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 7: Source evidence: v0.20.2

- Trigger: GitHub community evidence shows a runtime-related issue pending verification for this project: v0.20.2
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Why it matters: it may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_ecf37722dff6494c82b384225e34bcb0 | https://github.com/vllm-project/vllm/releases/tag/v0.20.2 | usage conditions pending verification, surfaced by source type github_release.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 8: Maintenance activity unknown

- Trigger: last_activity_observed was not recorded.
- Host AI rule: supplement with recent GitHub commit, release, and issue/PR response signals.
- Why it matters: new, stalled, and active projects get mixed together, lowering trust in recommendations.
- Evidence: evidence.maintainer_signals | github_repo:599547518 | https://github.com/vllm-project/vllm | last_activity_observed missing
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 9: Downstream validation found a risk item

- Trigger: no_demo
- Host AI rule: route into the security/permission governance review queue.
- Why it matters: downstream has already requested a review; do not downplay it on the page.
- Evidence: downstream_validation.risk_items | github_repo:599547518 | https://github.com/vllm-project/vllm | no_demo; severity=medium
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 10: Security notes exist

- Trigger: No sandbox install has been executed yet; downstream must verify before user use.
- Host AI rule: convert into an explicit permission checklist and security-review prompts.
- Why it matters: before installing, users need to know the permission boundaries and sensitive operations.
- Evidence: risks.safety_notes | github_repo:599547518 | https://github.com/vllm-project/vllm | No sandbox install has been executed yet; downstream must verify before user use.
- Hard boundary: do not package this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.
