# peft - Doramagic AI Context Pack

> Positioning: a pre-install experience and decision asset. It helps the host AI get off to a good start, but it does not mean the target project has been installed, executed, or verified.

## The Sufficiency Principle

- **Sufficiency, not compression**: an AI Context Pack should be sufficient for the host AI to understand the project's value, capability boundaries, entry points, risks, and evidence sources before work begins; it may be organized in layers, but the shortest possible summary is not the goal.
- **Compression policy**: compress only noise and repetition, never context that affects judgment or the quality of getting started.

## How the Host AI Should Use This

You are reading the AI Context Pack that Doramagic compiled for peft. Treat it as pre-work context: it helps the user understand who the project suits, what it can do, how to get started, what must be verified after installation, and where the risks are. Do not claim that you have installed, run, or executed the target project.

## Claim Consumption Rules

- **Source of facts**: Repo Evidence plus the Claim/Evidence Graph; the Human Wiki only supplies salience, terminology, and narrative structure.
- **Minimum status for a fact**: `supported`
- `supported`: may be used as a project fact, but the answer must cite the claim_id and evidence paths.
- `weak`: may only be used as a low-confidence lead, and the user must be asked to keep verifying.
- `inferred`: may only be used for risk notes or open questions, never packaged as a project fact.
- `unverified`: must not be used as a fact; state plainly that the evidence is insufficient.
- `contradicted`: the conflicting sources must be shown; never pick one version on the user's behalf. (These statuses are sketched as a consumption loop after this list.)
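
To make the statuses above mechanical, here is a minimal sketch of how a host AI might gate facts when reading the pack. The schema field names (`claims`, `status`, `claim_id`, `evidence_refs`) are assumptions about the `AI_CONTEXT_PACK.json` layout, not a documented interface.

```python
# Hypothetical sketch: gate project facts by claim status when reading
# AI_CONTEXT_PACK.json. The field names are assumptions, not a spec.
import json

def consume_claims(pack_path: str):
    with open(pack_path, encoding="utf-8") as f:
        pack = json.load(f)
    for claim in pack.get("claims", []):
        status = claim.get("status", "unverified")
        cid = claim.get("claim_id", "<unknown>")
        refs = claim.get("evidence_refs", [])
        if status == "supported":
            # Usable as a project fact, but answers must cite claim_id + evidence.
            yield ("fact", cid, refs)
        elif status == "weak":
            yield ("low_confidence_lead", cid, refs)   # user must keep verifying
        elif status == "inferred":
            yield ("risk_note", cid, refs)             # never packaged as a fact
        elif status == "contradicted":
            yield ("show_conflicting_sources", cid, refs)
        else:  # "unverified" or anything unknown
            yield ("insufficient_evidence", cid, refs)
```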

## Who It Suits Best

- **Users who want to understand an open-source project's value and boundaries before installing**: current evidence comes mainly from project documentation. Evidence: `README.md` Claim: `clm_0002` supported 0.86

## What It Can Do

- **Command-line startup or installation flow** (requires post-install verification): the project documentation contains executable commands; real use requires running them in a local or host environment. Evidence: `README.md`, `docs/source/install.md` Claim: `clm_0001` supported 0.86

## How to Get Started

- `pip install peft` Evidence: `README.md` Claim: `clm_0003` supported 0.86
- `pip install git+https://github.com/huggingface/peft` Evidence: `docs/source/install.md` Claim: `clm_0004` supported 0.86
- `git clone https://github.com/huggingface/peft` Evidence: `docs/source/install.md` Claim: `clm_0005` supported 0.86
- `pip install -e .[test]` Evidence: `docs/source/install.md` Claim: `clm_0006` supported 0.86 (a hedged usage sketch follows this list)
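
If the install commands above check out in an isolated environment, the typical first step with peft is wrapping a base model in an adapter config. A hedged sketch using the public `peft` API; the base model name is illustrative and nothing here is claimed from this pack's evidence.

```python
# Minimal sketch of the documented peft entry points (LoraConfig, get_peft_model).
# Run only after verifying the install commands above in an isolated environment.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # illustrative model
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction should be trainable
```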

## Decision Card Before Proceeding

- **Current recommendation**: start with a role-matching trial
- **Why**: this project presents itself more like a role library; the core risk is picking the wrong role or mistaking role copy for execution capability. Try role matching with Prompt Preview first, then decide whether to import into a sandbox.

### 30-Second Judgment

- **What to do now**: start with a role-matching trial
- **Minimal safe next step**: try role matching with Prompt Preview first; import into an isolated environment only once satisfied
- **Don't trust yet**: role quality and task fit cannot be taken on faith.
- **Proceeding will touch**: role selection bias, command execution, the local environment, or project files

### What You Can Trust Now

- **Audience signal: users who want to understand an open-source project's value and boundaries before installing** (supported): backed by a supported claim or project evidence, but still not equivalent to real post-install results. Evidence: `README.md` Claim: `clm_0002` supported 0.86
- **Capability exists: command-line startup or installation flow** (supported): you can trust that the project shows signals of this capability; whether it fits your specific task still requires a trial or post-install verification. Evidence: `README.md`, `docs/source/install.md` Claim: `clm_0001` supported 0.86
- **Quick Start / install command signals exist** (supported): you can trust that the project documentation contains startup or install entry points; do not run them in your primary environment on that basis alone. Evidence: `README.md` Claim: `clm_0003` supported 0.86

### What You Cannot Trust Yet

- **Role quality and task fit cannot be taken on faith.** (unverified): a role library proves there are many roles, not that every role fits your specific task or that any role produces high-quality results.
- **Role copy must not be mistaken for real execution capability.** (unverified): before installation you can only judge whether role descriptions match your task profile, not whether it can complete the task inside the host AI.
- **Real output quality cannot be trusted before installation.** (unverified): Prompt Preview can only show how it guides; it cannot prove result quality in the real project.
- **Host AI version compatibility cannot be trusted before installation.** (unverified): loading rules and version differences across hosts such as Claude, Cursor, Codex, and Gemini must be verified in the real environment.
- **"It won't pollute existing host AI behavior" cannot be trusted outright.** (inferred): Skill, plugin, and AGENTS/CLAUDE/GEMINI instructions may change the host AI's default behavior.
- **Safe rollback cannot be assumed.** (unverified): unless the project explicitly documents uninstall and restore steps, verify in an isolated environment first.
- **After a real install, is it compatible with the user's current host AI version?** (unverified): compatibility can only be verified in the actual host environment.
- **Does the project's output quality satisfy the user's specific task?** (unverified): a pre-install preview can only show flow and boundaries; it is no substitute for real evaluation.

### What Proceeding Will Touch

- **Role selection bias**: the user's judgment about which expert role should handle the task. Reason: picking the wrong role makes the AI answer from the wrong professional lens, wasting time or misleading decisions.
- **Command execution**: the package manager, network downloads, local plugin directories, project configuration, or the user's home directory. Reason: the very first command can change the environment; decide whether it is worth running before you run it. Evidence: `README.md`, `docs/source/install.md`
- **Local environment or project files**: install artifacts, plugin caches, project configuration, or local dependency directories. Reason: before installation the write scope and rollback path cannot be proven, so verify in isolation. Evidence: `README.md`, `docs/source/install.md`
- **Host AI context**: the AI Context Pack, Prompt Preview, Skill routing, risk rules, and project facts. Reason: imported context shapes the host AI's subsequent judgment; never package unverified items as facts.

### Minimal Safe Next Steps

- **Run Prompt Preview first**: use the interactive trial to validate the task profile and role match before importing the whole role library. (Applies to: any project, especially when output quality is unknown.)
- **Trial-install only in an isolated directory or test account**: keep install commands from polluting your primary host AI, real projects, or home directory. (Applies to: any signals of command execution, plugin configuration, or local writes.)
- **After installing, verify one minimal task only**: confirm loading, compatibility, output quality, and rollback before committing further. (Applies to: moving from trial into a real workflow.)

### Exit Paths

- **Preserve the pre-install state**: record the original host configuration and project state so recoverability can be judged later.
- **Keep a record of the original role choice**: if output drifts off-topic, return to the task-profiling stage and reselect the role instead of pushing on with the wrong one.
- **Log install commands and write paths**: without explicit uninstall docs, at least know which directories or configs need manual cleanup.
- **No rollback path, no primary environment**: an unrollbackable change is a blocker, not something to push through on trust or luck.

## What Can Only Be Previewed

- Explaining who the project suits and what it can do
- Demonstrating typical conversation flows based on the project documentation
- Helping the user decide whether it is worth installing or studying further

## What Must Be Verified After Installation

- Actually installing the Skill, plugin, or CLI
- Executing scripts, modifying local files, or accessing external services
- Verifying real output quality, performance, and compatibility

## Boundary and Risk Card

- **Mistaking the pre-install preview for a real run**: users may overestimate how much configuration, permission, and compatibility verification the project has already done. Handling: clearly separate prompt_preview_can_do from runtime_required. Claim: `clm_0007` inferred 0.45
- **Command execution modifies the local environment**: install commands may write to the home directory, the host's plugin directory, or project configuration. Handling: run in an isolated environment or test account first. Evidence: `README.md`, `docs/source/install.md` Claim: `clm_0008` supported 0.86
- **To confirm**: after a real install, is it compatible with the user's current host AI version? Reason: compatibility can only be verified in the actual host environment.
- **To confirm**: does the project's output quality satisfy the user's specific task? Reason: a pre-install preview shows flow and boundaries only; it cannot replace real evaluation.
- **To confirm**: do the install commands need network access, elevated permissions, or global writes? Reason: this shapes install risk in both enterprise and personal environments.

## Pre-Work Context

### Loading Order

- Read how_to_use.host_ai_instruction first to establish the boundaries of this pre-install decision asset.
- Read claim_graph_summary to confirm facts come from the Claim/Evidence Graph, not the Human Wiki narrative.
- Then read intended_users, capabilities, and quick_start_candidates to judge whether the user is a match.
- For concrete tasks, consult role_skill_index first, then evidence_index.
- For real installation, file modification, network access, performance, or compatibility questions, switch to risk_card and boundaries.runtime_required.

### Task Routing

- **Command-line startup or installation flow**: first state that this capability is verified post-install, then give the pre-install checklist. Boundary: must be verified by a real install or run. Evidence: `README.md`, `docs/source/install.md` Claim: `clm_0001` supported 0.86

### Context Scale

- Total files: 768
- Important-file coverage: 40/768
- Evidence index entries: 80
- Role / Skill entries: 79

### Handling Insufficient Evidence

- **missing_evidence**: state that evidence is insufficient and ask the user for target files, README sections, or post-install verification records; do not fill in facts.
- **out_of_scope_request**: state that the task exceeds the current AI Context Pack's evidence scope and suggest the Human Manual or real post-install verification first.
- **runtime_request**: give the pre-install checklist and command sources, but never execute commands for the user or claim they were executed.
- **source_conflict**: show the conflicting sources side by side, mark them as pending verification, and never pick one version. (These four routes are sketched as a dispatch table below.)
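
The four fallbacks behave like a fixed dispatch table. A hypothetical sketch; the keys mirror the list above and the handler strings paraphrase its rules, and nothing here is a documented Doramagic API.

```python
# Hypothetical routing table for the four insufficient-evidence situations.
# Keys come from the list above; handler texts paraphrase its rules.
FALLBACKS = {
    "missing_evidence": (
        "State that evidence is insufficient; ask for target files, README "
        "sections, or post-install verification records. Do not fill in facts."
    ),
    "out_of_scope_request": (
        "Say the task exceeds the pack's evidence scope; suggest the Human "
        "Manual or real post-install verification first."
    ),
    "runtime_request": (
        "Provide the pre-install checklist and command sources, but never "
        "execute commands or claim they were executed."
    ),
    "source_conflict": (
        "Show conflicting sources side by side, mark them pending "
        "verification, and never pick one version for the user."
    ),
}

def handle(situation: str) -> str:
    # Unknown situations degrade to the most conservative route.
    return FALLBACKS.get(situation, FALLBACKS["missing_evidence"])
```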

## Prompt Recipes

### Fit Assessment

- Goal: judge whether this project fits the user's current task.
- Expected output: a fit verdict, key reasons, evidence citations, what can be previewed pre-install, what must be verified post-install, and a next step.

```text
Based on peft's AI Context Pack, first ask me 3 essential questions, then judge whether it fits my task. Your answer must cover: who it suits, what it can do, what it cannot do, whether it is worth installing, and where the evidence comes from. Every project fact must cite evidence_refs, source_paths, or a claim_id.
```

### Pre-Install Experience

- Goal: let the user feel the core workflow before installing, without dressing the preview up as real capability or a marketing promise.
- Expected output: an experience script with boundary labels, a post-install verification checklist, and a cautious recommendation; no real-run claims or hard-sell language.

```text
Treat peft as a pre-install experience asset, not an installed tool or a live runtime.

Output exactly four parts:
1. First ask me 3 essential questions.
2. Give an "experience script": use the labels [previewable pre-install], [must verify post-install], and [insufficient evidence] to show how it might guide a workflow.
3. Give a post-install verification checklist: which capabilities can only be confirmed after a real install, real host loading, and a real project run.
4. Give a cautious recommendation: only "worth further study / trial install", "gather more information first", or "do not proceed"; never endorse the project.

Hard boundaries:
- Never claim to have installed, run, executed tests, modified files, or produced real results.
- Never write promissory phrases such as "auto-adapts", "guaranteed to pass", "perfect fit", or "strongly recommend installing".
- Any description of post-install behavior must use conditionals: "if the install succeeds and the host loads the Skill correctly, it might...".
- The experience script may only contain "sample lines / hypothetical flow": use "might ask / might suggest / might show", never "has written / has generated / has passed / is running / is generating".
- Prompt Preview does not hand out install commands; if the user wants a trial install, only point them to the Quick Start and Risk Card and to verification in an isolated environment.
- Every project fact must come from a supported claim, evidence_refs, or source_paths; inferred/unverified items may appear only as risks or open questions.
```

### Role / Skill Selection

- Goal: pick the best-matching assets from the project's roles or Skills.
- Expected output: a list of candidate roles or Skills, each with applicable scenarios, evidence paths, risk boundaries, and whether post-install verification is needed.

```text
Read role_skill_index and recommend the 3-5 roles or Skills most relevant to my target task. For each recommendation, state the applicable scenario, likely output, risk boundaries, and evidence_refs.
```

### Risk Pre-Check

- Goal: before installing or importing, identify environment, permission, rule-conflict, and quality risks.
- Expected output: a checklist covering environment, permissions, dependencies, licensing, host conflicts, quality risks, and unknowns.

```text
Based on risk_card, boundaries, and quick_start_candidates, give me a pre-install risk checklist. Do not execute commands for me; just tell me what to check, why, and what the impact of a failure would be.
```

### Host AI Kickoff Instruction

- Goal: turn the project context into a host AI instruction for the start of a conversation.
- Expected output: a pre-work instruction with clear boundaries and explicit evidence citations, ready to paste into a host AI.

```text
Based on peft's AI Context Pack, generate a pre-work instruction I can paste into a host AI. It must honor not_runtime=true and must not claim the project has been installed, run, or has produced real results.
```


## Role / Skill Index

- 79 roles / Skills / project documentation entries indexed. Every entry below shares the same activation hint: consult when the user needs to understand the project's structure, installation, or boundaries.

- **Generating the documentation** (project_doc). Evidence: `docs/README.md`
- **Contribute to PEFT** (project_doc). Evidence: `docs/source/developer_guides/contributing.md`
- **Installation** (project_doc). Evidence: `docs/source/install.md`
- **Quickstart** (project_doc). Evidence: `README.md`
- **PEFT Docker images** (project_doc): Here we store all PEFT Docker images used in our testing infrastructure. We use python 3.11 for now on all our images. Evidence: `docker/README.md`
- **Comparison of PEFT Methods** (project_doc): The goal of this project is to provide replicable experiments that produce outcomes allowing us to compare different PEFT methods with one another. This gives you more information to make an informed decision about which methods best fit your use case and what trade-offs to expect. Evidence: `method_comparison/README.md`
- **KappaTune Experiment** (project_doc): This script compares different fine-tuning strategies on a downstream task (gsm8k) while measuring catastrophic forgetting on a general-knowledge control dataset (WikiText). For further details see the KappaTune paper: https://arxiv.org/abs/2506.16289. Evidence: `examples/KappaTune/README.md`
- **AdaMSS Fine-tuning** (project_doc): AdaMSS (Adaptive Matrix Decomposition with Subspace Selection) is a parameter-efficient fine-tuning method that decomposes weight matrices using SVD into low-rank subspaces. It uses only ~0.07% of the original trainable parameters (e.g., 59K for ViT-Base vs 86M for full fine-tuning) while maintaining competitive performance. Evidence: `examples/adamss_finetuning/README.md`
- **Activated LoRA (aLoRA)** (project_doc): Activated LoRA (aLoRA) is an adapter that selectively activates its weights only after a given invocation sequence, ensuring that hidden states match the base model prior to this point. This allows reusing the base model KVs stored in the KV cache for tokens before the invocation, enabling much faster real-world inference (e.g. vLLM) when switching between generation with the base model and generation with… Evidence: `examples/alora_finetuning/README.md`
- **BD-LoRA Finetuning** (project_doc): Block-Diagonal LoRA (BD-LoRA) is a LoRA variant in which some LoRA factors are constrained to be block-diagonal. This allows faster serving by eliminating communication overheads when running inference on multiple GPUs, at the same finetuning performance as vanilla LoRA. Evidence: `examples/bdlora_finetuning/README.md`
- **BEFT: Bias-Efficient Fine-Tuning of Language Models in Low-Data Regimes** (project_doc). Evidence: `examples/beft_finetuning/README.md`
- **CARTRIDGE self-study distillation example** (project_doc). Evidence: `examples/cartridge_self_study/README.md`
- **CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning** (project_doc). Evidence: `examples/corda_finetuning/README.md`
- **Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods** (project_doc): Paper: https://huggingface.co/papers/2410.17222; code: https://github.com/tsachiblau/Context-aware-Prompt-Tuning-Advancing-In-Context-Learning-with-Adversarial-Methods; notebook: cpt train and inference.ipynb; Colab: https://colab.research.google.com/drive/1UhQDVhZ9bDlSk1551SuJV8tIUmlIayta?usp=sharing Evidence: `examples/cpt_finetuning/README.md`
- **DeLoRA: Decoupled Low-Rank Adaptation** (project_doc). Evidence: `examples/delora_finetuning/README.md`
- **DoRA: Weight-Decomposed Low-Rank Adaptation** (project_doc). Evidence: `examples/dora_finetuning/README.md`
- **EVA: Explained Variance Adaptation** (project_doc): Paper: https://huggingface.co/papers/2410.07170; code: https://github.com/ml-jku/EVA. Explained Variance Adaptation (EVA) is a novel initialization method for LoRA-style adapters which initializes adapter weights in a data-driven manner and adaptively allocates ranks according to the variance they explain. EVA improves average performance on a multitude of tasks across var… Evidence: `examples/eva_finetuning/README.md`
- **GraLoRA: Granular Low-Rank Adaptation** (project_doc). Evidence: `examples/gralora_finetuning/README.md`
- **HiRA causal language modeling fine-tuning** (project_doc). Evidence: `examples/hira_finetuning/README.md`
- **DreamBooth fine-tuning with HRA** (project_doc). Evidence: `examples/hra_dreambooth/README.md`
- **Fine-tuning for image classification using LoRA and 🤗 PEFT** (project_doc). Evidence: `examples/image_classification/README.md`
- **Lily: Low-Rank Interconnected Adaptation Across Layers** (project_doc). Evidence: `examples/lily_finetuning/README.md`
- **LoftQ: LoRA-fine-tuning-aware Quantization** (project_doc). Evidence: `examples/loftq_finetuning/README.md`
- **Transformer Engine ESM2 LoRA Fine-Tuning** (project_doc). Evidence: `examples/lora_finetuning_transformer_engine/README.md`
- **LoRA-GA: Low-Rank Adaptation with Gradient Approximation** (project_doc). Evidence: `examples/lora_ga_finetuning/README.md`
- **LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning** (project_doc). Evidence: `examples/lorafa_finetune/README.md`
- **MiSS: Balancing LoRA Performance and Efficiency with Simple Shard Sharing** (project_doc): Paper: https://huggingface.co/papers/2409.15371; code: https://github.com/JL-er/MiSS. MiSS (Matrix Shard Sharing) is a novel PEFT method that adopts a low-rank structure, requires only a single trainable matrix, and introduces a new update mechanism distinct from LoRA, achieving an excellent balance between performance and efficiency. Evidence: `examples/miss_finetuning/README.md`
- **Fine-tuning a multilayer perceptron using LoRA and 🤗 PEFT** (project_doc). Evidence: `examples/multilayer_perceptron/README.md`
- **OLoRA: Orthonormal Low Rank Adaptation of Large Language Models** (project_doc). Evidence: `examples/olora_finetuning/README.md`
- **Orthogonal Subspace Fine-tuning (OSF) - Continual Learning Example** (project_doc). Evidence: `examples/orthogonal_subspace_learning/README.md`
- **PEANuT: Parameter-Efficient Adaptation with Weight-aware Neural Tweakers** (project_doc). Evidence: `examples/peanut_finetuning/README.md`
- **PiSSA: Principal Singular values and Singular vectors Adaptation** (project_doc): Paper: https://huggingface.co/papers/2404.02948; code: https://github.com/GraphPKU/PiSSA. PiSSA represents a matrix $W\in\mathbb{R}^{m\times n}$ within the model by the product of two trainable matrices $A \in \mathbb{R}^{m\times r}$ and $B \in \mathbb{R}^{r\times n}$, where $r \ll \min(m, n)$, plus a residual matrix $W^{res}\in\mathbb{R}^{m… Evidence: `examples/pissa_finetuning/README.md`
- **Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation (PSOFT)** (project_doc): Paper: https://huggingface.co/papers/2505.11235; code: https://github.com/fei407/PSOFT. PSOFT aims to preserve the geometric relationships among pre-trained weight column vectors—a core principle of OFT—while achieving a balanced trade-off across parameter, computation, and memory efficiency. Unlike existing OFT variants (e.g., OFTv2… Evidence: `examples/psoft_finetuning/README.md`
- **Generating confidence intervals with PVeRA** (project_doc). Evidence: `examples/pvera/README.md`
- **QALoRA: Quantization-Aware Low-Rank Adaptation** (project_doc). Evidence: `examples/qalora_finetuning/README.md`
- **RandLora: Full-rank parameter-efficient fine-tuning of large models** (project_doc). Evidence: `examples/randlora_finetuning/README.md`
- **RoAd: 3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability** (project_doc). Evidence: `examples/road_finetuning/README.md`
- **Fine-tuning for semantic segmentation using LoRA and 🤗 PEFT** (project_doc). Evidence: `examples/semantic_segmentation/README.md`
- **Supervised Fine-tuning (SFT) with PEFT** (project_doc): In this example, we'll see how to use PEFT (https://github.com/huggingface/peft) to perform SFT on various distributed setups. Evidence: `examples/sft/README.md`
- **Sparse High Rank Adapters** (project_doc): Sparse High Rank Adapters, or SHiRA (https://huggingface.co/papers/2406.13175), is an alternate type of adapter and has been found to have significant advantages over low-rank adapters. Specifically, SHiRA achieves better accuracy than LoRA for a variety of vision and language tasks. It also offers simpler and higher-quality multi-adapter fusion by significantly reducing concept loss, a common problem f… Evidence: `examples/shira_finetuning/README.md`
- **WaveFT: Wavelet Fine-Tuning** (project_doc): WaveFT (https://huggingface.co/papers/2505.12532) is a novel parameter-efficient fine-tuning (PEFT) method that introduces sparse updates in the wavelet domain of residual matrices. Unlike LoRA, which is constrained by discrete low-rank choices, WaveFT enables fine-grained control over the number of trainable parameters by directly learning a sparse set of coefficients in the transformed space. These coeffi… Evidence: `examples/waveft_finetuning/README.md`
- **X-LoRA examples** (project_doc): Perform inference of an X-LoRA model using the inference engine mistral.rs. Evidence: `examples/xlora/README.md`
- **PEFT method comparison on the MetaMathQA and GSM8K datasets** (project_doc). Evidence: `method_comparison/MetaMathQA/README.md`
- **Base Model Inference Caching** (project_doc): The benchmarking suite uses a separate script, run base.py, to measure base model inference times and save results for reuse. This should be run once per model configuration to avoid redundant computations and ensure consistent baseline metrics for all PEFT experiments. Evidence: `method_comparison/text_generation_benchmark/README.md`
- **DeepSpeed** (project_doc): DeepSpeed (https://www.deepspeed.ai/) is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unl… Evidence: `docs/source/accelerate/deepspeed.md`
- **Fully Sharded Data Parallel** (project_doc): Fully sharded data parallel (FSDP, https://pytorch.org/docs/stable/fsdp.html) is developed for distributed training of large pretrained models up to 1T parameters. FSDP achieves this by sharding the model parameters, gradients, and optimizer states across data parallel processes, and it can also offload sharded model parameters to a CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch… Evidence: `docs/source/accelerate/fsdp.md`
- **Adapters** (project_doc). Evidence: `docs/source/conceptual_guides/adapter.md`
- **IA3** (project_doc). Evidence: `docs/source/conceptual_guides/ia3.md`
- **Orthogonal Finetuning (OFT and BOFT)** (project_doc). Evidence: `docs/source/conceptual_guides/oft.md`
- **Soft prompts** (project_doc): Training large pretrained language models is very time-consuming and compute-intensive. As they continue to grow in size, there is increasing interest in more efficient training methods such as prompting. Prompting primes a frozen pretrained model for a specific downstream task by including a text prompt that describes the task or even demonstrates an example of the task. With prompting, you can avoid fully trainin… Evidence: `docs/source/conceptual_guides/prompting.md`
- **PEFT checkpoint format** (project_doc). Evidence: `docs/source/developer_guides/checkpoint.md`
- **Custom models** (project_doc). Evidence: `docs/source/developer_guides/custom_models.md`
- **LoRA** (project_doc). Evidence: `docs/source/developer_guides/lora.md`
- **Adapter injection** (project_doc). Evidence: `docs/source/developer_guides/low_level_api.md`
- **Mixed adapter types** (project_doc). Evidence: `docs/source/developer_guides/mixed_models.md`
- **Model merging** (project_doc). Evidence: `docs/source/developer_guides/model_merging.md`
- **Quantization** (project_doc). Evidence: `docs/source/developer_guides/quantization.md`
- **torch.compile** (project_doc). Evidence: `docs/source/developer_guides/torch_compile.md`
- **Troubleshooting** (project_doc). Evidence: `docs/source/developer_guides/troubleshooting.md`
- **PEFT** (project_doc). Evidence: `docs/source/index.md`
- **AdaLoRA** (project_doc). Evidence: `docs/source/package_reference/adalora.md`
- **AdaMSS** (project_doc). Evidence: `docs/source/package_reference/adamss.md`
- **LyCORIS** (project_doc). Evidence: `docs/source/package_reference/adapter_utils.md`
- **AutoPeftModels** (project_doc). Evidence: `docs/source/package_reference/auto_class.md`
- **BEFT: Bias-Efficient Fine-Tuning of Language Models in Low-Data Regimes** (project_doc). Evidence: `docs/source/package_reference/beft.md`
- **BOFT** (project_doc). Evidence: `docs/source/package_reference/boft.md`
- **C3A: Parameter-Efficient Fine-Tuning via Circular Convolution** (project_doc). Evidence: `docs/source/package_reference/c3a.md`
- **Cartridges** (project_doc). Evidence: `docs/source/package_reference/cartridges.md`
- **Configuration** (project_doc): PeftConfigMixin is the base configuration class for storing the adapter configuration of a PeftModel, and PromptLearningConfig is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number… Evidence: `docs/source/package_reference/config.md`
- **Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods** (project_doc). Evidence: `docs/source/package_reference/cpt.md`
- **DeLoRA: Decoupled Low-rank Adaptation** (project_doc). Evidence: `docs/source/package_reference/delora.md`
- **FourierFT: Discrete Fourier Transformation Fine-Tuning** (project_doc). Evidence: `docs/source/package_reference/fourierft.md`
- **Functions for PEFT integration** (project_doc): A collection of functions that could be useful for non-PeftModel models, e.g. transformers or diffusers integration. Evidence: `docs/source/package_reference/functional.md`
- **GraLoRA** (project_doc): Granular Low-Rank Adaptation (GraLoRA, https://huggingface.co/papers/2505.20355) is a PEFT method designed to enhance the expressivity of low-rank adaptation while improving robustness to outlier activations, based on insights from well-known issues in quantization. Evidence: `docs/source/package_reference/gralora.md`
- **Helper methods** (project_doc): A collection of helper functions for PEFT. Evidence: `docs/source/package_reference/helpers.md`
- **HiRA** (project_doc): High-Rank Adaptation (HiRA, https://openreview.net/pdf?id=TwJrTz9cRS) is a PEFT method that extends the LoRA approach by applying an element-wise modulation on the original weight matrix. Instead of adding a low-rank update directly, HiRA computes:… Evidence: `docs/source/package_reference/hira.md`
- **Hotswapping adapters** (project_doc): The idea of hotswapping an adapter is the following: we can already load multiple adapters, e.g. two LoRAs, at the same time. But sometimes we want to load one LoRA and then replace its weights in-place with the LoRA weights of another adapter. This is now possible with the hotswap adapter function. Evidence: `docs/source/package_reference/hotswap.md`
- **Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation (HRA)** (project_doc). Evidence: `docs/source/package_reference/hra.md`
- **IA3** (project_doc). Evidence: `docs/source/package_reference/ia3.md`

## Evidence Index

- 80 evidence entries indexed.

- **Generating the documentation** (documentation). Evidence: `docs/README.md`
- **Contribute to PEFT** (documentation). Evidence: `docs/source/developer_guides/contributing.md`
- **Installation** (documentation). Evidence: `docs/source/install.md`
- **Quickstart** (documentation). Evidence: `README.md`
- **PEFT Docker images** (documentation): Here we store all PEFT Docker images used in our testing infrastructure. We use python 3.11 for now on all our images. Evidence: `docker/README.md`
- **Comparison of PEFT Methods** (documentation): The goal of this project is to provide replicable experiments that produce outcomes allowing us to compare different PEFT methods with one another. This gives you more information to make an informed decision about which methods best fit your use case and what trade-offs to expect. Evidence: `method_comparison/README.md`
- **KappaTune Experiment** (documentation): This script compares different fine-tuning strategies on a downstream task (gsm8k) while measuring catastrophic forgetting on a general-knowledge control dataset (WikiText). For further details see the KappaTune paper: https://arxiv.org/abs/2506.16289. Evidence: `examples/KappaTune/README.md`
- **AdaMSS Fine-tuning** (documentation): AdaMSS (Adaptive Matrix Decomposition with Subspace Selection) is a parameter-efficient fine-tuning method that decomposes weight matrices using SVD into low-rank subspaces. It uses only ~0.07% of the original trainable parameters (e.g., 59K for ViT-Base vs 86M for full fine-tuning) while maintaining competitive performance. Evidence: `examples/adamss_finetuning/README.md`
- **Activated LoRA (aLoRA)** (documentation): Activated LoRA (aLoRA) is an adapter that selectively activates its weights only after a given invocation sequence, ensuring that hidden states match the base model prior to this point. This allows reusing the base model KVs stored in the KV cache for tokens before the invocation, enabling much faster real-world inference (e.g. vLLM) when switching between generation with the base model and generation with adapters. See the paper https://huggingface.co/papers/2504.12397 for more details. Evidence: `examples/alora_finetuning/README.md`
- **BD-LoRA Finetuning** (documentation): Block-Diagonal LoRA (BD-LoRA) is a LoRA variant in which some LoRA factors are constrained to be block-diagonal. This allows faster serving by eliminating communication overheads when running inference on multiple GPUs, at the same finetuning performance as vanilla LoRA. Evidence: `examples/bdlora_finetuning/README.md`
- **BEFT: Bias-Efficient Fine-Tuning of Language Models in Low-Data Regimes** (documentation). Evidence: `examples/beft_finetuning/README.md`
- **CARTRIDGE self-study distillation example** (documentation). Evidence: `examples/cartridge_self_study/README.md`
- **CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning** (documentation). Evidence: `examples/corda_finetuning/README.md`
- **Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods** (documentation): Paper: https://huggingface.co/papers/2410.17222; code: https://github.com/tsachiblau/Context-aware-Prompt-Tuning-Advancing-In-Context-Learning-with-Adversarial-Methods; notebook: cpt train and inference.ipynb; Colab: https://colab.research.google.com/drive/1UhQDVhZ9bDlSk1551SuJV8tIUmlIayta?usp=sharing Evidence: `examples/cpt_finetuning/README.md`
- **DeLoRA: Decoupled Low-Rank Adaptation** (documentation). Evidence: `examples/delora_finetuning/README.md`
- **DoRA: Weight-Decomposed Low-Rank Adaptation** (documentation). Evidence: `examples/dora_finetuning/README.md`
- **EVA: Explained Variance Adaptation** (documentation): Paper: https://huggingface.co/papers/2410.07170; code: https://github.com/ml-jku/EVA. Explained Variance Adaptation (EVA) is a novel initialization method for LoRA-style adapters which initializes adapter weights in a data-driven manner and adaptively allocates ranks according to the variance they explain. EVA improves average performance on a multitude of tasks across various domains, such as language generation and understanding, image classification, and decision making. Evidence: `examples/eva_finetuning/README.md`
- **GraLoRA: Granular Low-Rank Adaptation** (documentation). Evidence: `examples/gralora_finetuning/README.md`
- **HiRA causal language modeling fine-tuning** (documentation). Evidence: `examples/hira_finetuning/README.md`
- **DreamBooth fine-tuning with HRA** (documentation). Evidence: `examples/hra_dreambooth/README.md`
- **Fine-tuning for image classification using LoRA and 🤗 PEFT** (documentation). Evidence: `examples/image_classification/README.md`
- **Lily: Low-Rank Interconnected Adaptation Across Layers** (documentation). Evidence: `examples/lily_finetuning/README.md`
- **LoftQ: LoRA-fine-tuning-aware Quantization** (documentation). Evidence: `examples/loftq_finetuning/README.md`
- **Transformer Engine ESM2 LoRA Fine-Tuning** (documentation). Evidence: `examples/lora_finetuning_transformer_engine/README.md`
- **LoRA-GA: Low-Rank Adaptation with Gradient Approximation** (documentation). Evidence: `examples/lora_ga_finetuning/README.md`
- **LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning** (documentation). Evidence: `examples/lorafa_finetune/README.md`
- **MiSS: Balancing LoRA Performance and Efficiency with Simple Shard Sharing** (documentation): Paper: https://huggingface.co/papers/2409.15371; code: https://github.com/JL-er/MiSS. MiSS (Matrix Shard Sharing) is a novel PEFT method that adopts a low-rank structure, requires only a single trainable matrix, and introduces a new update mechanism distinct from LoRA, achieving an excellent balance between performance and efficiency. Evidence: `examples/miss_finetuning/README.md`
- **Fine-tuning a multilayer perceptron using LoRA and 🤗 PEFT** (documentation). Evidence: `examples/multilayer_perceptron/README.md`
- **OLoRA: Orthonormal Low Rank Adaptation of Large Language Models** (documentation). Evidence: `examples/olora_finetuning/README.md`
- **Orthogonal Subspace Fine-tuning (OSF) - Continual Learning Example** (documentation). Evidence: `examples/orthogonal_subspace_learning/README.md`
- **PEANuT: Parameter-Efficient Adaptation with Weight-aware Neural Tweakers** (documentation). Evidence: `examples/peanut_finetuning/README.md`
- **PiSSA: Principal Singular values and Singular vectors Adaptation** (documentation): Paper: https://huggingface.co/papers/2404.02948; code: https://github.com/GraphPKU/PiSSA. PiSSA represents a matrix $W\in\mathbb{R}^{m\times n}$ within the model by the product of two trainable matrices $A \in \mathbb{R}^{m\times r}$ and $B \in \mathbb{R}^{r\times n}$, where $r \ll \min(m, n)$, plus a residual matrix $W^{res}\in\mathbb{R}^{m\times n}$ for error correction. Singular value decomposition (SVD) is employed to factorize $W$, and the principal singular values and vectors of $W$ are utilized to initialize $A$ and $B$. The residual singular values and vectors initialize the residual matrix $W^{res}$, which ke… Evidence: `examples/pissa_finetuning/README.md`
- **Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation (PSOFT)** (documentation): Paper: https://huggingface.co/papers/2505.11235; code: https://github.com/fei407/PSOFT. PSOFT aims to preserve the geometric relationships among pre-trained weight column vectors—a core principle of OFT—while achieving a balanced trade-off across parameter, computation, and memory efficiency. Unlike existing OFT variants (e.g., OFTv2, BOFT, and GOFT) that rely on sparsity-based designs, PSOFT adopts a low-rank principal subspace perspective, bridging the gap between LoRA and OFT. PSOFT confines orthogonal fine-tuning to a principal subspace, offering theoretical guarantees via orthogonality constraints on the… Evidence: `examples/psoft_finetuning/README.md`
- **Generating confidence intervals with PVeRA** (documentation). Evidence: `examples/pvera/README.md`
- **QALoRA: Quantization-Aware Low-Rank Adaptation** (documentation). Evidence: `examples/qalora_finetuning/README.md`
- **RandLora: Full-rank parameter-efficient fine-tuning of large models** (documentation). Evidence: `examples/randlora_finetuning/README.md`
- **RoAd: 3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability** (documentation). Evidence: `examples/road_finetuning/README.md`
- **Fine-tuning for semantic segmentation using LoRA and 🤗 PEFT** (documentation). Evidence: `examples/semantic_segmentation/README.md`
- **Supervised Fine-tuning (SFT) with PEFT** (documentation): In this example, we'll see how to use PEFT (https://github.com/huggingface/peft) to perform SFT on various distributed setups. Evidence: `examples/sft/README.md`
- **Sparse High Rank Adapters** (documentation): Sparse High Rank Adapters, or SHiRA (https://huggingface.co/papers/2406.13175), is an alternate type of adapter and has been found to have significant advantages over low-rank adapters. Specifically, SHiRA achieves better accuracy than LoRA for a variety of vision and language tasks. It also offers simpler and higher-quality multi-adapter fusion by significantly reducing concept loss, a common problem faced by low-rank adapters. SHiRA directly finetunes a small number of the base model's parameters to finetune the model on any adaptation task. Evidence: `examples/shira_finetuning/README.md`
- **WaveFT: Wavelet Fine-Tuning** (documentation): WaveFT (https://huggingface.co/papers/2505.12532) is a novel parameter-efficient fine-tuning (PEFT) method that introduces sparse updates in the wavelet domain of residual matrices. Unlike LoRA, which is constrained by discrete low-rank choices, WaveFT enables fine-grained control over the number of trainable parameters by directly learning a sparse set of coefficients in the transformed space. These coefficients are then mapped back to the weight domain via the Inverse Discrete Wavelet Transform (IDWT), producing high-rank updates without incurring inference overhead. Evidence: `examples/waveft_finetuning/README.md`
- **X-LoRA examples** (documentation): Perform inference of an X-LoRA model using the inference engine mistral.rs. Evidence: `examples/xlora/README.md`
- **PEFT method comparison on the MetaMathQA and GSM8K datasets** (documentation). Evidence: `method_comparison/MetaMathQA/README.md`
- **Base Model Inference Caching** (documentation): The benchmarking suite uses a separate script, run base.py, to measure base model inference times and save results for reuse. This should be run once per model configuration to avoid redundant computations and ensure consistent baseline metrics for all PEFT experiments. Evidence: `method_comparison/text_generation_benchmark/README.md`
- **License** (source_file): Apache License Version 2.0, January 2004, http://www.apache.org/licenses/ Evidence: `LICENSE`
- **DeepSpeed** (documentation): DeepSpeed (https://www.deepspeed.ai/) is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO-Offload reduces GPU compute and memory by leveraging CPU resources during optimization. Evidence: `docs/source/accelerate/deepspeed.md`
- **Fully Sharded Data Parallel** (documentation): Fully sharded data parallel (FSDP, https://pytorch.org/docs/stable/fsdp.html) is developed for distributed training of large pretrained models up to 1T parameters. FSDP achieves this by sharding the model parameters, gradients, and optimizer states across data parallel processes, and it can also offload sharded model parameters to a CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch or model sizes. Evidence: `docs/source/accelerate/fsdp.md`
- **Adapters** (documentation). Evidence: `docs/source/conceptual_guides/adapter.md`
- **IA3** (documentation). Evidence: `docs/source/conceptual_guides/ia3.md`
- **Orthogonal Finetuning (OFT and BOFT)** (documentation). Evidence: `docs/source/conceptual_guides/oft.md`
- **Soft prompts** (documentation): Training large pretrained language models is very time-consuming and compute-intensive. As they continue to grow in size, there is increasing interest in more efficient training methods such as prompting. Prompting primes a frozen pretrained model for a specific downstream task by including a text prompt that describes the task or even demonstrates an example of the task. With prompting, you can avoid fully training a separate model for each downstream task, and use the same frozen pretrained model instead. This is a lot easier because you can use the same model for several different tasks, and it is significantly more efficient to train and store a smaller set of prompt parameters than to… Evidence: `docs/source/conceptual_guides/prompting.md`
- **PEFT checkpoint format** (documentation). Evidence: `docs/source/developer_guides/checkpoint.md`
- **Custom models** (documentation). Evidence: `docs/source/developer_guides/custom_models.md`
- **LoRA** (documentation). Evidence: `docs/source/developer_guides/lora.md`
- **Adapter injection** (documentation). Evidence: `docs/source/developer_guides/low_level_api.md`
- **Mixed adapter types** (documentation). Evidence: `docs/source/developer_guides/mixed_models.md`
- **Model merging** (documentation). Evidence: `docs/source/developer_guides/model_merging.md`
- **Quantization** (documentation). Evidence: `docs/source/developer_guides/quantization.md`
- **torch.compile** (documentation). Evidence: `docs/source/developer_guides/torch_compile.md`
- **Troubleshooting** (documentation). Evidence: `docs/source/developer_guides/troubleshooting.md`
- The remaining 20 evidence entries are in `AI_CONTEXT_PACK.json` or `EVIDENCE_INDEX.json`.

## Rules the Host AI Must Follow

- **Treat this asset as pre-work context, not a runtime environment.** The AI Context Pack contains only evidence-based project understanding, not an executable state of the target project. Evidence: `docs/README.md`, `docs/source/developer_guides/contributing.md`, `docs/source/install.md`
- **When answering users, separate what is previewable from what can only be verified after installation.** The consumer value of a pre-install experience comes from reducing mis-installs and misjudgments, not from impersonating a real run. Evidence: `docs/README.md`, `docs/source/developer_guides/contributing.md`, `docs/source/install.md`

## Questions the User Should Answer Before Starting

- In which host AI or local environment do you plan to use it?
- Do you just want to experience the workflow first, or are you preparing a real install?
- What matters most to you: install cost, output quality, or conflicts with existing rules?

## Acceptance Criteria

- Every capability claim can be traced back to a file path in evidence_refs.
- AI_CONTEXT_PACK.md never packages the preview as a real run.
- Within 3 minutes a user can see who it suits, what it can do, how to start, and where the risk boundaries are.

---

## Doramagic Context Augmentation

The content below reinforces the Repomix/AI Context Pack body. The Human Manual only provides a reading skeleton; pitfall logs are converted into working constraints the host AI must obey.

## Human Manual Skeleton

Usage rule: this is only a reading route through the project plus salience signals, not a factual authority. Concrete facts must still come from repo evidence / the Claim Graph.

Hard rules for the host AI:
- Never treat page titles, section order, summaries, or importance as project-fact evidence.
- When explaining the Human Manual skeleton, state explicitly that it is only a reading route / salience signal.
- Capability, installation, compatibility, runtime-state, and risk judgments must cite repo evidence, a source path, or the Claim Graph.

- **PEFT overview and quick start**: importance `high`
  - source_paths: README.md, docs/source/quicktour.md, docs/source/install.md, src/peft/__init__.py
- **Core modules and architecture**: importance `high`
  - source_paths: src/peft/peft_model.py, src/peft/mapping.py, src/peft/helpers.py, src/peft/auto.py, src/peft/tuners/tuners_utils.py
- **Configuration system**: importance `high`
  - source_paths: src/peft/config.py, src/peft/utils/peft_types.py, src/peft/utils/constants.py
- **LoRA and its variants**: importance `high`
  - source_paths: src/peft/tuners/lora/__init__.py, src/peft/tuners/lora/config.py, src/peft/tuners/lora/layer.py, src/peft/tuners/lora/model.py, src/peft/tuners/lora/dora.py
- **Other PEFT methods**: importance `medium`
  - source_paths: src/peft/tuners/ia3/__init__.py, src/peft/tuners/prompt_tuning/__init__.py, src/peft/tuners/prefix_tuning/__init__.py, src/peft/tuners/p_tuning/__init__.py, src/peft/tuners/oft/__init__.py
- **Advanced tuners and experimental methods**: importance `medium`
  - source_paths: src/peft/tuners/boft/__init__.py, src/peft/tuners/fourierft/__init__.py, src/peft/tuners/waveft/__init__.py, src/peft/tuners/loha/__init__.py, src/peft/tuners/lokr/__init__.py
- **Model merging and fusion utilities**: importance `medium`
  - source_paths: src/peft/utils/merge_utils.py, src/peft/utils/save_and_load.py, src/peft/mixed_model.py
- **Quantization support and acceleration** (see the hedged sketch after this list): importance `medium`
  - source_paths: src/peft/tuners/lora/bnb.py, src/peft/tuners/lora/gptq.py, src/peft/tuners/lora/awq.py, src/peft/tuners/lora/aqlm.py, src/peft/tuners/lora/hqq.py
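
To ground the quantization reading path, the sketch below shows the pattern those files implement: attaching a LoRA adapter to a 4-bit base model via the public transformers/peft APIs. The model name is illustrative, the optional bitsandbytes dependency and a CUDA device are assumed, and it should only be run in an isolated environment.

```python
# Hedged sketch: a LoRA adapter on a 4-bit quantized base model, the pattern
# implemented by src/peft/tuners/lora/bnb.py. Assumes bitsandbytes and CUDA.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",                # illustrative model
    quantization_config=bnb,
)
base = prepare_model_for_kbit_training(base)  # freezes/casts layers for k-bit training
model = get_peft_model(base, LoraConfig(r=8, target_modules=["q_proj", "v_proj"]))
model.print_trainable_parameters()
```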

## Repo Inspection Evidence

- repo_clone_verified: true
- repo_inspection_verified: true
- repo_commit: `758cdac51922abbb24b6e772844c0a88bbe1cd7d`
- inspected_files: `pyproject.toml`, `README.md`, `requirements.txt`, `docs/README.md`, `docs/source/index.md`, `docs/source/_config.py`, `docs/source/install.md`, `docs/source/quicktour.md`, `docs/source/_toctree.yml`, `docs/source/developer_guides/model_merging.md`, `docs/source/developer_guides/torch_compile.md`, `docs/source/developer_guides/contributing.md`, `docs/source/developer_guides/mixed_models.md`, `docs/source/developer_guides/troubleshooting.md`, `docs/source/developer_guides/lora.md`, `docs/source/developer_guides/checkpoint.md`, `docs/source/developer_guides/custom_models.md`, `docs/source/developer_guides/low_level_api.md`, `docs/source/developer_guides/quantization.md`, `docs/source/package_reference/fourierft.md`

Hard rules for the host AI:
- Without repo_clone_verified=true, never claim to have read the source code.
- Without repo_inspection_verified=true, never state judgments about README/docs/package files as facts.
- Without quick_start_verified=true, never claim the Quick Start has been run successfully.

## Doramagic Pitfall Constraints

These rules come from project-specific pitfalls found during Doramagic's discovery, verification, or compilation. The host AI must treat them as working constraints, not ordinary prose.

### Constraint 1: Source evidence: [BUG] peft 0.19 target_modules (str) use `set`

- Trigger: GitHub community evidence shows an unverified, configuration-related issue in this project: [BUG] peft 0.19 target_modules (str) use `set`
- Host AI rule: the source issue is still open; the Pack Agent must re-check whether it still affects the current version. (A hedged configuration sketch follows this constraint.)
- Why it matters: may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_bd098228d56f4251949a351ac90335fc | https://github.com/huggingface/peft/issues/3229 | the source discussion mentions python-related conditions; re-check before installing or trialing.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.
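
A hedged illustration only: passing `target_modules` as an explicit list is standard `LoraConfig` usage and avoids exercising the single-string handling the report concerns. Whether this actually sidesteps the issue on your installed version must be re-verified against the thread; it is not a confirmed fix.

```python
# Hedged sketch: avoid the str-handling path flagged in issue #3229 by passing
# target_modules as an explicit list of module names. Re-verify against the
# issue thread for your installed peft version; this is not a confirmed fix.
from peft import LoraConfig

config = LoraConfig(target_modules=["q_proj", "v_proj"])  # a list, not "q_proj"
```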

### Constraint 2: Source evidence: Comparison of Different Fine-Tuning Techniques for Conversational AI

- Trigger: GitHub community evidence shows an unverified, security/permissions-related issue in this project: Comparison of Different Fine-Tuning Techniques for Conversational AI
- Host AI rule: the source issue is still open; the Pack Agent must re-check whether it still affects the current version.
- Why it matters: may affect authorization, key configuration, or security boundaries.
- Evidence: community_evidence:github | cevd_408252d26b4a4d87b9ca9362c3b4b37b | https://github.com/huggingface/peft/issues/2310 | usage conditions pending verification, surfaced by a github_issue source.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 3: Source evidence: Feature Request: Improve offline support for custom architectures in get_peft_model_state_dict

- Trigger: GitHub community evidence shows an unverified, installation-related issue in this project: Feature Request: Improve offline support for custom architectures in get_peft_model_state_dict
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must state which versions it applies to.
- Why it matters: may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_32e0990aa35b430bac525df543e75cac | https://github.com/huggingface/peft/issues/3211 | the source discussion mentions python-related conditions; re-check before installing or trialing.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 4: Source evidence: 0.17.0: SHiRA, MiSS, LoRA for MoE, and more

- Trigger: GitHub community evidence shows an unverified, configuration-related issue in this project: 0.17.0: SHiRA, MiSS, LoRA for MoE, and more
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must state which versions it applies to.
- Why it matters: may affect upgrades, migration, or version selection.
- Evidence: community_evidence:github | cevd_a7ec4779d09a4fcebe0901d73f869bf0 | https://github.com/huggingface/peft/releases/tag/v0.17.0 | the source discussion mentions python-related conditions; re-check before installing or trialing.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 5: Source evidence: Applying Dora to o_proj of Meta-Llama-3.1-8B results in NaN

- Trigger: GitHub community evidence shows an unverified, configuration-related issue in this project: Applying Dora to o_proj of Meta-Llama-3.1-8B results in NaN
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must state which versions it applies to.
- Why it matters: may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_ce144c340d9f40929a6551e9dbca770d | https://github.com/huggingface/peft/issues/2049 | the source discussion mentions python-related conditions; re-check before installing or trialing.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 6: Capability judgments rest on assumptions

- Trigger: README/documentation is current enough for a first validation pass.
- Host AI rule: convert the assumption into a downstream verification checklist.
- Why it matters: if the assumption fails, the user does not get the promised capability.
- Evidence: capability.assumptions | github_repo:570384908 | https://github.com/huggingface/peft | README/documentation is current enough for a first validation pass.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 7: Source evidence: 0.17.1

- Trigger: GitHub community evidence shows an unverified, runtime-related issue in this project: 0.17.1
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must state which versions it applies to.
- Why it matters: may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_cd675dc497c44319af556a2e7059dd95 | https://github.com/huggingface/peft/releases/tag/v0.17.1 | usage conditions pending verification, surfaced by a github_release source.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 8: Source evidence: v0.15.1

- Trigger: GitHub community evidence shows an unverified, runtime-related issue in this project: v0.15.1
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must state which versions it applies to.
- Why it matters: may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_66bfe8be731a44de971b991569f61e57 | https://github.com/huggingface/peft/releases/tag/v0.15.1 | usage conditions pending verification, surfaced by a github_release source.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 9: Source evidence: v0.15.2

- Trigger: GitHub community evidence shows an unverified, runtime-related issue in this project: v0.15.2
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must state which versions it applies to.
- Why it matters: may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_3d5933ee300d4f68bfab2f0440fae679 | https://github.com/huggingface/peft/releases/tag/v0.15.2 | usage conditions pending verification, surfaced by a github_release source.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.

### Constraint 10: Source evidence: 0.16.0: LoRA-FA, RandLoRA, C³A, and much more

- Trigger: GitHub community evidence shows an unverified, maintenance/version-related issue in this project: 0.16.0: LoRA-FA, RandLoRA, C³A, and much more
- Host AI rule: the source suggests a fix, workaround, or version change may already exist; the manual must state which versions it applies to.
- Why it matters: may raise the cost of new-user trials and production adoption.
- Evidence: community_evidence:github | cevd_5ef66863f7c64b3e9e3ba6a72eaab639 | https://github.com/huggingface/peft/releases/tag/v0.16.0 | usage conditions pending verification, surfaced by a github_release source.
- Hard boundary: do not present this pitfall as resolved, verified, or ignorable unless later verification evidence clearly shows it has been closed.
