Match the project to your task before installing it.
Software Development & Delivery · Preview
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Check whether this project matches your task before installing it.
What it can do: prompt, recipe, host_instruction, eval, preflight. Review the portable capability path.
Before continuing: verify in a sandbox. Do not treat a preview pack as a proven local install.
GitHub snapshot: 21k stars · 2.3k forks · 295 contributors
Preview status · 2026-05-16
What is peft?
- PEFT (Parameter-Efficient Fine-Tuning) is a Python library developed by Hugging Face that provides efficient methods for fine-tuning pre-trained models while keeping most model parameters frozen.
- Best fit: Users who want source-backed project understanding before installing it.
- Capability added to an AI workflow: prompt, recipe, host_instruction, eval, preflight
- Evidence base: https://github.com/huggingface/peft, https://github.com/huggingface/peft#readme
- Preview pages are noindex until English quality, canonical, and citation gates pass.
- peft still needs sandbox verification before production use.
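The core idea behind PEFT methods such as LoRA can be illustrated without the library itself: the pre-trained weight stays frozen and only a small low-rank update is trained. A minimal pure-Python sketch with toy matrices (this is not the peft API, just the underlying arithmetic):

```python
def matvec(m, v):
    # Plain list-of-lists matrix-vector product.
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    # y = W x + (alpha / r) * B (A x)
    # W is the frozen base weight; only A (r x k) and B (d x r) would be trained.
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy 2x2 example with rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity)
A = [[1.0, 1.0]]               # 1 x 2 down-projection
B = [[0.5], [0.5]]             # 2 x 1 up-projection
print(lora_forward(W, A, B, [2.0, 4.0], alpha=1.0, r=1))  # [5.0, 7.0]
```

In the real library, `get_peft_model` wires equivalent adapter layers into an existing model; this sketch only shows the forward-pass arithmetic.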
01
Quick decision
Use this section to decide whether the project is worth a deeper read.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
21k stars · 2.3k forks
02
What it can do
Translate the upstream project into concrete capabilities the user can judge before installing.
Introduction to PEFT
Related topics: Installation Guide, System Architecture, LoRA and LoRA Variants
Sources: [src/peft/tuners/lora/model.py:1-50](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/model.py#L1-L50)
Installation Guide
Related topics: Introduction to PEFT, Quantization Integration
Sources: [pyproject.toml](https://github.com/huggingface/peft/blob/main/pyproject.toml)
System Architecture
Related topics: Core Components, Introduction to PEFT, Configuration System
Sources: [src/peft/peft_model.py:1-100](https://github.com/huggingface/peft/blob/main/src/peft/peft_model.py#L1-L100)
Core Components
Related topics: System Architecture, Configuration System, Model Loading and Saving
Sources: [src/peft/peft_model.py:1-50](https://github.com/huggingface/peft/blob/main/src/peft/peft_model.py#L1-L50)
LoRA and LoRA Variants
Related topics: Other PEFT Methods, Quantization Integration, Configuration System
Sources: [src/peft/tuners/lora/model.py:1-100](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/model.py#L1-L100)
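The savings that make LoRA "parameter-efficient" follow from simple arithmetic: a rank-r adapter on a d x k weight matrix trains r * (d + k) values instead of d * k. A small sketch (the 4096 x 4096 / rank-8 numbers are illustrative, not taken from any specific model):

```python
def lora_trainable_params(d, k, r):
    """Trainable parameters for a rank-r LoRA adapter on a d x k weight."""
    return r * (d + k)

def full_finetune_params(d, k):
    """Parameters touched by full fine-tuning of the same weight."""
    return d * k

# Example: one 4096 x 4096 projection matrix, adapter rank 8.
full = full_finetune_params(4096, 4096)       # 16,777,216
lora = lora_trainable_params(4096, 4096, 8)   # 65,536
print(f"LoRA trains {lora / full:.4%} of the full matrix")  # 0.3906%
```

The same ratio improves as d and k grow, which is why the technique pays off most on large models.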
Sources: https://github.com/huggingface/peft, Human Manual, Project Pack evidence, and downstream validation signals.
03
Community Discussion Evidence
Project-level external discussion stays visible on the detail page, not only inside the manual.
12 source-linked items. Review these external discussions before using peft with real data or production workflows. They are review inputs, not standalone proof that the project is production-ready.
01. Feature Request: Improve offline support for custom architectures in get (github / github_issue)
02. Applying Dora to o_proj of Meta-Llama-3.1-8B results in NaN (github / github_issue)
03. Comparison of Different Fine-Tuning Techniques for Conversational AI (github / github_issue)
04. [BUG] peft 0.19 target_modules (str) use `set` (github / github_issue)
05. v0.19.1 (github / github_release)
06. v0.19.0 (github / github_release)
07. 0.18.1 (github / github_release)
08. 0.18.0: RoAd, ALoRA, Arrow, WaveFT, DeLoRA, OSF, and more (github / github_release)
09. 0.17.1 (github / github_release)
10. 0.17.0: SHiRA, MiSS, LoRA for MoE, and more (github / github_release)
11. 0.16.0: LoRA-FA, RandLoRA, C³A, and much more (github / github_release)
12. v0.15.2 (github / github_release)
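Release lists like this one are where version pins get decided; when a community report flags a specific release (as the 0.19 target_modules item above may), excluding it from a requirements range is a common response. A stdlib-only sketch; the flagged version here is illustrative, not a confirmed regression:

```python
def parse_version(v):
    # "0.19.1" -> (0, 19, 1); assumes plain numeric release strings.
    return tuple(int(part) for part in v.split("."))

def is_pinned_safe(candidate, known_bad=("0.19.0",)):
    # Exclude releases flagged by community reports; everything else passes.
    return parse_version(candidate) not in {parse_version(b) for b in known_bad}

print(is_pinned_safe("0.19.0"))  # False
print(is_pinned_safe("0.19.1"))  # True
```

For real projects, `packaging.version.Version` handles pre-releases and other edge cases this sketch ignores.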
04
How to start
Only source-backed commands are shown here. Verify them in an isolated environment first.
- Preview: try the prompt first. Test the workflow without installing the upstream project.
- Manual: read the Human Manual. Understand inputs, outputs, limits, and failure modes.
- Context: take context to your AI host. Use the compiled assets in your preferred AI environment.
- Verify: run sandbox verification. Confirm install commands and rollback before using a primary environment.
pip install peft
Official start command · https://github.com/huggingface/peft#readme · verified: yes
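Sandbox verification of the install can be scripted. A minimal stdlib-only sketch (the `preflight` helper is hypothetical, not part of peft) that checks whether a package is importable and reports its installed version without importing it:

```python
import importlib.util
from importlib import metadata

def preflight(package):
    """Return (importable, version) without importing the package itself."""
    if importlib.util.find_spec(package) is None:
        return (False, None)
    try:
        return (True, metadata.version(package))
    except metadata.PackageNotFoundError:
        # Importable module without distribution metadata (e.g. stdlib).
        return (True, None)

print(preflight("json"))                        # stdlib: importable, no metadata
print(preflight("definitely_not_installed_xyz"))  # missing: (False, None)
```

Running `preflight("peft")` in the sandbox after `pip install peft` confirms the install landed before a primary environment depends on it.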
05
Human Manual
The English page must expose the real manual, not a short placeholder.
8+ sections · Human Manual
peft Manual
PEFT (Parameter-Efficient Fine-Tuning) is a Python library developed by Hugging Face that provides efficient methods for fine-tuning pre-trained models while keeping most model parameters frozen.
Open the full manual:
- peft Human Manual
- Table of Contents
- Introduction to PEFT
- Related Pages
- Overview
- Core Architecture
- Design Philosophy
- Component Hierarchy
06
AI Context Pack and portable assets
After deciding to continue, take the project context into your own AI host.
Complete pack plus user-owned assets
These files are planning and verification assets for Claude Code, Codex, Gemini, Cursor, ChatGPT, and other AI hosts.
07
Preflight checks
Treat this preview as a planning asset, not proof that your local environment is ready.
- The manual is generated from source-linked project files and Doramagic validation signals.
- Community evidence warnings stay visible instead of being converted into marketing claims.
- The preview remains noindex until English quality and reciprocal indexing gates are explicitly opened.
- Use the upstream repository as the final authority for installation commands, license, and version-specific behavior.
08
Pitfall Log and verification risks
Doramagic surfaces high-risk items before users treat a candidate capability as verified.
- Review upstream issue (7 items): the source signal needs review before production use.
- Review upstream issue: README/documentation is current enough for a first validation pass.