DSPy (Programmatic Prompt Engineering)

DSPy: a Python framework for building LLM programs as composable Modules with declarative Signatures. 14 teleprompter (optimizer) classes auto-compile prompts and few-shot demos from train + dev sets. LM access is unified via LiteLLM, backed by a 2-tier cache (in-memory LRU + diskcache).


Overview

DSPy is a Python framework for building LLM programs as composable Modules with declarative Signatures (github.com/stanfordnlp/dspy). Pluggable Adapters format messages and parse responses; the LM client layer wraps LiteLLM for unified provider access; and 14 teleprompter (optimizer) classes auto-compile prompts and few-shot demos from train + dev sets. Underneath sit a 2-tier cache (in-memory LRUCache + diskcache FanoutCache on disk) and a three-layer telemetry system (Settings.trace, Module.history, usage_tracker). Settings is a process-wide singleton with thread-local overrides via ContextVar. This skill embeds 44 constraints (8 fatal) covering typical pitfalls: the default Cache(restrict_pickle=False) plus diskcache's pickle.load on a poisoned ~/.dspy_cache shard yields arbitrary code execution with no user opt-in; MIPROv2 prints its LM-call estimate but does not abort on budget overrun (silent runaway cost); and BootstrapFewShot accepts any truthy metric scalar (including 0.51) when metric_threshold is None (the default). The host AI applies these constraints automatically.
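The thread-local override mechanism described above (a process-wide singleton overlaid by per-context values via ContextVar) can be sketched in plain Python. This is an illustrative sketch, not DSPy's actual Settings class; the names Settings, configure, and context are assumptions modeled on the description.

```python
from contextvars import ContextVar
from contextlib import contextmanager

# Per-context overrides live in a ContextVar, so each thread or async
# task sees its own overlay on top of the process-wide defaults.
# (Illustrative sketch, not DSPy's real implementation.)
_overrides: ContextVar[dict] = ContextVar("overrides", default={})

class Settings:
    def __init__(self):
        self._global = {}

    def configure(self, **kwargs):
        # Mutates process-wide defaults, visible to every thread.
        self._global.update(kwargs)

    @contextmanager
    def context(self, **kwargs):
        # Layers overrides for the current context only, restored on exit.
        token = _overrides.set({**_overrides.get(), **kwargs})
        try:
            yield
        finally:
            _overrides.reset(token)

    def get(self, key):
        return _overrides.get().get(key, self._global.get(key))

settings = Settings()
settings.configure(lm="default-model")
with settings.context(lm="override-model"):
    inner = settings.get("lm")   # sees the thread-local override
outer = settings.get("lm")       # back to the process-wide default
```

The same pattern extends to async tasks for free, since contextvars propagate through asyncio.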

Blueprint Source

finance-bp-137

stanfordnlp/dspy @ da4ae191 · source file

Constraints

8 total
8 fatal
8 must-not-violate

Evidence Quality

Confidence: 90%

High confidence — strong evidence base

8 non-negotiable constraints

FATAL · domain_rule · dspy-C-001

WHEN: When configuring DSPy in any production, multi-tenant, or shared-CI environment that points DSPY_CACHEDIR (or the default ~/.dspy_cache) at a writable shared location

ACTION: Call dspy.configure_cache(restrict_pickle=True) (registering safe_types as needed) so the global Cache instance routes diskcache reads through the restricted unpickler in dspy/clients/disk_serialization.py

CONSEQUENCE: The default Cache(restrict_pickle=False) at clients/__init__.py:88 routes Cache.get() through diskcache's pickle.load WITHOUT a restricted unpickler; a poisoned ~/.dspy_cache shard (shared CI volume, dependency confusion, multi-tenant host) triggers arbitrary code execution at fetch time, with NO user opt-in
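The restricted-unpickler mechanism this constraint relies on can be illustrated with the stdlib alone. RestrictedUnpickler, SAFE_TYPES, and restricted_loads below are illustrative names, not DSPy's disk_serialization API; the point is that find_class is the single choke point where a pickle payload resolves globals, so an allow-list there blocks the classic os.system gadget.

```python
import io
import pickle

# Allow-list of (module, name) pairs the unpickler may resolve.
# Anything else (e.g. os.system smuggled in via __reduce__) is rejected.
SAFE_TYPES = {("builtins", "dict"), ("builtins", "list"), ("builtins", "str")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_TYPES:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Benign payloads of plain containers round-trip fine (container
# opcodes never go through find_class):
ok = restricted_loads(pickle.dumps({"k": ["v"]}))

# A payload whose __reduce__ returns a call to os.system is refused
# at load time, before any code runs:
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

try:
    restricted_loads(pickle.dumps(Evil()))
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

With restrict_pickle=False there is no such allow-list, which is exactly why a poisoned cache shard executes at fetch time.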

FATAL · claim_boundary · dspy-C-002

WHEN: When writing tutorials, SKILL configs, or setup scripts that load saved DSPy programs or memory caches (BaseModule.load / dspy.load / Settings.load / Cache.load_memory_cache)

ACTION: Never recommend or default-set allow_pickle=True without documenting source-provenance verification — every public .load(...) defaults to allow_pickle=False, and tutorials that flip that default normalize disabling the framework-side gate

CONSEQUENCE: BaseModule.load:268-271, dspy.load (saving.py:39-40), Settings.load:298-315, and Cache.load_memory_cache:201-206 each gate cloudpickle.load behind an allow_pickle=False default. A tutorial-recommended allow_pickle=True silently turns those gates into theater, enabling RCE via attacker-supplied .pkl bundles
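The opt-in gate pattern these .load(...) methods share can be sketched as follows. load_program is a hypothetical name for illustration, not DSPy's BaseModule.load; the essential shape is that pickle deserialization refuses to run unless the caller explicitly acknowledges the risk.

```python
import os
import pickle
import tempfile

def load_program(path: str, allow_pickle: bool = False):
    """Load a saved artifact. Deserialization is opt-in only: the caller
    must pass allow_pickle=True and should have verified the file's
    provenance first. (Gate-pattern sketch, not DSPy's BaseModule.load.)"""
    if not allow_pickle:
        raise ValueError(
            "refusing to unpickle; pass allow_pickle=True only for "
            "artifacts you produced or verified yourself"
        )
    with open(path, "rb") as f:
        return pickle.load(f)

# Save a trusted artifact we created ourselves:
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump({"demos": []}, f)
    saved_path = f.name

try:
    load_program(saved_path)      # default: the gate refuses
    gated = False
except ValueError:
    gated = True

state = load_program(saved_path, allow_pickle=True)  # explicit opt-in
os.unlink(saved_path)
```

A tutorial that writes allow_pickle=True into every example teaches readers to skip straight past this gate, which is the failure mode the constraint names.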

FATAL · operational_lesson · dspy-C-003

WHEN: When launching MIPROv2 (or any prompt-optimization run) against a paid LM provider

ACTION: Compute the cost ceiling explicitly from num_candidates, num_trials, num_predictors, and valset size BEFORE calling teleprompter.compile() — MIPROv2._estimate_lm_calls only PRINTS the estimate; there is no max_total_calls knob

CONSEQUENCE: A misconfigured auto='heavy' run of 18 candidates × 10 predictors × a 1000-example valset can silently burn hundreds of dollars in 20 minutes — _estimate_lm_calls (mipro_optimizer_v2.py:355-401) only prints ANSI-colored estimates and returns strings; it never raises or aborts when the estimate exceeds any budget
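A pre-compile budget check along the lines this constraint demands can be sketched like this. The call-count formula here is an assumption (roughly: proposal calls scale with candidates × predictors, evaluation calls with trials × valset size), not MIPROv2's exact _estimate_lm_calls arithmetic; all function and parameter names are illustrative.

```python
def estimate_optimizer_calls(num_candidates: int, num_trials: int,
                             num_predictors: int, valset_size: int) -> int:
    # Rough upper bound (assumption, not MIPROv2's exact formula):
    # proposing instructions/demos costs ~candidates x predictors calls,
    # and each optimization trial evaluates on the whole validation set.
    proposal_calls = num_candidates * num_predictors
    eval_calls = num_trials * valset_size
    return proposal_calls + eval_calls

def assert_within_budget(calls: int, cost_per_call_usd: float,
                         budget_usd: float) -> float:
    cost = calls * cost_per_call_usd
    if cost > budget_usd:
        # Fail BEFORE teleprompter.compile() -- DSPy only prints its
        # estimate and happily proceeds past any budget.
        raise RuntimeError(
            f"estimated ${cost:.2f} exceeds budget ${budget_usd:.2f}"
        )
    return cost

# The runaway scenario from the constraint: 18 candidates, 10 predictors,
# a 1000-example valset, and (assumed) 30 trials:
calls = estimate_optimizer_calls(num_candidates=18, num_trials=30,
                                 num_predictors=10, valset_size=1000)
try:
    assert_within_budget(calls, cost_per_call_usd=0.002, budget_usd=20.0)
    over_budget = False
except RuntimeError:
    over_budget = True
```

Running this guard first turns a silent $60+ overrun into a hard failure before the first LM call is made.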

FAQ


Changelog

v0.1.0 · 2026-04-25 · Contributors: tangweigang-jpg

v0.1.0: Initial release on Doramagic.ai. LLM program optimization framework on stanfordnlp/dspy with bilingual metadata, 44 anti-pattern constraints (8 fatal), and 3 FAQs.
