DSPy (Programmatic Prompt Engineering)
DSPy is a Python framework for building LLM programs as composable Modules with declarative Signatures. Its 14 teleprompter (optimizer) classes auto-compile prompts and few-shot demonstrations from train and dev sets. LM access is unified via LiteLLM, backed by a two-tier cache (in-memory LRU plus diskcache).
Overview
Constraints
Evidence Quality
High confidence — strong evidence base
8 non-negotiable constraints
WHEN: Configuring DSPy in any production, multi-tenant, or shared-CI environment where DSPY_CACHEDIR (or the default ~/.dspy_cache) points at a writable shared location
ACTION: Call dspy.configure_cache(restrict_pickle=True) (and register safe_types as needed) so the global Cache instance routes diskcache reads through the restricted unpickler in dspy/clients/disk_serialization.py
CONSEQUENCE: The default Cache(restrict_pickle=False) at clients/__init__.py:88 routes Cache.get() through diskcache's pickle.load WITHOUT a restricted unpickler; a poisoned ~/.dspy_cache shard (shared CI volume, dependency confusion, multi-tenant host) triggers arbitrary code execution at fetch time with no user opt-in
WHEN: Writing tutorials, SKILL configs, or setup scripts that load saved DSPy programs or memory caches (BaseModule.load / dspy.load / Settings.load / Cache.load_memory_cache)
ACTION: Never recommend or default-set allow_pickle=True without documenting source provenance verification — every public .load(...) defaults to allow_pickle=False, and tutorials that flip the default normalize disabling the framework-side gate
CONSEQUENCE: BaseModule.load:268-271, dspy.load (saving.py:39-40), Settings.load:298-315, and Cache.load_memory_cache:201-206 each gate cloudpickle.load behind an allow_pickle=False default. A tutorial-recommended allow_pickle=True silently turns those gates into theater, enabling RCE via attacker-supplied .pkl bundles
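The gate pattern these loaders share can be sketched in a few lines. This is a hypothetical illustration of the allow_pickle=False default, not DSPy's actual load implementation; load_state is an invented name.

```python
import json
import pickle

def load_state(data: bytes, allow_pickle: bool = False):
    """Load saved state; code-executing formats require explicit opt-in.

    allow_pickle=False (the safe default) accepts only a data-only
    format (JSON here). allow_pickle=True deserializes arbitrary pickle
    bytes and must be reserved for artifacts whose provenance you have
    verified -- pickle payloads can execute code on load.
    """
    if allow_pickle:
        return pickle.loads(data)
    return json.loads(data.decode("utf-8"))
```

The point of the constraint above: a tutorial that shows `load_state(blob, allow_pickle=True)` without a provenance step teaches readers to bypass the one framework-side defense.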
WHEN: Launching MIPROv2 (or any prompt-optimization run) against a paid LM provider
ACTION: Compute the cost ceiling explicitly from num_candidates, num_trials, num_predictors, and valset size BEFORE calling teleprompter.compile() — MIPROv2._estimate_lm_calls only PRINTS the estimate; there is no max_total_calls knob
CONSEQUENCE: A misconfigured auto='heavy' run with 18 candidates × 10 predictors × a 1000-example valset can silently burn hundreds of dollars in 20 minutes — _estimate_lm_calls at mipro_optimizer_v2.py:355-401 only prints ANSI-colored estimates and returns strings; nothing raises or aborts when the estimate exceeds your budget
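A back-of-envelope budget check can be done before compile(). The per-phase weights below (bootstrap_calls_per_candidate, the trials × valset evaluation term) are illustrative assumptions, not MIPROv2's exact accounting; estimate_optimizer_calls and check_budget are hypothetical helpers.

```python
def estimate_optimizer_calls(num_candidates: int, num_trials: int,
                             num_predictors: int, valset_size: int,
                             bootstrap_calls_per_candidate: int = 10) -> int:
    """Rough upper bound on LM calls for a prompt-optimization run."""
    # Proposal/bootstrapping phase: scales with candidates x predictors.
    proposal = num_candidates * num_predictors * bootstrap_calls_per_candidate
    # Evaluation phase: each trial scores the program on the validation set.
    evaluation = num_trials * valset_size
    return proposal + evaluation

def check_budget(estimated_calls: int, max_total_calls: int) -> None:
    """Abort before compile() if the estimate exceeds your own ceiling."""
    if estimated_calls > max_total_calls:
        raise RuntimeError(
            f"estimated {estimated_calls} LM calls exceeds "
            f"budget of {max_total_calls}"
        )

# The scenario from the constraint: 18 candidates x 10 predictors, 30 trials
# over a 1000-example valset -> 1800 + 30000 = 31800 calls under these weights.
calls = estimate_optimizer_calls(num_candidates=18, num_trials=30,
                                 num_predictors=10, valset_size=1000)
check_budget(calls, max_total_calls=50_000)  # passes; a 20k ceiling would raise
```

Since the framework provides no abort hook, this guard has to live in your own launch script, before teleprompter.compile() is ever reached.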
FAQ
Changelog
v0.1.0: Initial release on Doramagic.ai. LLM program optimization framework on stanfordnlp/dspy with bilingual metadata, 44 anti-pattern constraints (8 fatal), and 3 FAQs.