Open Source · Agent Framework Review
DSPy
Stanford's framework for algorithmically optimising LLM prompts and pipelines. Rather than hand-writing prompts, DSPy compiles declarative programs into optimised prompt chains using automatic few-shot generation and bootstrapping.
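The compile-rather-than-hand-write idea can be illustrated with a toy sketch in plain Python. Everything below is illustrative, not DSPy's actual API: a stub stands in for the LLM, and the "compiler" simply searches over few-shot demo subsets for the one that scores best on a small dev set, which is the essence of bootstrapped few-shot optimisation.

```python
import itertools

def build_prompt(demos, question):
    """Render a few-shot prompt from the chosen demonstrations."""
    shots = "".join(f"Q: {q}\nA: {a}\n" for q, a in demos)
    return f"{shots}Q: {question}\nA:"

def stub_lm(prompt):
    """Stand-in for a real LLM: answers correctly only when the prompt
    carries at least two demonstrations (simulating few-shot benefit)."""
    n_shots = prompt.count("Q:") - 1  # last "Q:" is the live question
    return "right" if n_shots >= 2 else "wrong"

def metric(prediction, gold):
    return prediction == gold

def bootstrap_compile(candidates, devset, k=2):
    """Pick the k-demo subset that maximises the metric on the dev set.
    This brute-force search is the toy version of what DSPy's
    optimisers do with smarter strategies."""
    best_demos, best_score = [], -1.0
    for demos in itertools.combinations(candidates, k):
        score = sum(
            metric(stub_lm(build_prompt(demos, q)), gold)
            for q, gold in devset
        ) / len(devset)
        if score > best_score:
            best_demos, best_score = list(demos), score
    return best_demos, best_score

candidates = [("2+2?", "4"), ("Capital of France?", "Paris"), ("3*3?", "9")]
devset = [("5+5?", "right"), ("1+1?", "right")]
demos, score = bootstrap_compile(candidates, devset, k=2)
```

The point of the paradigm is that `bootstrap_compile` replaces manual prompt fiddling: you write the program and the metric, and the optimiser finds the prompt.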
Editorial Score
3/5
Latest: v3.1.3
// Our Verdict
A fundamentally different approach to agent and pipeline design — compiling programs rather than writing prompts. High ceiling for teams willing to invest in the paradigm shift; steep learning curve for those expecting a conventional framework.
Best for: Research teams and ML engineers who want to optimise prompt pipelines systematically rather than hand-tune prompts
// Strengths
Automatic prompt optimisation: the compiler often finds better prompts than hand-tuning would
Declarative pipeline definition separates logic from prompting
Strong academic backing and active Stanford research group
Multi-provider model support
Production-proven at Dropbox for LLM judge optimisation — measurable iteration speed gains
Enables model comparison with measurable evidence rather than manual trial and error
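The model-comparison point above reduces to running candidate models against a shared metric over a dev set. A minimal harness, with stub "models" and names that are purely illustrative rather than DSPy's API, looks like this:

```python
def exact_match(prediction, gold):
    """Case-insensitive exact-match metric."""
    return prediction.strip().lower() == gold.strip().lower()

def evaluate(model, devset, metric):
    """Mean metric score of `model` over (question, gold) pairs."""
    return sum(metric(model(q), g) for q, g in devset) / len(devset)

# Two stand-in "models" with different behaviour on the same inputs.
model_a = lambda q: "paris" if "capital" in q.lower() else "unknown"
model_b = lambda q: "Paris" if "capital of" in q else "Lyon"

devset = [
    ("What is the capital of France?", "Paris"),
    ("Name France's capital city.", "Paris"),
]

score_a = evaluate(model_a, devset, exact_match)  # matches both → 1.0
score_b = evaluate(model_b, devset, exact_match)  # misses one → 0.5
```

With scores on a fixed dev set, the choice between models becomes a comparison of numbers rather than a judgment call, which is the workflow the review credits DSPy with enabling.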
// Weaknesses
Paradigm shift required — not intuitive for teams used to conventional agent frameworks
Python-only; polyglot teams must maintain a separate Python codebase to use it
No native MCP support
Requires building an evaluation dataset upfront — high barrier before optimisation delivers value
Optimisation runs are compute-intensive and expensive at scale
Not suited to interactive or real-time agent tasks; optimised for offline pipeline tuning
Core value proposition is widely misunderstood in practitioner communities, which slows adoption
// Agentic AI Audit
NOT SURE IF DSPY
FITS YOUR STACK?
We map your agent system requirements, evaluate which framework fits your constraints, and give you a prioritised build plan. No fluff. Just a clear stack decision with rationale.