Engineering metrics glossary
Plain-language definitions for delivery and flow metrics—especially those you see around DORA, pull requests, and engineering analytics—with honest caveats about measurement and misuse.
6 terms · Updated reference, not benchmarks.
These pages explain common engineering and delivery metrics in plain language. Definitions vary by company, toolchain, and industry; we highlight typical usage and caveats. Nothing here is legal, financial, or professional advice, and it is not a substitute for judgment in your own context.
Metrics can be misused for surveillance or stack ranking. We do not recommend using them that way. DORA performance bands from research are contextual—not targets for individuals or hiring decisions.
Delivery and DORA
Flow and pull requests
Frequently asked questions
Why do two tools show different cycle times for the same team?
Cycle time and lead time depend on which start and stop events you choose (board state change, first commit, merge, deploy). Vendors use different defaults, so align on definitions before comparing numbers across tools.
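A toy sketch of the point above, using hypothetical timestamps and event names (none of these come from any particular vendor): the same work item yields three different "cycle times" depending on which start and stop events you pick.

```python
from datetime import datetime

# Hypothetical event timestamps for a single unit of work.
events = {
    "in_progress": datetime(2024, 5, 1, 9, 0),   # card moved to In Progress
    "first_commit": datetime(2024, 5, 1, 14, 0),
    "pr_opened": datetime(2024, 5, 2, 10, 0),
    "merged": datetime(2024, 5, 3, 16, 0),
    "deployed": datetime(2024, 5, 4, 11, 0),
}

def duration_hours(start_event: str, stop_event: str) -> float:
    """Cycle time under one choice of start/stop events, in hours."""
    return (events[stop_event] - events[start_event]).total_seconds() / 3600

# Three defensible definitions, three different numbers for the same item:
print(duration_hours("first_commit", "merged"))   # code-centric: 50.0 h
print(duration_hours("in_progress", "deployed"))  # board-to-production: 74.0 h
print(duration_hours("pr_opened", "merged"))      # PR review window: 30.0 h
```

None of these definitions is wrong; the problem only appears when a dashboard labels all three "cycle time" without saying which one it computed.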
Are DORA “elite” thresholds goals we should hit?
DORA publishes research distributions across survey respondents; they describe industry snapshots, not universal targets. Regulated, mobile, and enterprise contexts often look very different from high-velocity SaaS web apps.
Should we use these metrics for performance reviews?
System metrics reflect workflow, priorities, and dependencies, not individual worth. If you use delivery data in reviews at all, pair it with qualitative context and avoid single-metric judgments.
Where can I learn more about developer productivity beyond delivery metrics?
See our article on developer productivity metrics for frameworks like SPACE and how delivery signals fit alongside satisfaction and quality.
Developer productivity metrics: what engineering managers should track