Lead time for changes — the elapsed time from code commit to running in production — is the DORA metric most directly under engineering's control. Unlike change failure rate (which depends on product decisions and architectural history) or MTTR (which depends on incident response culture), lead time is shaped almost entirely by workflow choices that engineering teams make every week.
Yet most teams that measure lead time and find it too high respond by investing in the wrong thing. They speed up CI pipelines when the bottleneck is PR review wait time. They reduce PR size when the bottleneck is a mandatory approval gate that only one person can action. Before spending engineering cycles on optimization, you need to know where your time is actually going.
Step One: Decompose Your Lead Time
Lead time is not a single number — it's the sum of several distinct stages, each with different root causes. Before intervening, measure each stage separately:
Lead time stages
- Coding time — first commit to PR opened. Reflects task complexity and whether engineers are context-switching frequently.
- Review wait time — PR opened to first reviewer comment. Pure queue latency. Often the largest contributor in teams without review SLAs.
- Review cycle time — first review to PR approval. Reflects PR quality, reviewer depth expectations, and back-and-forth on feedback.
- Merge to deploy — PR merged to code live in production. Includes CI pipeline, any manual approval gates, deployment queue, and release coordination overhead.
For most teams, one stage dominates. Find it by pulling the median and 75th-percentile duration for each stage over the last 30 days. The stage with the highest 75th percentile is almost always your bottleneck — the median will look acceptable because it excludes the long tail of PRs that got stuck.
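The decomposition above can be sketched in a few lines. This is a minimal example assuming you have already pulled per-PR stage durations (in hours) from your Git provider's API; the field names and sample values are illustrative, not a real dataset.

```python
from statistics import median, quantiles

# Hypothetical per-PR stage durations in hours over the last 30 days.
# In practice these come from your Git provider's API or a DORA tool.
prs = [
    {"coding": 6,  "review_wait": 18, "review_cycle": 4, "merge_to_deploy": 2},
    {"coding": 3,  "review_wait": 2,  "review_cycle": 1, "merge_to_deploy": 2},
    {"coding": 10, "review_wait": 30, "review_cycle": 8, "merge_to_deploy": 3},
    {"coding": 4,  "review_wait": 5,  "review_cycle": 2, "merge_to_deploy": 2},
]

def stage_stats(prs, stage):
    durations = sorted(pr[stage] for pr in prs)
    p75 = quantiles(durations, n=4)[2]  # 75th percentile (exclusive method)
    return median(durations), p75

for stage in ("coding", "review_wait", "review_cycle", "merge_to_deploy"):
    med, p75 = stage_stats(prs, stage)
    print(f"{stage:16s} median={med:5.1f}h  p75={p75:5.1f}h")

# The bottleneck is the stage with the highest 75th percentile.
bottleneck = max(prs[0], key=lambda s: stage_stats(prs, s)[1])
print("bottleneck:", bottleneck)
```

Note how the sample data illustrates the median/p75 gap: review wait has an unremarkable median but a long tail, which is exactly the shape a bottleneck takes.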
Root Cause 1: Batch Size
Large PRs take longer to review, generate more review back-and-forth, and are harder to deploy safely. If your median PR is over 400 lines of diff, batch size is almost certainly contributing to lead time even if your review culture is healthy.
The intervention is cultural, not technical. Engineers need to understand that smaller PRs are not laziness — they're engineering discipline. A useful framing: a 200-line PR that ships Monday and unblocks three other engineers is more valuable than a 600-line PR that sits in review for two days.
Tactics for reducing batch size: introduce a team PR size target (e.g., under 300 lines of non-generated code, excluding test files), practice stacked PRs where a feature is broken into a chain of small reviewable units, and use feature flags to ship code incrementally before it's ready for users.
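A PR size target only works if the count excludes the noise. The sketch below shows one way to compute the counted size from per-file diff stats (the shape you get from `git diff --numstat` or a provider API); the generated-file suffixes and test-path markers are assumptions to adapt to your repo layout.

```python
# Hypothetical pre-merge size check for a team PR size target.
# Suffixes and markers below are assumptions -- adjust per repo.
GENERATED_SUFFIXES = (".lock", ".pb.go", "_generated.py")
TEST_MARKERS = ("test_", "_test.", "/tests/")
SIZE_TARGET = 300

def counted_lines(numstat):
    """numstat: list of (added, deleted, path) tuples."""
    total = 0
    for added, deleted, path in numstat:
        if path.endswith(GENERATED_SUFFIXES):
            continue  # generated code: excluded from the target
        if any(marker in path for marker in TEST_MARKERS):
            continue  # test files: excluded from the target
        total += added + deleted
    return total

diff = [
    (120, 30, "billing/invoice.py"),
    (400, 0,  "poetry.lock"),           # generated: not counted
    (80, 10,  "tests/test_invoice.py"), # tests: not counted
]
size = counted_lines(diff)
print(f"PR size: {size} lines "
      f"({'over' if size > SIZE_TARGET else 'within'} target)")
```

Wired into CI as a soft warning rather than a hard block, a check like this keeps the target visible without punishing legitimate large changes.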
Before and after: batch size reduction
A payments team averaging 750-line PRs and 3.5-day lead time adopted a 300-line PR target and stacked PR workflow for large features. After 60 days, median PR size dropped to 280 lines, review wait time fell from 18 hours to 6 hours (reviewers could engage faster with smaller scope), and lead time dropped to 1.8 days — a 49% reduction without any CI investment.
Root Cause 2: PR Review Queue
In most engineering teams, review wait time — the gap between PR open and first reviewer comment — is the largest single component of lead time. Engineers open a PR and context-switch to the next task while waiting. Reviewers are deep in their own work and don't see the review request for hours. This is entirely a coordination problem, not a technical one.
The most effective intervention is a team-level review SLA combined with a visible review queue. Teams that adopt a norm of "review any PR marked ready-for-review within two hours during working hours" typically see review wait time drop by 60-70% within two weeks. The norm needs to be visible and tracked — a weekly metric showing team average review wait time creates accountability without requiring managers to police individual behavior.
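The "during working hours" qualifier matters when you track the metric: an overnight PR should not count as a 16-hour SLA miss. Here is a sketch of working-hours-clipped wait time, assuming a 09:00-17:00 weekday window; the hours and example timestamps are assumptions.

```python
from datetime import datetime, timedelta

WORK_START, WORK_END = 9, 17  # assumed working-hours window

def working_hours_between(start, end):
    """Sum only the 09:00-17:00 weekday time between two instants."""
    total = timedelta()
    cursor = start
    while cursor < end:
        day_open = cursor.replace(hour=WORK_START, minute=0,
                                  second=0, microsecond=0)
        day_close = cursor.replace(hour=WORK_END, minute=0,
                                   second=0, microsecond=0)
        if cursor.weekday() < 5:  # Monday-Friday only
            lo, hi = max(cursor, day_open), min(end, day_close)
            if hi > lo:
                total += hi - lo
        cursor = day_open + timedelta(days=1)  # jump to next day's open
    return total

# Opened Friday 16:00, first review Monday 09:30 -> 1.5 working hours.
wait = working_hours_between(datetime(2024, 5, 3, 16, 0),
                             datetime(2024, 5, 6, 9, 30))
print(wait)
```

Measured this way, the two-hour norm becomes enforceable: a Friday-evening PR reviewed first thing Monday is within SLA, while a mid-morning PR ignored until after lunch is not.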
CODEOWNERS files help by routing reviews automatically to the right people, but they also create single-point-of-failure bottlenecks if only one person owns a critical path. Audit your CODEOWNERS rules for files that have only one owner — those paths will spike your lead time whenever that person is out or context-switched.
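The audit can be automated. This minimal sketch parses the common `pattern owner1 owner2` CODEOWNERS line format and flags single-owner rules; it skips comment lines but does not handle sections or inline comments, and the paths and handles in the sample are hypothetical.

```python
def single_owner_paths(codeowners_text):
    """Flag CODEOWNERS rules with exactly one owner (bus-factor risk)."""
    flagged = []
    for line in codeowners_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments
        pattern, *owners = line.split()
        if len(owners) == 1:
            flagged.append((pattern, owners[0]))
    return flagged

sample = """\
# Example CODEOWNERS (paths and handles are hypothetical)
/payments/   @alice
/api/        @platform-team @bob
*.tf         @carol
"""
for pattern, owner in single_owner_paths(sample):
    print(f"single point of failure: {pattern} -> {owner}")
```

Running this in CI whenever the CODEOWNERS file changes keeps new single-owner paths from slipping in unnoticed.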
Root Cause 3: Branch Strategy
Long-lived branches are a hidden lead time tax. If your team maintains feature branches for more than two or three days before merging, you're accumulating merge complexity that shows up as review cycle time (reviewers have to understand a large changeset that diverged from main) and merge-to-deploy time (integration testing after a large merge).
Trunk-based development — merging directly to main with feature flags to control rollout — is the branching strategy most correlated with elite DORA performance. It forces small batch sizes, eliminates merge complexity, and compresses the feedback loop between writing code and seeing it in production.
If your team isn't ready for trunk-based development, a short-lived branch policy is a meaningful step: any branch more than two days old automatically gets a review request pinged in Slack. This surfaces stalled work before it compounds into a large, risky merge.
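The stale-branch check is a few lines once you have branch ages, e.g. from `git for-each-ref --format '%(refname:short) %(committerdate:iso)'`. The branch names and the Slack-ping wiring are left as assumptions; this only shows the detection step.

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=2)  # the short-lived branch policy threshold

def stale_branches(branches, now):
    """branches: list of (name, last_commit_datetime) tuples."""
    return [name for name, last_commit in branches
            if now - last_commit > MAX_AGE]

now = datetime(2024, 5, 10, 9, 0)
branches = [
    ("feature/checkout-v2", datetime(2024, 5, 3, 16, 0)),  # ~1 week old
    ("fix/null-invoice",    datetime(2024, 5, 9, 11, 0)),  # fresh
]
for name in stale_branches(branches, now):
    print(f"ping review request for stale branch: {name}")
```

Run on a schedule, the output feeds whatever notification channel the team already watches.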
Root Cause 4: CI Pipeline Performance
A 30-minute CI pipeline that runs on every commit creates two problems: engineers stop reproducing CI checks locally (the full suite is too slow to iterate on), and the merge-to-deploy window is dominated by pipeline time rather than useful work. If your CI pipeline exceeds 15 minutes, it's worth a quarter of dedicated platform engineering investment to bring it down.
High-impact CI optimizations in order of effort: run tests in parallel (most test suites are trivially parallelizable), implement build caching at both the dependency and compilation layer, identify and quarantine flaky tests (a 5% flaky test rate doubles effective CI time through reruns), and split the pipeline into a fast path for PRs (unit tests + lint, under 5 minutes) and a full path for main branch merges.
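The flaky-test claim follows from a simple model, worth making explicit: if each of N tests flakes independently with probability f, a full run spuriously fails with probability p = 1 - (1 - f)^N, and retry-until-green costs an expected 1 / (1 - p) full pipeline runs. The independence assumption is the sketch's, not the source's.

```python
def expected_runs(n_tests, flake_rate):
    """Expected full-pipeline runs under retry-until-green.

    Model assumption: each test flakes independently, and any spurious
    failure triggers a rerun of the entire pipeline.
    """
    p_fail = 1 - (1 - flake_rate) ** n_tests
    return 1 / (1 - p_fail)

for n in (5, 14, 30):
    print(f"{n:3d} tests at 5% flake rate -> "
          f"{expected_runs(n, 0.05):.1f}x pipeline time")
```

Under this model, a suite needs only around 14 tests at a 5% per-test flake rate before effective CI time doubles, which is why quarantining flaky tests pays off so quickly.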
Measuring CI impact on lead time
Track merge-to-deploy time separately from PR review time. If CI pipeline duration accounts for more than 40% of your merge-to-deploy time, CI optimization will move the needle. If CI is 10 minutes but deployment queue wait time is 4 hours (because deployments are batched to nightly releases), CI optimization is wasted effort — fix the deployment frequency first.
Root Cause 5: Deployment Gates
Manual deployment approval gates are the most common hidden lead time killer in enterprise engineering teams. A gate that requires a release manager sign-off turns a 2-hour merge-to-deploy into a 24-hour one — not because the sign-off takes 24 hours, but because it can only happen at scheduled release windows.
Audit every manual step in your deployment pipeline and ask: what risk is this gate preventing, and is there a way to prevent that risk without requiring a human approval? Common gates that can be automated: security scan approval (run SAST/SCA automatically and block on critical findings, not on human review), staging environment validation (automated smoke tests replace manual QA sign-off), and change advisory board approval (replace with pre-approved change types for low-risk deployments).
Not all gates can or should be removed. Production deployments to regulated environments may require human sign-off by policy. In those cases, minimize gate latency by making the approval asynchronous and mobile-friendly, setting SLAs for approvers, and ensuring the gate information (test results, diff summary, risk assessment) is surfaced in the approval request so the approver can act without context-switching.
Measuring Improvement
Lead time improvements take two to four weeks to show up in your metrics because the metric is calculated on merged PRs, not opened ones. If you implement a review SLA today, you won't see the lead time number drop until the PRs affected by that norm complete their lifecycle.
Measure leading indicators weekly: median review wait time, percentage of PRs reviewed within SLA, CI pipeline p50 and p95 duration, and the count of PRs that are more than 48 hours old without approval. These numbers respond within days to interventions and predict lead time outcomes two to three weeks in advance.
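The leading indicators above can come from one weekly snapshot of PR data. This sketch computes three of them; the timestamps are hypothetical placeholders for what your Git provider's API returns, and `None` marks an event that hasn't happened yet.

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=2)     # review SLA from the team norm
STALE = timedelta(hours=48)  # "stuck PR" threshold

# Hypothetical weekly snapshot of recent PRs.
now = datetime(2024, 5, 10, 17, 0)
prs = [
    {"opened": datetime(2024, 5, 10, 9, 0),
     "first_review": datetime(2024, 5, 10, 10, 0), "approved": True},
    {"opened": datetime(2024, 5, 7, 9, 0),
     "first_review": datetime(2024, 5, 7, 15, 0), "approved": False},
    {"opened": datetime(2024, 5, 10, 15, 0),
     "first_review": None, "approved": False},  # not yet reviewed
]

reviewed = [p for p in prs if p["first_review"]]
waits = sorted(p["first_review"] - p["opened"] for p in reviewed)
median_wait = waits[len(waits) // 2]
pct_in_sla = 100 * sum(w <= SLA for w in waits) / len(waits)
stale = sum(1 for p in prs
            if not p["approved"] and now - p["opened"] > STALE)

print(f"median review wait: {median_wait}")
print(f"reviewed within SLA: {pct_in_sla:.0f}%")
print(f"PRs >48h without approval: {stale}")
```

Posting these three numbers in the team channel every Monday is usually enough to create the accountability the review SLA needs.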
Improving lead time is a multi-quarter effort. Expect quick wins from review culture changes in weeks one through four, CI optimization wins in weeks four through twelve, and branch strategy and deployment gate changes to fully pay off in months three through six. Run one intervention at a time so you can measure which changes are actually driving improvement.