How to Calculate DORA Metrics from Azure DevOps (Complete Guide)
Azure DevOps gives you Repos, Pipelines, and Boards in one platform — but calculating Azure DevOps DORA metrics across all three is not as straightforward as it looks. This guide walks through the exact API calls, correlation logic, and common pitfalls for measuring all four DORA metrics from your Azure DevOps organization.
What this guide covers
The four DORA metrics in Azure DevOps context, exact REST API endpoints for Releases and Pipelines, commit-to-release lead time correlation, what the native Azure DevOps Analytics views cannot do, how to link Boards work items to pipeline runs, DORA benchmark tables, and the measurement mistakes that corrupt your numbers.
Why DORA Metrics Matter for Azure DevOps Teams
Azure DevOps is the only major platform that bundles source control (Repos), CI/CD (Pipelines), and work item tracking (Boards) under a single authentication boundary and a unified REST API. That architectural advantage should make DORA measurement easier than on fragmented stacks — and in theory it does. The data is all there.
In practice, DORA metrics cut across all three Azure DevOps services simultaneously. Deployment Frequency lives in Pipelines Releases. Lead Time requires correlating Repos commit timestamps with Pipeline release completion timestamps. Change Failure Rate needs you to classify failed pipeline runs and separate them from rollbacks. MTTR requires incident data that Azure DevOps Boards can proxy — but only if your team actually logs incidents as work items there.
The built-in Azure DevOps Analytics service (the one that powers the Power BI connector and the Analytics widgets in dashboards) does not do this correlation automatically. It surfaces pipeline-level metrics in one view and Boards-level metrics in another, but never joins them into the four DORA numbers that DORA research actually measures. More on that gap below. First, the metrics.
The Four DORA Metrics in Azure DevOps Context
Each of the four DORA metrics maps to a specific data source inside Azure DevOps. The mapping determines which API you query and which fields you use for your timestamps.
Deployment Frequency
Deployment Frequency measures how often your team successfully deploys to production. In Azure DevOps, the authoritative data source is the Releases API — specifically the deployments endpoint, filtered to your production environment with deploymentStatus=succeeded. Do not use build-run counts from the Builds API: builds include every CI run, not just production deployments. The distinction matters enormously for teams that run dozens of builds per day but deploy to production once.
Lead Time for Changes
Lead Time measures the elapsed time from a code commit entering the main branch to that commit being live in a successful production release. In Azure DevOps, this means correlating the committerDate from the Repos Commits API with the completedOn timestamp from the Releases API. The commit SHA is your correlation key — the release artifact contains the commit range it was built from.
Change Failure Rate
Change Failure Rate (CFR) is the percentage of production deployments that result in a degradation requiring a hotfix, rollback, or incident. In Azure DevOps Pipelines, you can approximate this by counting releases with deploymentStatus=failed as a fraction of all deployments — but this undercounts failures that result in a separate hotfix release rather than a pipeline failure. A more complete approach combines failed pipeline statuses with rollback releases identified by naming conventions.
Mean Time to Restore (MTTR)
MTTR measures how long it takes to restore service after a production incident. Azure DevOps Boards can serve as your incident log if your team creates work items for production incidents. The calculation is the delta between the incident work item's creation time (System.CreatedDate) and the next successful production release timestamp. For teams using a dedicated incident tool (PagerDuty, OpsGenie, incident.io), that tool is the more reliable MTTR source.
Step-by-Step: Calculating Each Metric from Azure DevOps APIs
All Azure DevOps REST API calls use the same base URL pattern and require a Personal Access Token (PAT) or Azure AD token with appropriate scopes. The Release Management API lives on a separate subdomain (vsrm.dev.azure.com) from the core dev.azure.com API.
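Before the endpoint walkthroughs, here is a minimal authentication sketch in Python using only the standard library. Azure DevOps accepts a PAT as the password half of HTTP Basic auth with an empty username; the helper name and placeholder values below are illustrative.

```python
import base64
import urllib.request

def ado_request(url: str, pat: str) -> urllib.request.Request:
    """Build an authenticated request for an Azure DevOps REST endpoint.

    The PAT goes in as the password half of HTTP Basic auth with an
    empty username: the header is 'Basic base64(":" + pat)'.
    """
    token = base64.b64encode(f":{pat}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    })

# Example (placeholders, not a real org/project):
req = ado_request(
    "https://vsrm.dev.azure.com/{org}/{project}/_apis/release/deployments?api-version=7.1",
    pat="YOUR_PAT",
)
```

The same helper works for both the dev.azure.com and vsrm.dev.azure.com hosts, since authentication is identical across them.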
Deployment Frequency: Releases API
Query the deployments endpoint for your release definition, filtered to your production stage and succeeded status. Replace {org}, {project}, and {id} with your values.
```
GET https://vsrm.dev.azure.com/{org}/{project}/_apis/release/deployments
  ?definitionId={id}
  &deploymentStatus=succeeded
  &minStartedTime=2026-02-01T00:00:00Z
  &maxStartedTime=2026-03-01T00:00:00Z
  &api-version=7.1

# Response fields to extract:
# deployments[].completedOn → deployment timestamp
# deployments[].id → unique deployment ID
# deployments[].releaseEnvironment.name → "Production" (verify filter)
```

Count the number of returned deployments and divide by your lookback window in days. For a 30-day window, a count of 45 gives you 1.5 deployments per day. Use a rolling 28-day window for week-over-week comparisons — calendar months introduce variance from differing month lengths.
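Turning the response into a number is short, but the environment check is worth keeping explicit. A minimal Python sketch — field names follow the response shape above, and the sample data is made up:

```python
def deployment_frequency(deployments: list[dict],
                         window_days: int = 28,
                         env: str = "Production") -> float:
    """Deployments per day, counting only the target environment.

    `deployments` is the deployments array from the Releases API,
    already filtered to deploymentStatus=succeeded.
    """
    prod = [d for d in deployments
            if d.get("releaseEnvironment", {}).get("name") == env]
    return len(prod) / window_days

# Hypothetical sample: 45 production deploys plus 10 staging deploys in 30 days
sample = ([{"releaseEnvironment": {"name": "Production"}}] * 45
          + [{"releaseEnvironment": {"name": "Staging"}}] * 10)
print(deployment_frequency(sample, window_days=30))  # → 1.5
```

The in-code environment filter is a belt-and-braces check: the API filter should already exclude non-production stages, but a misconfigured query silently inflates the count otherwise.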
Lead Time for Changes: Correlating Commits with Releases
Each Azure Pipelines release artifact contains the commit range that was built. The correlation flow is: fetch recent commits from Repos, fetch the releases that deployed in the same window, then match commits to the release that first included them.
```
# Step 1: Fetch commits merged to main
GET https://dev.azure.com/{org}/{project}/_apis/git/repositories/{repoId}/commits
  ?searchCriteria.itemVersion.versionType=branch
  &searchCriteria.itemVersion.version=main
  &searchCriteria.fromDate=2026-02-01T00:00:00Z
  &api-version=7.1

# Key fields:
# commits[].commitId → SHA
# commits[].committer.date → when the commit was pushed to main

# Step 2: Fetch release artifacts to find the commit range
GET https://vsrm.dev.azure.com/{org}/{project}/_apis/release/releases/{releaseId}
  ?api-version=7.1

# Key fields:
# artifacts[].definitionReference.sourceVersion.id → last commit SHA in build
# completedOn → release completion timestamp

# Lead time for a commit =
#   release.completedOn − commit.committer.date
```

Use the median across all matched commits — not the mean. Lead time distributions are right-skewed: a single large refactor with a slow review cycle can inflate the mean by hours while the median stays representative of typical delivery speed.
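The median calculation itself is short. A Python sketch, assuming each commit has already been matched to the release that first shipped it; timestamps are ISO 8601 UTC and the sample values are invented:

```python
from datetime import datetime
from statistics import median

def median_lead_time_hours(pairs: list[tuple[str, str]]) -> float:
    """Median lead time in hours.

    `pairs` = [(committer_date, release_completedOn), ...] as ISO 8601
    strings; the 'Z' suffix is normalized for datetime.fromisoformat.
    """
    def parse(ts: str) -> datetime:
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))

    deltas = [(parse(done) - parse(commit)).total_seconds() / 3600
              for commit, done in pairs]
    return median(deltas)

# Three typical commits and one slow refactor: the 72 h outlier drags
# the mean up to 21 h while the median stays at 4.5 h.
sample = [
    ("2026-02-10T09:00:00Z", "2026-02-10T13:00:00Z"),  # 4 h
    ("2026-02-11T09:00:00Z", "2026-02-11T12:00:00Z"),  # 3 h
    ("2026-02-12T09:00:00Z", "2026-02-12T14:00:00Z"),  # 5 h
    ("2026-02-01T09:00:00Z", "2026-02-04T09:00:00Z"),  # 72 h outlier
]
print(median_lead_time_hours(sample))  # → 4.5
```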
Change Failure Rate: Failed and Rollback Releases
CFR requires two queries: one for total successful deployments (your denominator) and one for failed or rollback deployments (your numerator).
```
# Query failed deployments
GET https://vsrm.dev.azure.com/{org}/{project}/_apis/release/deployments
  ?definitionId={id}
  &deploymentStatus=failed
  &minStartedTime=2026-02-01T00:00:00Z
  &api-version=7.1

# For rollback detection — filter releases by name convention:
GET https://vsrm.dev.azure.com/{org}/{project}/_apis/release/releases
  ?definitionId={id}
  &searchText=rollback
  &api-version=7.1

# CFR = (failed_count + rollback_count) / total_production_deployments * 100
```

The deploymentStatus=failed filter catches pipeline-level failures. It does not catch deployments that technically succeeded at the pipeline level but caused a production incident. For full CFR accuracy, you need to correlate this with your incident data — see the Boards correlation section below.
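The arithmetic in the last line, as a small Python helper. The counts below are illustrative; `total` should cover succeeded plus failed production deployments for the same window:

```python
def change_failure_rate(failed: int, rollbacks: int, total: int) -> float:
    """CFR as a percentage of production deployments.

    `failed`    — deployments with deploymentStatus=failed
    `rollbacks` — releases matched by the rollback naming convention
                  that the failed-status query missed
    `total`     — all production deployments in the window
    """
    if total == 0:
        return 0.0  # no deployments → nothing failed
    return (failed + rollbacks) / total * 100

print(change_failure_rate(failed=3, rollbacks=2, total=50))  # → 10.0
```

Guarding the zero-deployment case matters in practice: a quiet week should report 0% CFR, not raise a division error in your dashboard job.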
MTTR: Incident Work Items to Next Successful Release
If your team tracks production incidents in Azure Boards, query for incident work items and find the delta to the next successful release.
```
# Query incident work items (adjust WorkItemType to match your process)
POST https://dev.azure.com/{org}/{project}/_apis/wit/wiql?api-version=7.1
Content-Type: application/json

{
  "query": "SELECT [System.Id], [System.CreatedDate], [Microsoft.VSTS.Common.ResolvedDate]
            FROM WorkItems
            WHERE [System.WorkItemType] = 'Bug'
              AND [System.Tags] CONTAINS 'Production-Incident'
              AND [System.CreatedDate] >= '2026-02-01T00:00:00Z'"
}

# MTTR per incident =
#   next_successful_release.completedOn − work_item.System.CreatedDate
# (where next_successful_release is the first release after the incident was created)
```

The Azure DevOps DORA Gap: What Native Analytics Views Miss
Azure DevOps Analytics is genuinely useful — but it does not solve the DORA cross-tool correlation problem. Here is exactly what it can and cannot do:
| Capability | Azure Analytics | Custom API |
|---|---|---|
| Pipeline run counts and durations | Yes | Yes |
| Release deployment history | Yes | Yes |
| Work item cycle time (Boards only) | Yes | Yes |
| Commit-to-release lead time | No | Yes (with correlation logic) |
| CFR with rollback detection | No | Yes |
| MTTR from incident to next deploy | No | Yes |
| Cross-tool DORA in one view | No | Yes |
The core limitation is that Azure DevOps Analytics exposes each service area — Boards, Pipelines, Repos — through separate OData entity sets. The PipelineRuns entity does not join to the WorkItems entity. The GitCommits entity does not join to Releases. You can build individual Power BI reports per service area, but combining them into DORA numbers requires either custom DAX joins or a separate data pipeline that pulls from the REST APIs directly.
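The join that Analytics will not do for you is the heart of that custom pipeline. A minimal in-memory sketch, assuming you have already pulled main-branch commit SHAs ordered oldest first and releases ordered by completion, each carrying the head SHA it built; all names and values are hypothetical:

```python
def match_commits_to_releases(commits: list[str],
                              releases: list[tuple[str, str]]) -> dict[str, str]:
    """Assign each commit to the first release that shipped it.

    `commits`  — SHAs in main-branch order, oldest first.
    `releases` — [(head_sha, completedOn), ...] ordered by completion;
                 each release ships every not-yet-shipped commit up to
                 and including its head SHA.
    Returns {commit_sha: release_completedOn}.
    """
    shipped: dict[str, str] = {}
    idx = 0
    for head_sha, completed_on in releases:
        while idx < len(commits):
            sha = commits[idx]
            shipped[sha] = completed_on
            idx += 1
            if sha == head_sha:
                break  # later commits belong to a later release
    return shipped

commits = ["a1", "b2", "c3", "d4"]
releases = [("b2", "2026-02-10T12:00Z"), ("d4", "2026-02-11T12:00Z")]
print(match_commits_to_releases(commits, releases))
# → {'a1': '2026-02-10T12:00Z', 'b2': '2026-02-10T12:00Z',
#    'c3': '2026-02-11T12:00Z', 'd4': '2026-02-11T12:00Z'}
```

Commits after the last release's head SHA stay unmatched — correctly so, since they have not reached production yet and should not enter the lead time sample.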
How to Wire Azure DevOps DORA with the Boards API
The most complete Lead Time calculation uses work item links, not just commit timestamps. Azure DevOps supports a link type called ArtifactLink that connects a work item to a specific commit via a Git Commit link. When your team links commits to work items (either manually or via branch policy enforcement), you can trace the full chain: work item created → commit pushed → pipeline triggered → release deployed to production.
```
# Fetch a work item with its linked commits
GET https://dev.azure.com/{org}/{project}/_apis/wit/workitems/{id}
  ?$expand=relations
  &api-version=7.1

# Look for relations with rel = "ArtifactLink" and attributes.name = "Fixed in Commit"
# The url field contains: vstfs:///Git/Commit/{projectId}/{repoId}/{commitId}

# Parse the commitId from the vstfs URL, then:
#   Lead Time = release.completedOn − workItem.System.CreatedDate
# (for a more accurate "idea to production" metric than commit-to-release alone)
```

This work-item-to-release lead time is a broader "idea to production" measure than DORA's Lead Time for Changes, which the research defines as the time from code commit to that code running successfully in production. It is still worth tracking alongside the commit-based number, because the commit timestamp alone misses time spent before the code lands on main — in code review and in staged rollouts.
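Parsing the vstfs URL is mechanical; one wrinkle is that in practice the path separators are often percent-encoded in API responses, so decode first. A Python sketch — the GUID-like segments below are placeholders:

```python
from urllib.parse import unquote

def parse_vstfs_commit(url: str) -> dict[str, str]:
    """Extract project, repo, and commit IDs from an ArtifactLink URL
    of the form vstfs:///Git/Commit/{projectId}/{repoId}/{commitId}.

    Handles both plain and percent-encoded ('%2F') path separators.
    """
    prefix = "vstfs:///Git/Commit/"
    if not url.startswith(prefix):
        raise ValueError(f"not a Git commit artifact link: {url}")
    project_id, repo_id, commit_id = unquote(url[len(prefix):]).split("/")
    return {"projectId": project_id, "repoId": repo_id, "commitId": commit_id}

link = "vstfs:///Git/Commit/proj-guid%2Frepo-guid%2F9f8e7d6c"
print(parse_vstfs_commit(link)["commitId"])  # → 9f8e7d6c
```

With the commitId in hand, you can join back to the Repos commit and the release that shipped it, completing the work item → commit → release chain.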
DORA Benchmarks: How Does Your Azure DevOps Team Compare?
Once you have your four numbers, compare them against the DORA research performance tiers. These benchmarks come from the annual DORA State of DevOps report and are consistent across toolchain choices — the targets are the same whether you use Azure DevOps, GitHub, or GitLab.
| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment Frequency | Multiple/day | Daily – weekly | Weekly – monthly | < Monthly |
| Lead Time for Changes | < 1 hour | 1 day – 1 week | 1 month – 6 months | > 6 months |
| Change Failure Rate | 0–5% | 5–10% | 10–15% | > 15% |
| MTTR | < 1 hour | < 1 day | 1 day – 1 week | > 6 months |
Most Azure DevOps teams land in the Medium tier on Deployment Frequency — often because Azure Pipelines Classic Release pipelines have manual approval gates on the production stage that introduce a human latency step. Teams that move to YAML pipelines with environment approvals (rather than Classic Release gates) consistently see Lead Time drop by 30–50% because the approval workflow is integrated into the pipeline run rather than requiring a separate manual trigger.
Common Mistakes Azure DevOps Teams Make Measuring DORA
The most damaging measurement errors are the ones that make your numbers look better than they are. Here are the four most common mistakes specific to Azure DevOps setups:
Counting build runs instead of deployments
The Azure DevOps Builds API (/_apis/build/builds) returns every CI build run — including feature branch builds, PR validation builds, and failed builds that never deployed anywhere. If you count these as "deployments," your Deployment Frequency will be 10–50× inflated depending on how active your CI is. Always query the Releases API with an explicit environment filter, not the Builds API.
Not filtering by environment stage
Azure Pipelines Release definitions typically have multiple stages: Dev, QA, Staging, Production. A deployment to Dev is not a production deployment. The Releases API's deployments endpoint accepts a definitionEnvironmentId parameter — use it to filter to your production stage ID specifically. Without this filter, every release to any environment inflates your Deployment Frequency and contaminates your Lead Time distribution with pre-production deployments.
Mixing manual and automated releases
Many Azure DevOps organizations have both automated pipeline deployments and manually triggered "hotfix" or "emergency" releases created directly in the Release UI. The manually triggered ones often bypass the artifact validation steps and do not have the same commit-range metadata. Filter by deploymentStatus=succeeded AND check the release.artifacts array length to ensure it is non-empty before including a release in your Lead Time calculation — manual releases without build artifacts cannot be correlated to commits.
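A guard like the following keeps uncorrelatable manual releases out of the Lead Time sample. The field names follow the deployments response shape used earlier, with the release and its artifacts array nested under the deployment record; the sample records are made up:

```python
def correlatable(deployment: dict) -> bool:
    """True when a deployment can be tied back to commits: it succeeded
    and its release carries at least one build artifact to correlate on."""
    return (
        deployment.get("deploymentStatus") == "succeeded"
        and len(deployment.get("release", {}).get("artifacts", [])) > 0
    )

deployments = [
    {"deploymentStatus": "succeeded",
     "release": {"artifacts": [{"type": "Build"}]}},   # normal pipeline deploy
    {"deploymentStatus": "succeeded",
     "release": {"artifacts": []}},                    # manual hotfix → skip
    {"deploymentStatus": "failed",
     "release": {"artifacts": [{"type": "Build"}]}},   # failed → skip
]
print([correlatable(d) for d in deployments])  # → [True, False, False]
```

Skipped manual releases should still count toward Deployment Frequency and CFR — the exclusion applies only to the Lead Time correlation, where a missing commit range makes the data point meaningless.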
Using work item cycle time as a lead time proxy
Azure DevOps Analytics does expose work item cycle time — the time from "Active" to "Closed" state. This is often confused with DORA Lead Time. They measure different things. Work item cycle time includes time waiting in code review and merge queue, but it ends when the work item is closed — which may happen before or after the code is actually in production. DORA Lead Time ends at the production release, not at work item closure.
Key takeaway on measurement
The four DORA metrics are simple to define but require careful data pipeline engineering to measure correctly from Azure DevOps. The most common failure mode is trusting a built-in widget that appears to show what you need but is actually measuring something adjacent — build counts, work item cycle time, or total pipeline stage deployments rather than production-only deploys.
How Koalr Automates Azure DevOps DORA Metrics
Koalr connects to Azure Repos, Azure Pipelines, and Azure Boards through a single OAuth connection and handles all of the correlation logic described above automatically. You do not need to build and maintain the API polling, the commit-to-release SHA matching, or the rollback detection heuristics.
Specifically, Koalr's Azure DevOps integration does the following out of the box:
- Deployment Frequency — pulls from the Releases API, filters to your production environment definition ID automatically, deduplicates re-run stages, and tracks per-service frequency when you have multiple release definitions.
- Lead Time — correlates committerDate from Repos with completedOn from Releases using the artifact commit SHA, and falls back to work item link traversal for teams with ArtifactLink enforcement enabled.
- Change Failure Rate — combines failed deployment status with rollback release name matching and optional Boards incident tag correlation. Flags the 1-hour window after each release as the high-sensitivity CFR window.
- MTTR — integrates with Azure Boards incident work items or connects directly to PagerDuty, OpsGenie, and incident.io as your MTTR source, giving you a clean incident-to-resolution timestamp without relying on work item state transitions.
- Deploy risk scoring — adds a pre-deployment risk score before each release fires, based on commit entropy, file churn, author file expertise, and DDL migration detection. Teams that see a high-risk score on a Friday afternoon can make an informed hold decision before the pipeline reaches the production stage gate.
If your team is also evaluating whether to keep Azure DevOps Boards or migrate to Jira, the Opsgenie migration guide covers the tooling transition in detail — Koalr maintains DORA continuity through any incident tool or project tracker migration.
Get accurate Azure DevOps DORA metrics without building the pipeline yourself
Koalr connects to Azure Repos, Pipelines, and Boards in one OAuth step and calculates all four DORA metrics automatically — with correct environment filtering, commit correlation, and deploy risk scoring on every release.