Engineering Metrics for Azure DevOps Teams: Beyond Build Success Rates
Azure DevOps comes with a capable built-in analytics layer. It gives you build pass rates, pipeline durations, work item cycle time, and sprint velocity. Here is what it does not give you — and why the missing metrics are often the ones engineering leaders need most.
What Azure DevOps Analytics Actually Provides
Azure DevOps Analytics is the built-in data layer that powers the Analytics widgets in dashboards, the Power BI connector, and the OData feed. It is genuinely useful for teams that want visibility into their CI/CD pipelines and Boards workflows without standing up additional tooling.
The native analytics coverage falls into three categories aligned with the three main Azure DevOps service areas:
Pipeline & Build
- Build pass/fail rate by pipeline
- Pipeline run duration trends
- Stage-level failure analysis
- Queue time before agent pickup
- Test result pass rates (with Test Plans)
Repos
- Pull request count by repository
- Commit activity by author
- Branch policy compliance rate
- PR completion time (rough average)
- Code churn line counts
Boards
- Work item cycle time (Active → Closed)
- Sprint velocity (story points completed)
- Backlog aging and lead time by work type
- Bug resolution time
- Cumulative flow diagrams
These are solid metrics for day-to-day pipeline monitoring and sprint tracking. For engineering managers who need to understand the health of their delivery process and report on it to leadership, they are often not enough.
What Azure DevOps Analytics Does Not Provide
The gaps in Azure DevOps Analytics follow a consistent pattern: native analytics is excellent within each service area (Repos, Pipelines, Boards) but weak at cross-service joins and at higher-order engineering health metrics. Here is the full list of what is missing:
Deployment Frequency: Azure DevOps Analytics has pipeline run counts, but not production-only deployment frequency filtered by environment. Counting every build run inflates the number 10-50x.
Lead Time for Changes: Azure DevOps Analytics reports work item cycle time and pipeline duration separately. It cannot join commit timestamps from Repos to release completion timestamps from Pipelines across OData entity sets.
Change Failure Rate: Pipeline failure rates count transient infrastructure failures and flaky tests. A true change failure rate requires correlating failed deployments with incident data, which Azure DevOps Analytics does not do.
Mean Time to Recovery: Azure Boards work item state transitions are a proxy for recovery time, not a direct measure. Most teams track incidents in PagerDuty, OpsGenie, or incident.io, none of which Azure DevOps Analytics connects to.
PR review breakdown: PR completion time is available in aggregate, but time-to-first-review, review duration, and approval-to-merge lag are not broken out separately. You cannot tell whether your bottleneck is waiting for a reviewer or waiting after approval.
Deploy risk: Azure DevOps has no pre-merge risk signal. There is no native metric for change entropy, author file expertise, blast radius, or DDL migration detection on a per-PR basis.
Team health: There is no native view of reviewer load distribution, PR size trends, after-hours work patterns, or rework rate. Engineering managers cannot assess sustainable delivery pace from native analytics.
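To make the lead-time gap concrete: once commit timestamps and production release timestamps have been exported (from the Repos and Pipelines APIs, which Azure DevOps Analytics will not join for you), the join itself is a few lines. A minimal sketch in Python; the data shapes here are simplified illustrations, not the OData entity schema:

```python
from datetime import datetime
from statistics import median

def lead_times(commits, releases):
    """Join each commit to the first production release at or after it.

    commits:  list of (commit_sha, commit_datetime) tuples
    releases: list of completion datetimes for production releases only
    Returns one timedelta per commit that has reached production.
    """
    deploys = sorted(releases)
    out = []
    for _, committed in sorted(commits, key=lambda c: c[1]):
        shipped = next((d for d in deploys if d >= committed), None)
        if shipped is not None:
            out.append(shipped - committed)
    return out

# Illustrative data: two commits, one weekly production release.
commits = [("a1b2", datetime(2024, 5, 1, 9, 0)), ("c3d4", datetime(2024, 5, 6, 14, 0))]
releases = [datetime(2024, 5, 7, 18, 0)]
print(median(lead_times(commits, releases)))  # median Lead Time for Changes
```

The point of the sketch is the join key: lead time is measured per commit against the next production release, not averaged from unrelated cycle-time and pipeline-duration aggregates.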
Why Build Success Rate Is Not an Engineering Health Metric
Build success rate is the Azure DevOps metric most commonly presented to engineering leadership. It is easy to visualize, easy to understand, and easy to improve — and it is a poor proxy for engineering delivery health.
A team with a 95% build success rate might be shipping to production once a month, accumulating large PR batches that each take 4 days to review, and spending 40% of incident response time on deployments that failed due to preventable risk patterns. The build success rate tells you none of this.
Build success rate measures pipeline reliability, not delivery performance. It answers "are our CI scripts working?" not "are we delivering software effectively?" The DORA metrics answer the second question, which is why engineering leaders who want to understand and improve delivery performance need to go beyond native Azure DevOps Analytics.
PR Review Time: The Hidden Delivery Bottleneck
For most engineering teams, the biggest opportunity to reduce Lead Time for Changes is in code review. Time-to-first-review — how long a PR sits before anyone looks at it — is typically the largest single contributor to lead time, often accounting for 40-60% of the total time from commit to production.
Azure DevOps shows PR completion time as a rough average in the pull requests analytics view. It does not break this down into:
- Time to first review (how long PRs wait for initial attention)
- Active review duration (how long reviewers spend in the PR)
- Time from approval to merge (how long approved PRs wait in the queue)
- Reviewer load distribution (which team members are review bottlenecks)
Without this breakdown, engineering managers cannot diagnose where the bottleneck is. A team with a 3-day average PR time might have a 2.5-day wait for first review (the fix: better reviewer rotation and notification policies) or a 30-minute review with a 2.5-day wait after approval (the fix: merge queue automation or clearer policies on who can merge).
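As an illustration of the first item in the breakdown above: time to first review is simple to compute once you have each PR's creation time and the timestamps of review activity from non-authors (both are retrievable from the Azure DevOps REST API; the data shape below is a simplified assumption, not the API response format):

```python
from datetime import datetime

def time_to_first_review(created_at, review_events):
    """Wait between PR creation and the first human review event.

    created_at:    datetime the PR was opened
    review_events: datetimes of comments or votes from people other than the author
    Returns a timedelta, or None if the PR has not been reviewed yet.
    """
    reviews = [t for t in review_events if t >= created_at]
    return min(reviews) - created_at if reviews else None

# Illustrative PR: opened Monday 09:00, first reviewer comment Wednesday 11:30.
opened = datetime(2024, 5, 6, 9, 0)
events = [datetime(2024, 5, 8, 11, 30), datetime(2024, 5, 8, 15, 0)]
print(time_to_first_review(opened, events))  # 2 days, 2:30:00
```

Computing this per PR, then aggregating per team, is what separates "our PRs take 3 days" from "our PRs wait 2.5 days for a first look."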
Deployment Risk: The Metric Azure DevOps Cannot Calculate
Deployment risk — a pre-merge signal predicting the probability that a given PR will cause a production incident — is not available in any native Azure DevOps analytics view, and it cannot be approximated from the metrics that are available.
Deploy risk requires analyzing signals from the PR itself: change entropy (how scattered is the diff across files and modules), author file expertise (has this author touched these files before), blast radius (how many downstream services depend on the changed code), and structural indicators like DDL migrations, API surface changes, and dependency updates.
Azure DevOps does not surface any of these signals natively. Build pass/fail tells you if the CI checks passed — not whether the change that passed CI has a high probability of causing a production incident. Teams that want predictive deploy risk need a layer on top of Azure DevOps that understands the content and history of each PR.
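To make "change entropy" concrete: one common formulation is the Shannon entropy of the diff's changed lines across files, so a change concentrated in one file scores near zero and a change scattered evenly across many files scores high. A minimal sketch of that formulation (one reasonable choice for illustration, not necessarily the exact formula any particular tool uses):

```python
import math

def change_entropy(lines_changed_per_file):
    """Shannon entropy (in bits) of how a diff's changed lines spread across files.

    lines_changed_per_file: dict mapping file path -> lines changed in that file
    Returns 0.0 when all changes are in one file; log2(n) when spread evenly
    across n files.
    """
    total = sum(lines_changed_per_file.values())
    if total == 0:
        return 0.0
    entropy = 0.0
    for n in lines_changed_per_file.values():
        if n > 0:
            p = n / total
            entropy -= p * math.log2(p)
    return entropy

print(change_entropy({"auth.py": 120}))  # 0.0 -- a focused change
print(change_entropy({"a.py": 10, "b.py": 10, "c.py": 10, "d.py": 10}))  # 2.0 -- scattered
```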
Team Health Metrics Azure DevOps Does Not Track
Engineering team health — sustainable pace, workload distribution, rework rate, and focus time — is not measurable from Azure DevOps Analytics. The native data exists (commit timestamps, PR author data, work item assignment history) but it is not aggregated into health indicators.
Specifically, engineering managers cannot answer these questions from native Azure DevOps Analytics without building custom reports:
- Which engineers are reviewing significantly more PRs than they are authoring?
- What percentage of our merged code required rework within 24 hours (a proxy for rushed reviews or insufficient testing)?
- Are after-hours commits increasing over time — a leading indicator of team burnout?
- How is PR size trending? Large PRs correlate with slower review times and higher change failure rates.
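The after-hours signal in the list above can be sketched from commit timestamps alone. A minimal illustration, assuming timestamps are already in the team's local timezone and defining "after hours" as outside 08:00-18:00 on weekdays (both thresholds are assumptions to tune per team):

```python
from datetime import datetime

def after_hours_share(commit_times, start_hour=8, end_hour=18):
    """Fraction of commits made outside weekday working hours.

    commit_times: datetimes in the team's local timezone (assumed; convert
    before calling). Weekend commits always count as after-hours.
    """
    if not commit_times:
        return 0.0

    def is_after_hours(t):
        return t.weekday() >= 5 or not (start_hour <= t.hour < end_hour)

    flagged = sum(1 for t in commit_times if is_after_hours(t))
    return flagged / len(commit_times)

commits = [
    datetime(2024, 5, 6, 10, 15),  # Monday morning: working hours
    datetime(2024, 5, 6, 22, 40),  # Monday night: after hours
    datetime(2024, 5, 11, 14, 0),  # Saturday: after hours
    datetime(2024, 5, 7, 17, 59),  # Tuesday afternoon: working hours
]
print(after_hours_share(commits))  # 0.5
```

A single week's ratio is noise; the leading indicator is the trend of this share over months, which is exactly the aggregation native analytics does not do.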
Koalr as the Missing Metrics Layer for Azure DevOps
Koalr connects to Azure Repos, Azure Pipelines, and Azure Boards through a single OAuth connection and adds the metrics layer that Azure DevOps Analytics does not provide. The integration backfills 90 days of history on first connection, so you have trend data immediately rather than starting from zero.
What Koalr adds on top of your existing Azure DevOps data:
- Accurate DORA metrics — Deployment Frequency from production releases only, Lead Time from commit to production release, Change Failure Rate with rollback detection, and MTTR from your incident tool of choice.
- PR review time breakdown — Time to first review, review duration, and approval-to-merge lag tracked per team and per reviewer. Identifies bottlenecks specific enough to act on.
- Deploy risk scoring — A 0-100 pre-merge risk score on every PR based on change entropy, file churn, author expertise, and structural signals. Posted as a pipeline check so it appears inline in the PR without a dashboard visit.
- Team health indicators — Reviewer load distribution, rework rate, PR size trends, and after-hours work patterns tracked over time with benchmark comparisons.
For a detailed breakdown of how to calculate each DORA metric from Azure DevOps APIs, see the complete Azure DevOps DORA metrics setup guide. For teams evaluating alternative analytics tools, the Azure DevOps analytics alternatives comparison covers five options in detail.
Get the engineering metrics Azure DevOps does not provide
DORA metrics, PR review time breakdown, deploy risk scoring, and team health indicators — all connected to your existing Azure DevOps organization. Free trial, no credit card, 90-day history backfilled on day one.