How to Calculate DORA Metrics from GitLab CI/CD (Complete Guide)
GitLab bundles source control, CI/CD pipelines, and issue tracking in a single platform — which should make GitLab DORA metrics trivially easy to compute. In practice, the built-in DORA report has significant gaps, and the correct API correlation logic is less obvious than it appears. This guide covers the exact endpoints, timestamp fields, and common pitfalls for measuring all four DORA metrics from GitLab.
What this guide covers
The four DORA metrics mapped to GitLab data sources, exact REST and GraphQL API calls for each metric, what the GitLab Ultimate DORA report cannot do, how to wire GitLab Issues for full lead time traceability, DORA benchmark tables, and the four measurement mistakes that are most common on GitLab setups.
Why GitLab Is Uniquely Positioned for DORA — and Where It Still Falls Short
GitLab is the only major DevOps platform that ships source control, CI/CD pipelines, container registry, security scanning, and issue tracking as a single product under one authentication boundary. Every piece of DORA-relevant data — commits, merge requests, pipeline runs, environment deployments, and issues — is accessible through a unified API. In theory, that means you can compute all four DORA metrics with a handful of REST calls against a single host.
GitLab recognized this advantage and built a native DORA metrics dashboard. But the native report has a significant constraint: it is available only on GitLab Ultimate, which starts at $99 per user per month. Teams on GitLab Premium ($29/user/mo) or GitLab Free have no built-in DORA view at all. And even on Ultimate, the native DORA report has gaps: it does not ingest MTTR from PagerDuty, Opsgenie, or incident.io; it does not correlate code coverage drops from Codecov with change failure rate; and it does not break down metrics by team or sub-group in a way that maps cleanly to how most organizations structure their engineering teams.
The result is that most GitLab teams — on any tier — end up needing to query the APIs directly or use a third-party tool to get accurate DORA numbers. This guide shows you how to do the former. If you want to skip the data pipeline work entirely, the last section covers how Koalr handles it automatically.
The Four DORA Metrics Mapped to GitLab Data Sources
Each of the four DORA metrics maps to a specific GitLab API resource. The mapping determines which endpoint you query and which fields you use as your timestamps.
Deployment Frequency
Deployment Frequency counts how often your team successfully deploys to production. GitLab exposes this through the Deployments API. The correct endpoint filters by environment name (production by convention, but your environment name may differ) and deployment status (success). Count the returned deployments over your measurement window and divide by the number of days or weeks.
Do not count pipeline runs — CI pipelines run on every push, every merge request, and every scheduled trigger. Using pipeline counts as a proxy for Deployment Frequency will inflate your number by an order of magnitude or more depending on your branching strategy.
Lead Time for Changes
Lead Time measures the elapsed time from a code change entering the main branch to that change being live in a successful production deployment. In GitLab, this requires correlating the merge request merged_at timestamp (when the MR was merged to your default branch) with the deployment created_at timestamp for the first production deployment that included the same commit. The commit SHA is your correlation key.
Note that merged_at is the correct start timestamp — not the commit author date, not the MR created date, and not the pipeline start time. DORA Lead Time specifically measures the time from when a change is ready to ship (merged) to when it is actually in production.
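The SHA-based correlation described above can be sketched in a few lines of Python. This is a minimal illustration, not GitLab client code: the input dicts mirror the API fields named in this section (merge_commit_sha, merged_at, sha, created_at), and the function names are our own.

```python
from datetime import datetime
from statistics import median

def parse_ts(ts: str) -> datetime:
    # GitLab returns ISO 8601 timestamps like "2026-02-03T10:00:00Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def lead_times_hours(merge_requests, deployments):
    """Join each MR to the first successful production deployment
    containing its merge commit, keyed on the commit SHA."""
    # Earliest successful deployment per SHA (ISO strings sort chronologically)
    first_deploy = {}
    for d in sorted(deployments, key=lambda d: d["created_at"]):
        first_deploy.setdefault(d["sha"], d)
    results = []
    for mr in merge_requests:
        deploy = first_deploy.get(mr["merge_commit_sha"])
        if deploy is None:
            continue  # merged but not yet deployed; excluded from the window
        delta = parse_ts(deploy["created_at"]) - parse_ts(mr["merged_at"])
        results.append(delta.total_seconds() / 3600)
    return results

mrs = [{"merge_commit_sha": "abc123", "merged_at": "2026-02-03T10:00:00Z"}]
deploys = [{"sha": "abc123", "created_at": "2026-02-03T14:00:00Z", "status": "success"}]
print(median(lead_times_hours(mrs, deploys)))  # → 4.0 (hours)
```

Use the median rather than the mean: lead time distributions are right-skewed, and one stalled MR should not dominate your number.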
Change Failure Rate
Change Failure Rate (CFR) is the percentage of production deployments that result in a degradation requiring remediation. GitLab's Deployments API exposes a status=failed filter, which gives you pipeline-level failures. For a more complete CFR calculation, you should also correlate with revert merge requests — identified by the title containing "Revert" or by GitLab's built-in revert tracking via the merge_request.reverts_merge_request_iid field.
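Putting those counts together, the CFR arithmetic looks like this. A minimal sketch: the function name and the example counts are illustrative, not from a GitLab client library.

```python
def change_failure_rate(failed_deploys: int, deployed_reverts: int,
                        successful_deploys: int) -> float:
    """CFR as described above: failed deployments plus merged revert MRs
    that reached production, over total successful production deployments."""
    if successful_deploys == 0:
        return 0.0  # no deployments in the window, nothing to fail
    return (failed_deploys + deployed_reverts) / successful_deploys * 100

# Example: 3 failed deploys + 2 deployed reverts against 45 successful deploys
print(round(change_failure_rate(3, 2, 45), 1))  # → 11.1 (%)
```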
Mean Time to Restore (MTTR)
MTTR measures how long it takes to restore service after a production incident. In GitLab, the most accurate approach is to calculate the delta between a failed deployment's created_at timestamp and the created_at timestamp of the next successful deployment to the same environment. For teams that use PagerDuty, Opsgenie, or incident.io as their incident management tool, using the incident's triggered and resolved timestamps gives more accurate MTTR than relying on deployment timestamps alone.
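The deployment-timestamp approach can be sketched as follows. It assumes a list of deployment dicts carrying the created_at and status fields from the Deployments API; the function name is illustrative.

```python
from datetime import datetime
from statistics import mean

def parse_ts(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def mttr_hours(deployments):
    """Mean time from each failed production deployment to the next
    successful deployment to the same environment."""
    deployments = sorted(deployments, key=lambda d: d["created_at"])
    restore_times = []
    for i, d in enumerate(deployments):
        if d["status"] != "failed":
            continue
        # first successful deployment after this failure
        nxt = next((s for s in deployments[i + 1:] if s["status"] == "success"), None)
        if nxt:
            delta = parse_ts(nxt["created_at"]) - parse_ts(d["created_at"])
            restore_times.append(delta.total_seconds() / 3600)
    return mean(restore_times) if restore_times else None

deploys = [
    {"created_at": "2026-02-03T10:00:00Z", "status": "failed"},
    {"created_at": "2026-02-03T12:30:00Z", "status": "success"},
]
print(mttr_hours(deploys))  # → 2.5 (hours)
```

If you pull triggered/resolved timestamps from PagerDuty, Opsgenie, or incident.io instead, the same averaging applies; only the timestamp source changes.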
Step-by-Step API Examples
All GitLab REST API calls use a personal access token or OAuth token passed in the PRIVATE-TOKEN header. Replace 42 with your actual project ID and glpat-xxxx with a token that has read_api scope.
Deployment Frequency: Deployments Endpoint
Query the deployments endpoint for your production environment, filtering to successful deployments within your measurement window. Paginate with per_page=100 and follow the X-Next-Page header until it is empty.
curl "https://gitlab.example.com/api/v4/projects/42/deployments\
?environment=production\
&status=success\
&updated_after=2026-02-01T00:00:00Z\
&updated_before=2026-03-01T00:00:00Z\
&per_page=100" \
-H "PRIVATE-TOKEN: glpat-xxxx"
# Response fields to extract:
# [].id → unique deployment ID
# [].created_at → deployment timestamp (use this, not finished_at)
# [].environment.name → verify this is "production"
# [].status → "success"
# Deployment Frequency = count(results) / days_in_window
# Example: 45 deployments / 28 days = 1.6 deploys/day

Use a rolling 28-day window rather than calendar months to keep comparisons consistent — February has fewer days than March and introduces artificial variance in your week-over-week trend lines.
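The pagination loop and the frequency arithmetic can be sketched in Python with only the standard library. The host, project ID, and token are the same placeholders as in the curl example; this is an illustrative sketch, not an official client.

```python
import json
import urllib.parse
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder host
PROJECT_ID = 42                               # placeholder project ID
TOKEN = "glpat-xxxx"                          # placeholder token

def fetch_all_deployments(params: dict) -> list:
    """Follow the X-Next-Page header until it is empty, as described above."""
    results, page = [], "1"
    while page:
        query = urllib.parse.urlencode({**params, "per_page": 100, "page": page})
        req = urllib.request.Request(
            f"{GITLAB}/projects/{PROJECT_ID}/deployments?{query}",
            headers={"PRIVATE-TOKEN": TOKEN},
        )
        with urllib.request.urlopen(req) as resp:
            results.extend(json.load(resp))
            page = resp.headers.get("X-Next-Page", "")  # empty on the last page
    return results

def deployment_frequency(deploy_count: int, days_in_window: int) -> float:
    return round(deploy_count / days_in_window, 1)

# 45 successful production deployments over a rolling 28-day window
print(deployment_frequency(45, 28))  # → 1.6 (deploys/day)
```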
Lead Time: MR-to-Deploy Traceability via GraphQL
The REST API requires two separate calls and manual SHA matching to correlate merge requests with deployments. GitLab's GraphQL API makes this significantly easier — you can fetch both the MR mergedAt and the deployment's createdAt in a single query.
# GraphQL query: fetch recent MRs merged to main with their deployment info
POST https://gitlab.example.com/api/graphql
Authorization: Bearer glpat-xxxx
Content-Type: application/json
# JSON body — the GraphQL query below goes in the "query" field (with
# newlines escaped), alongside:
# "variables": { "project": "my-group/my-project" }
query($project: ID!, $after: String) {
  project(fullPath: $project) {
    mergeRequests(
      targetBranches: ["main"]
      state: merged
      mergedAfter: "2026-02-01T00:00:00Z"
      first: 50
      after: $after
    ) {
      pageInfo { endCursor hasNextPage }
      nodes {
        iid
        title
        mergedAt
        mergeCommitSha
        headPipeline {
          deployments(environmentName: "production", status: SUCCESS, first: 1) {
            nodes {
              createdAt
              status
            }
          }
        }
      }
    }
  }
}
# Lead time per MR =
# deployments.nodes[0].createdAt − mergedAt
# Use median across all MRs (distribution is right-skewed)

Change Failure Rate: Failed Deployments and Revert MRs
CFR requires a numerator (failed or problematic deployments) and a denominator (total production deployments). Query both and combine.
# Step 1: Total successful production deployments (denominator)
curl "https://gitlab.example.com/api/v4/projects/42/deployments\
?environment=production&status=success\
&updated_after=2026-02-01T00:00:00Z&per_page=100" \
-H "PRIVATE-TOKEN: glpat-xxxx"
# Step 2: Failed production deployments
curl "https://gitlab.example.com/api/v4/projects/42/deployments\
?environment=production&status=failed\
&updated_after=2026-02-01T00:00:00Z&per_page=100" \
-H "PRIVATE-TOKEN: glpat-xxxx"
# Step 3: Revert MRs (merged reverts that reached production)
curl "https://gitlab.example.com/api/v4/projects/42/merge_requests\
?state=merged&target_branch=main\
&search=Revert&in=title\
&created_after=2026-02-01T00:00:00Z&per_page=100" \
-H "PRIVATE-TOKEN: glpat-xxxx"
# CFR = (failed_deployments + revert_mrs_deployed_to_production)
# / total_successful_deployments * 100

The GitLab Ultimate DORA Gap
GitLab's built-in DORA dashboard — available only on the Ultimate tier at $99 per user per month — gives you a starting point, but it has four meaningful gaps that matter for production use:
- No external incident data. GitLab's MTTR calculation is based entirely on deployment timestamps. It does not ingest alert data from PagerDuty, Opsgenie, or incident.io. If your on-call workflow happens outside GitLab, your MTTR number will be wrong — it will show the time between a failed deploy and the next successful deploy, not the actual time your team spent restoring service.
- No cross-signal CFR correlation. GitLab does not correlate code coverage drops (from Codecov or GitLab's own coverage reports) with change failure rate. A deployment that caused a production incident but technically succeeded at the pipeline level will not be classified as a failure unless you manually flag it.
- No per-team breakdown without GitLab Groups. The native DORA dashboard aggregates at the project or group level. To get per-team DORA metrics, your organization must be structured with GitLab sub-groups that map to your engineering team boundaries — a requirement that many organizations cannot or do not want to enforce purely for metrics purposes.
- Premium and Free tiers get nothing. Teams on GitLab Premium ($29/user/mo) or the free tier have no native DORA dashboard at all. The DORA metrics API endpoints themselves are also locked to Ultimate, meaning you cannot even query /api/v4/projects/:id/dora/metrics without an Ultimate license.
The deployment-centric API approach described in the previous sections works on all GitLab tiers — it does not depend on the DORA-specific API endpoints. You are querying the general-purpose Deployments and MergeRequests APIs, which are available on all plans.
Wiring GitLab Issues for Full Lead Time Traceability
The GraphQL query above gives you MR-to-deploy lead time — the time from when code was merged to when it was in production. For a more complete picture of lead time that includes the time spent in planning, design, and code review, you can trace the full lifecycle from issue creation through deployment using GitLab's resource state events API.
# Step 1: Fetch state events for the issue (opened/closed transitions; for
# board-label moves like "In Progress", use the resource_label_events endpoint)
curl "https://gitlab.example.com/api/v4/projects/42/issues/:iid/resource_state_events" \
  -H "PRIVATE-TOKEN: glpat-xxxx"
# Response fields:
# [].state → "opened", "closed", etc.
# [].created_at → when the state change happened
# [].user.username → who triggered the state change
# Step 2: Find the MR linked to this issue
curl "https://gitlab.example.com/api/v4/projects/42/issues/:iid/related_merge_requests" \
  -H "PRIVATE-TOKEN: glpat-xxxx"
# Step 3: Get the MR's mergedAt and the deployment createdAt
# (use the GraphQL query from the Lead Time section above)
# Full lead time per issue:
# first_deployment.created_at − issue.created_at
# (measures "idea to production" — the complete DORA Lead Time definition)
# MR-only lead time (narrower, but more consistent):
# first_deployment.created_at − merge_request.merged_at

Using issue creation time as your lead time start gives you the most complete picture — it captures planning and review overhead that commit-based measurements miss. The tradeoff is that it requires clean issue-to-MR linkage in your workflow. If your team does not consistently link MRs to issues, the MR merged_at timestamp is the more reliable and consistent lead time start point.
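The full-lifecycle calculation is a straightforward timestamp delta. A minimal sketch, with illustrative function names:

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def full_lead_time_days(issue_created_at: str, deploy_created_at: str) -> float:
    """'Idea to production': issue creation to the first successful
    production deployment that shipped the linked MR."""
    delta = parse_ts(deploy_created_at) - parse_ts(issue_created_at)
    return round(delta.total_seconds() / 86400, 1)

# Issue opened Feb 1, change live in production Feb 6 at noon
print(full_lead_time_days("2026-02-01T00:00:00Z", "2026-02-06T12:00:00Z"))  # → 5.5 (days)
```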
DORA Benchmarks: How Does Your GitLab Team Compare?
Once you have your four metrics, compare them against the DORA research performance tiers. These benchmarks are from the annual DORA State of DevOps report and apply regardless of which toolchain you use — the targets are identical whether you are on GitLab, GitHub, or Azure DevOps.
| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment Frequency | On-demand (multiple/day) | Daily | Weekly | Monthly |
| Lead Time for Changes | < 1 hour | < 1 day | 1–7 days | 1–6 months |
| Change Failure Rate | 0–5% | 5–10% | 10–15% | 15–45% |
| MTTR | < 1 hour | < 1 day | 1–7 days | 1–6 months |
Most GitLab teams land in the Medium tier on Deployment Frequency. A common contributor is the use of manual approval jobs in .gitlab-ci.yml — the when: manual directive pauses the pipeline at the production deployment stage and requires a human click to proceed. Teams that replace manual approval gates with automated quality gates (coverage thresholds, static analysis pass/fail) and use GitLab Environments with protected branch rules typically see Deployment Frequency double within one quarter.
Common GitLab DORA Measurement Mistakes
These are the four most frequent errors GitLab teams make when measuring DORA metrics — each one produces numbers that look plausible but measure the wrong thing.
Counting pipeline runs instead of environment deployments
GitLab pipelines run on every push to every branch, every merge request update, and every scheduled trigger. The /api/v4/projects/:id/pipelines endpoint returns all of them. If you count pipeline runs as "deployments," your Deployment Frequency will be wildly inflated — potentially 50–100× higher than your actual production deployment rate on an active repository. Always query the /api/v4/projects/:id/deployments endpoint with an explicit environment=production filter.
Using finished_at instead of created_at for deployment timestamps
The GitLab Deployments API returns both created_at and finished_at fields. finished_at is when the deployment job finished running. created_at is when the deployment was triggered and the artifact was promoted to the environment — which is the correct timestamp for DORA Lead Time calculations. Using finished_at adds the duration of your deployment job (sometimes 5–20 minutes for container builds) to every lead time measurement, inflating your median lead time and making your team look slower than it actually is.
Not filtering by environment — mixing staging deploys with production
Many GitLab CI configurations deploy to multiple environments: staging, pre-production, and production. If you query the Deployments API without specifying environment=production, you will include staging and pre-production deployments in your count. This inflates Deployment Frequency, shortens Lead Time (because staging gets deployments hours before production), and contaminates your CFR with non-production failures. Always pass an explicit environment name filter.
GitLab Free plan does not expose full deployment history via API
GitLab imposes API rate limits and data retention differences by tier. On GitLab Free (self-managed or GitLab.com), the Deployments API will return results, but older deployments may be missing if your instance's housekeeping or retention settings prune historical environment data. If you are self-hosting on GitLab Community Edition, verify that deployment records are retained for a window large enough to cover your measurement lookback before you invest in building the data pipeline.
Key takeaway on GitLab DORA measurement
GitLab has all the data you need for accurate DORA metrics — the challenge is using the right endpoints with the right filters and timestamp fields. The most common failure mode is relying on pipeline-level data (runs, jobs, stages) when you need deployment-level data (environment promotions). Get the endpoint and timestamp choice right first, then layer in the cross-signal correlation for CFR and MTTR.
How Koalr Automates DORA Metrics for GitLab Teams
Koalr connects to GitLab through a single OAuth connection that reads Repositories, Pipelines, Environments, and Issues. It handles all of the correlation logic described in this guide automatically — no data pipeline to build or maintain.
Specifically, Koalr's GitLab integration provides the following out of the box:
- Deployment Frequency — queries the Deployments API filtered to your production environment, deduplicates re-triggered jobs, and tracks per-project frequency trends with week-over-week comparison.
- Lead Time — correlates MR merged_at timestamps with deployment created_at timestamps via commit SHA matching, with optional fallback to issue-to-deploy traceability for teams that maintain clean issue linkage.
- Change Failure Rate — combines failed deployment status with revert MR detection and optional PagerDuty or Opsgenie alert correlation so that production incidents that slip through as "successful" pipeline runs are still captured in your CFR.
- MTTR — integrates with PagerDuty, Opsgenie, and incident.io as MTTR sources when your incident workflow lives outside GitLab, giving you accurate restoration timestamps that reflect actual on-call response times rather than deployment-to-deployment gaps.
- Deploy risk scoring — adds a 32-signal pre-deployment risk score before each MR merges, based on commit entropy, file churn, author file expertise, DDL migration detection, and coverage delta. Teams that catch high-risk MRs before they merge to main see measurable improvements in CFR within the first month.
Koalr works on GitLab Free, Premium, and Ultimate — it does not depend on the Ultimate-only DORA API endpoints, so teams on any GitLab tier get the same full DORA dashboard.
Get accurate GitLab DORA metrics without building the pipeline yourself
Koalr connects to GitLab in one OAuth step and calculates all four DORA metrics automatically — with correct environment filtering, MR-to-deploy correlation, and deploy risk scoring on every merge request. Works on all GitLab tiers.