How to Track Cursor AI Coding Adoption Using the Cursor API
Cursor has a public API for Enterprise teams. Most engineering leaders have Cursor licenses but no idea whether engineers are actually using the tool — or getting any value from it. This post walks through every Cursor API endpoint, how auth works, what data is available, and how Koalr uses it to surface real adoption metrics alongside your DORA and flow data.
Enterprise plan required
The Cursor API described in this post is only available on Cursor's Enterprise plan. Business plan teams get some usage data in the Cursor dashboard, but the programmatic API — including per-user events, AI code attribution, and DAU time series — requires Enterprise. If you are on Business, this post is still worth reading to understand what data you would gain by moving to Enterprise.
The adoption visibility problem
The typical state of Cursor adoption in a 50-200 person engineering organization goes like this: leadership purchases seats, rolls out Cursor to the team, sees a spike in initial activations, and then has essentially no visibility into what happens next. The Cursor dashboard shows a high-level seat count. There might be a periodic all-hands slide about AI tool adoption. But nobody can answer questions like: which teams are actually using it daily? Which engineers have stopped? Are the engineers with the highest Cursor usage shipping faster? Did the Q1 rollout actually change behavior, or did people activate once and go back to their previous workflow?
These are not edge-case questions — they are the questions you need to answer to justify the license spend, to know where to invest in training, and to understand whether the AI tooling is actually moving the metrics that matter. The Cursor API makes answering them possible. It exposes daily active user data, per-request event logs, model usage breakdowns, spend data, and — critically — per-commit attribution of AI-generated code that can be linked directly to your GitHub pull request data.
This post covers the full API surface and how to build real adoption tracking from it.
What the Cursor API actually provides
Authentication
The Cursor API uses HTTP Basic Auth. The credential format is team_id:api_key — your Team ID colon your API key, Base64-encoded and sent in the Authorization header. You generate the API key in Cursor's Team Settings under the API section. The Team ID is visible in the Settings page URL (it is the UUID segment in the path).
Authorization: Basic <base64(team_id:api_key)>
# Example with curl:
curl https://api.cursor.com/teams/members \
  -H "Authorization: Basic $(echo -n 'YOUR_TEAM_ID:YOUR_API_KEY' | base64)"

All requests go to the base URL https://api.cursor.com. Responses are JSON. The API does not use pagination tokens for most endpoints — results are scoped by date range parameters. Rate limits run 20–100 requests per minute depending on the endpoint, with event-stream endpoints on the lower end and analytics summary endpoints on the higher end.
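To make the auth scheme concrete, here is a minimal Python equivalent of the curl call. The helper names are ours; the sketch assumes only the Basic auth format and base URL described above, and uses the standard library so it runs without dependencies.

```python
import base64
import json
import urllib.request

TEAM_ID = "YOUR_TEAM_ID"   # the UUID segment from your Settings page URL
API_KEY = "YOUR_API_KEY"   # generated under Team Settings -> API

def basic_auth_header(team_id: str, api_key: str) -> str:
    """Build the Basic auth header value from team_id:api_key."""
    token = base64.b64encode(f"{team_id}:{api_key}".encode()).decode()
    return f"Basic {token}"

def cursor_get(path: str) -> dict:
    """GET a Cursor API endpoint and decode the JSON response."""
    req = urllib.request.Request(
        f"https://api.cursor.com{path}",
        headers={"Authorization": basic_auth_header(TEAM_ID, API_KEY)},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# members = cursor_get("/teams/members")
# print(members["total"])
```

The same `cursor_get` helper works for every endpoint in this post, since they all share the base URL and auth scheme.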
Admin API endpoints
The Admin API provides team management and raw usage data. These endpoints are designed for operational use — syncing members, checking spend limits, and pulling raw event logs for analysis.
GET /teams/members — Returns the full list of team members with their email addresses and join dates. This is the ground truth for your seat roster. Koalr syncs this nightly and cross-references it with your GitHub user list to match Cursor users to GitHub contributors.
// GET /teams/members response shape
{
  "members": [
    {
      "id": "user_abc123",
      "email": "alice@yourcompany.com",
      "joinedAt": "2025-11-01T09:14:22Z",
      "role": "member"
    }
  ],
  "total": 48
}

GET /teams/daily-usage-data — Daily active users, total requests per day, and a model breakdown showing which AI models the team is using. Accepts startDate and endDate query parameters in ISO 8601 format. This is the primary endpoint for DAU trend data.
// GET /teams/daily-usage-data?startDate=2026-02-01&endDate=2026-03-01
{
  "days": [
    {
      "date": "2026-02-14",
      "activeUsers": 31,
      "totalRequests": 4820,
      "modelBreakdown": {
        "claude-3-5-sonnet": 2140,
        "gpt-4o": 1890,
        "gemini-2.0-flash": 790
      }
    }
  ]
}

GET /teams/filtered-usage-events — Per-request event log with user, model, timestamp, and request type. The type field distinguishes between autocomplete, chat, and composer — three materially different usage patterns with different implications for productivity impact. Autocomplete is passive (triggered as you type), chat is conversational (deliberate query), and composer is agentic (multi-file edit mode). This endpoint is rate-limited more conservatively since it returns high-volume row-level data.
// GET /teams/filtered-usage-events?userId=user_abc123&startDate=2026-03-01
{
  "events": [
    {
      "id": "evt_xyz789",
      "userId": "user_abc123",
      "email": "alice@yourcompany.com",
      "type": "autocomplete",
      "model": "claude-3-5-sonnet",
      "timestamp": "2026-03-10T14:23:11Z",
      "accepted": true,
      "linesAccepted": 4
    },
    {
      "id": "evt_xyz790",
      "userId": "user_abc123",
      "email": "alice@yourcompany.com",
      "type": "chat",
      "model": "gpt-4o",
      "timestamp": "2026-03-10T14:31:05Z"
    }
  ]
}

GET /teams/user-spend-limit — Returns the per-user compute spend caps configured for your team. This endpoint is less useful for adoption tracking and more relevant for cost governance — but the per-user spend cap configuration is a useful operational tool for teams that want to prevent outlier usage from driving up costs during rollout.
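As a sketch of what the raw event log supports: autocomplete events carry an accepted flag, so a per-user acceptance rate falls out of a simple aggregation. The helper below is ours, not part of the API, and assumes only the filtered-usage-events response shape shown above.

```python
from collections import defaultdict

def acceptance_rate_by_user(events: list[dict]) -> dict[str, float]:
    """Share of autocomplete suggestions accepted, per user.

    `events` follows the /teams/filtered-usage-events shape above.
    Chat and composer events carry no `accepted` flag, so they are skipped.
    """
    shown = defaultdict(int)
    accepted = defaultdict(int)
    for e in events:
        if e.get("type") != "autocomplete":
            continue
        shown[e["userId"]] += 1
        if e.get("accepted"):
            accepted[e["userId"]] += 1
    return {u: accepted[u] / shown[u] for u in shown}

# Illustrative sample in the documented shape:
events = [
    {"userId": "user_abc123", "type": "autocomplete", "accepted": True},
    {"userId": "user_abc123", "type": "autocomplete", "accepted": False},
    {"userId": "user_abc123", "type": "chat"},
]
# acceptance_rate_by_user(events) -> {"user_abc123": 0.5}
```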
Analytics API endpoints
The Analytics API provides pre-aggregated time series data optimized for dashboard use. These endpoints are faster and less rate-limited than the raw event endpoints because Cursor pre-computes the aggregations server-side.
GET /analytics/team/dau — Daily active users as a time series. Accepts a window parameter: 30d, 60d, or 90d. Returns one data point per day with the count of users who made at least one request. This is the primary input for the DAU trend chart in Koalr's AI Analytics dashboard.
GET /analytics/team/model-usage — Model distribution for the specified window. Returns a breakdown of total requests by model across the team. Useful for understanding which AI models your team gravitates toward — relevant both for cost analysis (Claude Sonnet costs more per token than Gemini Flash) and for understanding capability utilization (teams leaning heavily on autocomplete-tier models may not be leveraging the more capable models for complex tasks where they add more value).
// GET /analytics/team/model-usage?window=30d
{
  "models": [
    { "model": "claude-3-5-sonnet", "requests": 58420, "percentage": 44.2 },
    { "model": "gpt-4o", "requests": 47110, "percentage": 35.7 },
    { "model": "gemini-2.0-flash", "requests": 18350, "percentage": 13.9 },
    { "model": "o1-mini", "requests": 8230, "percentage": 6.2 }
  ],
  "totalRequests": 132110
}

GET /analytics/team/top-users — Users ranked by total request count for the window. This is the input for the top users leaderboard in Koalr — useful for identifying your power users (who can become internal champions for AI adoption) and for spotting engineers who have licenses but zero usage.
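A common first analysis is cross-referencing the seat roster against usage data to find unused licenses. A minimal sketch, assuming the top-users response covers every user with at least one request in the window (worth verifying against your data); the helper and the field subset used here are ours:

```python
def inactive_seats(members: list[dict], active_users: list[dict]) -> list[str]:
    """Emails with a licensed seat but no requests in the window.

    `members` follows the /teams/members shape; `active_users` is assumed
    to include everyone with at least one request for the window."""
    active = {u["email"] for u in active_users}
    return sorted(m["email"] for m in members if m["email"] not in active)

# Illustrative sample:
members = [{"email": "alice@yourcompany.com"}, {"email": "bob@yourcompany.com"}]
active_users = [{"email": "alice@yourcompany.com"}]
# inactive_seats(members, active_users) -> ["bob@yourcompany.com"]
```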
AI Code Tracking API
The most analytically powerful endpoint in the Cursor API — and the one that is hardest to use standalone — is the AI code attribution endpoint.
GET /teams/ai-generated-code — Returns per-commit data on how many lines of code in each commit came from accepted Cursor autocomplete suggestions. The response includes the commit SHA, the author, the timestamp, the total lines in the commit, and the lines attributed to Cursor autocomplete acceptance. Critically, it links to the GitHub commit via SHA — which means you can cross-reference this data with your pull request data to see AI code attribution at the PR level.
// GET /teams/ai-generated-code?startDate=2026-03-01&endDate=2026-03-15
{
  "commits": [
    {
      "sha": "a3f8c2e1d9b4f7...",
      "author": "alice@yourcompany.com",
      "timestamp": "2026-03-10T16:42:00Z",
      "repository": "yourcompany/backend",
      "totalLines": 187,
      "aiGeneratedLines": 54,
      "aiPercentage": 28.9
    }
  ]
}

This is the data that enables the genuinely interesting analysis: not just whether engineers are using Cursor, but whether the code they are writing with Cursor is different in quality, risk, or outcome from code written without it. We will come back to this when we cover the DORA correlation analysis below.
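A sketch of the SHA join described above: roll the commit-level attribution up to PR level using a PR-to-commit mapping from your GitHub data. The mapping shape, helper name, and sample values are ours, not part of either API; this is not Koalr's implementation.

```python
def ai_lines_per_pr(commits: list[dict],
                    pr_commits: dict[int, set[str]]) -> dict[int, dict]:
    """Roll /teams/ai-generated-code commit data up to the PR level.

    `commits` follows the response shape above; `pr_commits` maps a PR
    number to the set of commit SHAs in that PR (from the GitHub API)."""
    by_sha = {c["sha"]: c for c in commits}
    rollup = {}
    for pr, shas in pr_commits.items():
        hits = [by_sha[s] for s in shas if s in by_sha]
        total = sum(c["totalLines"] for c in hits)
        ai = sum(c["aiGeneratedLines"] for c in hits)
        rollup[pr] = {
            "totalLines": total,
            "aiGeneratedLines": ai,
            "aiPercentage": round(100 * ai / total, 1) if total else 0.0,
        }
    return rollup

# Illustrative single-commit PR (PR number and short SHA are made up):
commits = [{"sha": "a3f8", "totalLines": 187, "aiGeneratedLines": 54}]
# ai_lines_per_pr(commits, {412: {"a3f8"}})
# -> {412: {"totalLines": 187, "aiGeneratedLines": 54, "aiPercentage": 28.9}}
```

Commits that Cursor never saw simply do not appear in the attribution response, which is why the rollup counts only matching SHAs.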
The 6 metrics Koalr tracks from Cursor
Raw API data is not the same as useful metrics. Koalr normalizes and correlates the Cursor API output into six specific metrics that answer the questions engineering leaders actually care about.
1. Daily Active Users (DAU)
The core adoption metric. DAU is the count of licensed users who made at least one Cursor request on a given day. Koalr tracks this as both an absolute count and as a percentage of total licensed seats — so if you have 48 seats and 31 users were active yesterday, your DAU is 31 and your daily adoption rate is 64.6%.
The DAU trend chart in Koalr's AI Analytics page shows a 30-day rolling window. The pattern to watch is not the peak DAU — it is the floor. A team where DAU never drops below 70% of licensed seats has genuinely embedded the tool. A team where DAU oscillates between 10% and 40% is seeing burst usage (people try it on a hard problem, then forget about it) rather than habitual adoption.
2. Total requests by type
Total requests per day, broken down by type: autocomplete, chat, and composer. The mix matters. A team with 95% autocomplete requests has a different usage pattern from a team with 40% chat and 30% composer — the latter is using Cursor for higher-cognitive-load tasks (planning, code generation from specs, multi-file refactors) where the productivity impact per request is likely larger.
3. Model distribution
Which AI models is the team actually using? This is both a cost signal and a capability signal. Teams defaulting to the cheapest available model for all tasks may be leaving capability on the table for complex tasks. Teams running most requests through the most expensive models may have an optimization opportunity for routine autocomplete. The model distribution breakdown from /analytics/team/model-usage gives you the full picture.
4. Spend per user per day
Cursor's per-user spend data (from /teams/user-spend-limit and the event log) lets you compute actual compute cost per engineer per day. Averaged across active users, this tells you the real cost of Cursor usage versus the license cost alone — relevant for budgeting and for understanding whether your high-usage engineers are generating disproportionate compute spend.
5. AI code attribution per PR and per developer
Lines of AI-generated code accepted from Cursor autocomplete, rolled up to the PR level by joining the /teams/ai-generated-code SHA data against your GitHub PR data. Koalr shows this on individual developer profile pages (AI code % per recent PR) and on the PR detail view alongside the deploy risk score.
6. Adoption rate
The metric that matters most for ROI conversations: adoption rate, defined as (DAU / total licensed seats) × 100. A 90-day average adoption rate below 40% on a full license rollout is a clear signal that something in the onboarding or workflow integration is not working. Koalr surfaces this as the headline KPI on the AI Analytics page — not total requests, not DAU in isolation, but the percentage of your investment that is actually being utilized.
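As a formula in code, using the example figures from the DAU section earlier:

```python
def adoption_rate(dau: int, licensed_seats: int) -> float:
    """Adoption rate as defined above: (DAU / total licensed seats) x 100."""
    return round(100 * dau / licensed_seats, 1)

# 31 active users out of 48 licensed seats:
# adoption_rate(31, 48) -> 64.6
```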
What "adoption" actually means
License data and adoption data are not the same thing, and conflating them produces misleading conclusions. Koalr tracks four distinct states per developer:
Licensed — A seat has been assigned to this developer in Cursor Team Settings. They appear in /teams/members. They may or may not have ever opened Cursor.
Activated — The developer has made at least one Cursor request — ever. They show up in the usage event log. First activation date is tracked from the earliest event timestamp for that user.
Active — The developer has made at least one Cursor request in the last 7 days. This is the standard active user definition used in DAU calculations and adoption rate.
Power User — The developer averages more than 50 Cursor requests per active day. Power users tend to have meaningfully higher AI code attribution percentages and are the best internal candidates to demonstrate productivity impact to the rest of the team.
The gap between Licensed and Activated is waste — seats you are paying for that have never been used. The gap between Activated and Active is churn — engineers who tried Cursor and stopped. The gap between Active and Power User is depth — engineers who use Cursor but have not integrated it deeply into their workflow.
Koalr's AI Analytics dashboard shows all four states for every developer in a single table. Managers can filter by state — pull up the "Licensed but never activated" list and have a very targeted conversation with those engineers about what is blocking them, rather than sending a generic adoption push to the whole team.
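The four states reduce to a small classification function. This sketch is ours, not Koalr's implementation; it assumes per-developer daily request counts derived from the event log, with the thresholds taken from the definitions above.

```python
from datetime import datetime, timedelta, timezone

def adoption_state(daily_requests: list[tuple[datetime, int]],
                   now: datetime) -> str:
    """Classify a licensed developer into the four adoption states.

    `daily_requests` lists (day, request_count) pairs for each day the
    developer made at least one Cursor request."""
    if not daily_requests:
        return "licensed"       # seat assigned, never activated
    if all(now - day > timedelta(days=7) for day, _ in daily_requests):
        return "activated"      # has used Cursor, but not in the last 7 days
    avg_per_active_day = sum(n for _, n in daily_requests) / len(daily_requests)
    return "power_user" if avg_per_active_day > 50 else "active"

now = datetime(2026, 3, 15, tzinfo=timezone.utc)
recent = datetime(2026, 3, 14, tzinfo=timezone.utc)
# adoption_state([(recent, 80)], now) -> "power_user"
# adoption_state([], now)            -> "licensed"
```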
Connecting Cursor requests to DORA
The most interesting analysis the Cursor API enables is not adoption tracking on its own — it is the correlation between Cursor usage and the delivery metrics you already care about. Two correlations in particular are worth computing for every team.
Cursor request volume vs. cycle time and throughput
Do developers with higher Cursor request volume have better cycle time (time from first commit to merge)? Do they merge more PRs per week? Koalr computes this correlation across all developers with both Cursor data and GitHub PR data. The methodology is straightforward: segment developers into quartiles by average daily Cursor requests over a 90-day window, then compare median cycle time and weekly PR throughput across quartiles.
In practice, the correlation is not always linear. Heavy chat and composer users often show the strongest cycle time improvement on complex features. Heavy autocomplete users show the most consistent throughput improvement across routine work. Understanding which mode drives value for your team is more useful than a single aggregate number.
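The quartile methodology can be sketched in a few lines. The data layout here is ours; the approach mirrors the text: rank developers by average daily requests, split into four buckets, and compare the median cycle time of each.

```python
from statistics import median

def quartile_medians(devs: list[tuple[float, float]]) -> list:
    """Median cycle time (hours) per Cursor-usage quartile.

    `devs` is a list of (avg_daily_requests, cycle_time_hours) pairs;
    quartile 1 is the lowest-usage bucket, quartile 4 the highest."""
    ranked = sorted(devs, key=lambda d: d[0])
    n = len(ranked)
    quartiles = [ranked[i * n // 4:(i + 1) * n // 4] for i in range(4)]
    return [median(c for _, c in q) if q else None for q in quartiles]

# Illustrative data — cycle time falling as usage rises:
devs = [(10, 40), (20, 38), (30, 36), (40, 34),
        (50, 32), (60, 30), (70, 28), (80, 26)]
# quartile_medians(devs) -> [39.0, 35.0, 31.0, 27.0]
```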
AI code attribution vs. rework rate
Cursor's AI code attribution API, combined with Koalr's rework rate metric, enables a question that no AI tool vendor can answer for you: does AI-generated code get reworked more often than code written manually?
Koalr's rework rate is the percentage of lines changed in a PR that modify code introduced within the last 30 days. High rework rate on recently-merged code is a signal of quality issues — the code needed to be fixed shortly after it shipped. By tagging PRs with their AI code attribution percentage (from the Cursor SHA data joined to GitHub PR data), Koalr can compute the median rework rate separately for high-AI-attribution PRs and low-AI-attribution PRs.
This is the analysis that moves AI tool conversations from intuition to evidence. If high-AI-attribution PRs have a rework rate of 12% versus 9% for low-AI-attribution PRs, that is a signal worth investigating — it does not mean Cursor is causing quality issues, but it does mean the team should look at where AI code is landing (is it concentrated in certain file types or service areas?) and whether the review process for AI-heavy PRs is as rigorous as for manually-written code.
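A sketch of that comparison, assuming you have already tagged each PR with its AI attribution percentage and rework rate. The 25% split threshold here is an illustrative choice, not Koalr's definition of "high attribution".

```python
from statistics import median

def rework_by_attribution(prs: list[tuple[float, float]],
                          threshold: float = 25.0) -> dict:
    """Median rework rate for high- vs low-AI-attribution PRs.

    `prs` is a list of (ai_percentage, rework_rate_percentage) pairs."""
    high = [r for a, r in prs if a >= threshold]
    low = [r for a, r in prs if a < threshold]
    return {"high_ai": median(high) if high else None,
            "low_ai": median(low) if low else None}

# Illustrative data:
prs = [(30, 12), (40, 14), (10, 9), (5, 7)]
# rework_by_attribution(prs) -> {"high_ai": 13.0, "low_ai": 8.0}
```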
AI code attribution vs. change failure rate
The third correlation: compare change failure rate between PRs with high AI code attribution versus PRs with low AI code attribution. Koalr segments PR history into quartiles by AI attribution percentage, then computes the CFR (percentage of deployments that caused incidents) for each quartile. If AI-heavy code ships with a higher or lower incident rate than manually-written code, that is a data point worth having — both for your internal engineering quality conversations and for any external reporting on AI tool effectiveness.
Correlation is not causation
Engineers who use Cursor the most are often your more experienced developers who are self-motivated to adopt new tools — which means they are likely to have lower rework rates and CFR regardless of Cursor. The correlation analysis is most useful when tracked longitudinally: does a developer's rework rate and throughput change after their Cursor usage ramps up? That before-and-after view controls for individual capability differences and is more diagnostic than cross-sectional comparisons.
Setting up Cursor in Koalr
Connecting Cursor to Koalr takes about five minutes. Here is the step-by-step:
Step 1: Generate your Cursor API key
In Cursor, navigate to Team Settings → API. Click "Generate API Key." Copy the key — it will only be shown once. While you are on the Settings page, note your Team ID from the URL. The URL will look like https://cursor.com/settings/team/YOUR_TEAM_ID/... — the UUID segment is your Team ID.
Step 2: Connect in Koalr
In Koalr, navigate to Settings → Integrations → Cursor. Paste your Team ID and API Key into the fields and click Connect. Koalr immediately validates the credentials against the /teams/members endpoint and shows you your seat count to confirm the connection is working.
Step 3: Initial sync
Koalr syncs DAU and usage events nightly. AI code attribution syncs when PRs are processed — which happens continuously as your GitHub integration receives webhook events. On first connection, Koalr backfills 90 days of DAU data from /analytics/team/dau, so your trend charts are populated immediately rather than starting from zero.
Member matching — linking Cursor user emails to GitHub contributor identities — happens automatically using email address as the join key. If an engineer uses different email addresses in Cursor and GitHub, Koalr surfaces them as unmatched in the integration settings page and lets you manually link them.
Reading your Cursor data in Koalr
Once the integration is connected and the initial sync completes, Cursor data surfaces across several parts of Koalr.
AI Analytics page
The primary surface for Cursor data is the AI Analytics page, accessible from the left nav. The page shows:
- DAU trend chart — 30-day rolling window with daily DAU as a line chart and total licensed seats as a reference line. The gap between the two is your inactive seat count.
- Adoption rate KPI tile — 30-day average of (DAU / licensed seats) × 100, with a 7-day trend comparison.
- Model distribution chart — Pie or bar chart showing request volume by AI model for the selected period.
- Top users table — Ranked by total requests, with columns for request count, request type breakdown, AI code attribution %, and activation state.
Developer profile pages
Every developer profile in Koalr shows a Cursor panel when the integration is active. The panel displays the individual engineer's 30-day DAU, average requests per active day, request type breakdown, and AI code attribution percentage on their recent PRs. This is the view that makes individual adoption conversations concrete — a manager can pull up an engineer's profile before a 1:1 and have a specific, data-backed conversation about Cursor usage rather than a vague "are you using the AI tools?" check-in.
AI Chat queries
Koalr's AI chat panel has access to your Cursor data alongside your GitHub and DORA data. Useful queries once your data is synced:
- "Which engineers are using Cursor the most and does it show in their throughput?"
- "What is our 90-day Cursor adoption trend by team?"
- "Compare cycle time between developers in the top Cursor usage quartile vs. the bottom quartile."
- "Which developers have Cursor licenses but have not been active in the last 30 days?"
- "Is there a correlation between AI code attribution percentage and rework rate across PRs this quarter?"
ROI page
Koalr's ROI page computes an estimated time savings figure from your actual Cursor data — not a vendor-supplied benchmark, but a calculation from your real DAU and acceptance rate data. The formula and its inputs are shown explicitly so you can sanity-check it against your own assumptions.
The ROI calculation, made concrete
The ROI conversation around AI coding tools often stalls on weak evidence — vendor statistics from self-reported surveys or controlled lab studies that do not reflect real team dynamics. Koalr builds the ROI calculation from your actual usage data. Here is the formula and the benchmarks behind it:
Cursor ROI calculation
The critical variable in this calculation is the activation rate. The 18.75× ROI is the return per active user. If your team has 48 licensed seats but only 22 active users (46% adoption rate), the effective ROI on the total license spend drops to 8.6×. Still a strong positive return — but understanding the gap between the potential and actual ROI is exactly what Cursor adoption tracking surfaces, and it is what gives you the specific intervention target: get 10 more engineers to active state, and your ROI on the license spend increases materially.
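The arithmetic behind that effective-ROI figure is easy to check, using the numbers quoted above (18.75× per active user, 22 of 48 seats active):

```python
def effective_roi(per_active_user_roi: float, active_users: int,
                  licensed_seats: int) -> float:
    """Scale the per-active-user ROI down by the share of seats in use."""
    return round(per_active_user_roi * active_users / licensed_seats, 1)

# effective_roi(18.75, 22, 48) -> 8.6
```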
Koalr surfaces this calculation on the ROI page with your actual DAU and acceptance rate data substituted for the benchmark values above, so the number reflects your team's real usage rather than industry averages.
Team-level segmentation
Adoption metrics aggregated across the whole engineering org tell an incomplete story. The more actionable view is by team: which teams have the highest Cursor adoption, and does it show in their delivery metrics?
Koalr maps Cursor user data to your GitHub team structure (pulled from your CODEOWNERS file and GitHub Teams configuration). This lets you compute per-team adoption rates, per-team average daily requests, and per-team AI code attribution percentages — then overlay that with per-team deployment frequency and cycle time from your DORA data.
The pattern that shows up consistently in teams with mature AI tool adoption: the teams with the highest 90-day Cursor adoption rates tend to also show the largest quarter-over-quarter improvement in deployment frequency. This does not prove Cursor is causing the improvement — high-adoption teams also tend to have stronger engineering cultures and more proactive engineering managers — but it does make the case that AI tool adoption and delivery performance are not independent variables.
Conversely, the team-level view surfaces the specific teams where adoption is lagging. A team with 15% adoption three months after the Cursor rollout is a targeted conversation for that team's manager — not a company-wide adoption campaign. The blocker is almost always specific: a workflow that Cursor does not integrate well with, a tech stack where autocomplete is less useful, or a team culture that has not yet built AI tool usage into its norms.
Connecting the full picture
Cursor adoption tracking is most valuable not as a standalone dashboard but as a layer on top of your existing engineering metrics. The questions that matter — does Cursor usage improve throughput? Does AI-generated code have different quality characteristics? Which teams are benefiting most? — require correlating Cursor data with your GitHub PR data, your incident history, and your DORA baselines.
Building that correlation layer from scratch requires pulling from four or five different APIs, normalizing user identity across systems, and building aggregation pipelines before you can start asking questions. Koalr does that work as a platform: the Cursor integration connects in five minutes, the member matching happens automatically, the 90-day backfill runs immediately, and the correlations with PR data, rework rate, and DORA metrics are computed continuously as new data comes in.
If you are running Cursor at scale and want more than a seat count and a dashboard — if you want to know whether the investment is actually moving the metrics that matter — connect Cursor to Koalr and have the answer in your first session.
Connect Cursor to Koalr in under 5 minutes
Koalr syncs your Cursor DAU, usage events, and AI code attribution with your GitHub and DORA data — so you can answer whether Cursor is actually improving delivery, which teams are adopting it, and what the real ROI looks like for your team.