Incident Management · March 14, 2026 · 12 min read

How to migrate from Opsgenie to incident.io in 30 days

Atlassian has announced that Opsgenie shuts down on April 5, 2027. If you're choosing incident.io, this is the exact migration playbook — week by week, with the API calls you need and a strategy for preserving your DORA MTTR data through the transition.

Opsgenie end-of-life: April 5, 2027 — all data deleted on this date.

Atlassian has officially confirmed the shutdown. Every Opsgenie schedule, escalation policy, alert history, and team structure is permanently deleted on this date. Export now.

Why incident.io is the right destination for most Opsgenie users

Not every Opsgenie user should move to incident.io — PagerDuty is better for large enterprise IT and multi-region compliance use cases. But for software engineering teams of 20–500 people using GitHub, Slack, and shipping frequently, incident.io is the natural migration target. Here's why:

  • Slack-native, not bolt-on. incident.io was built around Slack from day one. PagerDuty's Slack integration is an afterthought bolted onto a phone-call-first architecture. If your team lives in Slack, incident.io's incident declaration, status updates, and escalations all happen inside Slack naturally.
  • Purpose-built postmortem tooling. incident.io auto-generates a timeline of every Slack message, alert, and action taken during an incident. PagerDuty only gained comparable retrospective tooling through its Jeli acquisition, and Opsgenie had no postmortem tooling at all.
  • Modern API with official Opsgenie migration support. incident.io has documented migration paths from Opsgenie, including schedule and escalation policy import. The API is well-documented and REST-native.
  • Software team bias vs. IT/infrastructure bias. PagerDuty was built for IT operations teams managing physical infrastructure. incident.io was built for software delivery teams. If your incidents come from deployments, not hardware failures, incident.io thinks the same way your team does.
  • Pricing per responder, not per seat. incident.io charges per active responder, which means engineers who are never on-call don't count against your bill. For most engineering teams, this is meaningfully cheaper than PagerDuty's seat-based model.

What you'll lose (and what you won't)

Migration honesty: incident.io is not a drop-in replacement. Some things migrate cleanly, some things require manual work, and some things you will permanently lose unless you export them before April 5, 2027.

What you keep: Alert routing logic and conditions can be rebuilt in incident.io. Escalation policy structures map directly to incident.io's escalation paths. On-call rotation schedules rebuild cleanly using incident.io's schedule builder. Team structures import via the incident.io API.

What you lose: Opsgenie-specific alert source integrations will need to be reconfigured — each monitoring tool (Datadog, Grafana, Prometheus, etc.) needs its webhook URL updated from Opsgenie to incident.io. Historical alert data before your migration date is gone unless you export it first. Opsgenie mobile app push notifications will stop working — users need to install the incident.io app.

DORA data risk: If your DORA metrics platform reads MTTR from Opsgenie incidents, you will have a data gap during and after migration. Your MTTR charts will stop at the date you cut over, then restart from zero at the new platform. This gap makes it impossible to show trend improvement to engineering leadership. The solution — covered in detail at the end of this guide — is to export your incident history before cutover and reconnect your metrics platform to incident.io immediately after.

Days 1–7: Export and document

Do not touch incident.io yet. Week one is entirely about extracting everything from Opsgenie before you change anything. This is also your safety net: if the migration goes wrong, you have a complete backup.

Step 1: Export team structures. Use the Opsgenie API to pull all teams and their members:

GET https://api.opsgenie.com/v2/teams
Authorization: GenieKey YOUR_API_KEY

Step 2: Export on-call schedules. Include rotation details:

GET https://api.opsgenie.com/v2/schedules?expand=rotation
Authorization: GenieKey YOUR_API_KEY

Step 3: Export escalation policies:

GET https://api.opsgenie.com/v2/escalations
Authorization: GenieKey YOUR_API_KEY
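If you'd rather script steps 1–3 than run them by hand, a minimal curl pass might look like the sketch below. The output file names are arbitrary choices, and the escalations path follows Opsgenie's Escalation API:

```shell
#!/bin/bash
# Sketch: save teams, schedules, and escalations as JSON in one pass.
# Set OPSGENIE_API_KEY in your environment before running.
API_KEY="${OPSGENIE_API_KEY:-YOUR_API_KEY}"
BASE="https://api.opsgenie.com/v2"
OUT=./opsgenie-export
mkdir -p "$OUT"

curl -s "$BASE/teams" \
  -H "Authorization: GenieKey $API_KEY" > "$OUT/teams.json"
curl -s "$BASE/schedules?expand=rotation" \
  -H "Authorization: GenieKey $API_KEY" > "$OUT/schedules.json"
curl -s "$BASE/escalations" \
  -H "Authorization: GenieKey $API_KEY" > "$OUT/escalations.json"

echo "Saved teams.json, schedules.json, escalations.json to $OUT"
```

Check each file afterwards — an authentication failure still produces a (small) JSON body, so open the files and confirm they contain a `data` array before moving on.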

Step 4: Export alert history. This endpoint is paginated — you need to loop through all pages. Export everything you can, and at minimum the last 12 months. Here is a shell script that handles pagination:

#!/bin/bash
# Export full Opsgenie alert history, page by page (requires curl and jq).
# To narrow the window, add a search filter via the API's `query` parameter.
API_KEY="YOUR_API_KEY"
OFFSET=0
LIMIT=100   # maximum page size for the alerts API

mkdir -p ./opsgenie-export

while true; do
  RESPONSE=$(curl -s \
    "https://api.opsgenie.com/v2/alerts?limit=$LIMIT&offset=$OFFSET" \
    -H "Authorization: GenieKey $API_KEY")

  echo "$RESPONSE" > ./opsgenie-export/alerts-$OFFSET.json

  # Stop on a short (final) page, or if the response wasn't valid JSON
  COUNT=$(echo "$RESPONSE" | jq '.data | length' 2>/dev/null)
  if [ -z "$COUNT" ] || [ "$COUNT" -lt "$LIMIT" ]; then break; fi

  OFFSET=$((OFFSET + LIMIT))
  echo "Exported $OFFSET alerts..."
  sleep 0.5   # stay under Opsgenie's rate limits
done

echo "Export complete. Check ./opsgenie-export/"

Step 5: Document custom integrations. Open Opsgenie Settings → Integrations and make a spreadsheet: integration name, type (email, API, webhook), the source tool (Datadog, Grafana, etc.), and the Opsgenie endpoint URL that tool is currently sending to. You will update each of these to point to incident.io in week three.

Days 8–14: Set up incident.io

Now create your incident.io workspace. Do not tell your team yet — this week is configuration only, not cutover.

  • Create account and connect Slack. incident.io requires a Slack workspace connection during setup. This installs the incident.io bot and creates the default #incidents channel.
  • Create incident severity levels. Map your Opsgenie priority levels to incident.io severity tiers. Typical mapping: Opsgenie P1 (Critical) → incident.io Critical, P2 (High) → Major, P3 (Moderate) → Minor, P4 (Low) → Low. Configure these in incident.io Settings → Incident Types.
  • Import team structure. incident.io supports team import via its API. Use the team data you exported in step 1 to create matching teams in incident.io. This ensures escalation policies reference the correct people.
  • Rebuild on-call schedules. incident.io has a visual schedule builder. Use your exported rotation data to recreate each schedule. Pay attention to timezone settings — Opsgenie and incident.io both store schedules in UTC internally, but the UI may display in local time.
  • Configure escalation policies. Recreate each Opsgenie escalation policy in incident.io. The structure is similar: notify user A, wait N minutes, then notify user B or team C.
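If you script any of this import work, it helps to encode the priority-to-severity mapping above as data rather than scattering it through code. A small sketch using jq — the file name and the P5 fallback are assumptions, not part of either product:

```shell
# Sketch: the Opsgenie→incident.io severity mapping as a lookup table.
# P5 (Informational) and the "Minor" default are assumed fallbacks.
cat > severity-map.json <<'EOF'
{"P1":"Critical","P2":"Major","P3":"Minor","P4":"Low","P5":"Low"}
EOF

# Map one exported Opsgenie alert's priority to its incident.io severity
echo '{"priority":"P2"}' \
  | jq -r --slurpfile m severity-map.json '$m[0][.priority] // "Minor"'
# → Major
```

Keeping the table in one JSON file means your backfill script, your import script, and your metrics reconnection all agree on the mapping.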

Days 15–21: Migrate alert integrations

This is the most operationally sensitive week. You are updating webhook URLs in your monitoring tools to point to incident.io instead of Opsgenie, while keeping both systems active in parallel.

For each monitoring tool in your spreadsheet from week one, go to that tool's alert/webhook configuration and update the endpoint URL to the incident.io inbound webhook URL. incident.io generates a unique inbound URL for each integration type — find these in incident.io Settings → Alert Sources.

Common tools to update:

  • Datadog: Alerts → Notification settings → update the Opsgenie channel to the incident.io webhook.
  • Grafana: Alerting → Contact points → replace the Opsgenie contact point with an incident.io webhook.
  • Prometheus Alertmanager: update the receivers section of alertmanager.yml.
  • AWS CloudWatch: SNS topic → update the subscription endpoint.
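For Alertmanager specifically, the change is a webhook receiver in alertmanager.yml. A hedged sketch — the URL is a placeholder you replace with the inbound URL from incident.io Settings → Alert Sources, and the receiver name is an arbitrary choice:

```yaml
route:
  receiver: incident-io        # send everything to this receiver by default

receivers:
  - name: incident-io
    webhook_configs:
      # Placeholder — paste your incident.io alert source URL here
      - url: "https://REPLACE-WITH-YOUR-INCIDENT-IO-WEBHOOK-URL"
        send_resolved: true    # forward resolve events as well as firings
```

During the parallel-running window, keep your existing Opsgenie receiver in the file alongside this one and route to both; delete the Opsgenie receiver only at cutover.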

After updating each integration, trigger a test alert from that tool and verify it appears in incident.io. Do not remove the Opsgenie integration from the monitoring tool yet — run both in parallel until you complete validation in week four. Yes, you will get duplicate alerts during this window. That is intentional.

Days 22–28: Validation and cutover preparation

Before you flip the switch, run a full simulated incident in incident.io with your team.

  • Announce to your team: "We're doing a test incident in incident.io on [date]. Treat it like real."
  • Trigger a test alert through your primary monitoring tool. Verify it routes to incident.io and fires the correct escalation policy.
  • Walk through the full incident response flow in Slack: acknowledge, create incident, assign roles, post status updates, resolve.
  • Verify the escalation path fires correctly if the primary responder does not acknowledge within the SLA window.
  • Update on-call schedules in incident.io with real upcoming rotation dates — not just test data.
  • Send a team communication: "We switch to incident.io on [specific date]. incident.io is now our primary on-call platform. Opsgenie alerts will stop on that date."

Day 30: Cutover

Cutover day. Do this during a low-risk deployment window — not on a Friday, not during a major release.

  1. Disable Opsgenie alert routing. In Opsgenie, go to Settings → Integrations and disable (not delete) each integration. This stops new alerts from routing through Opsgenie while preserving your configuration as a fallback.
  2. Enable all incident.io integrations. Confirm every monitoring tool is now routing exclusively to incident.io. Verify with a test alert.
  3. Monitor for 48 hours. Keep an incident channel active and watch for any missed alerts or routing failures. Have the Opsgenie configuration ready to re-enable if something goes wrong.
  4. After stable: export final Opsgenie snapshot. Once you've confirmed incident.io is working correctly, run your alert export script one final time to capture any incidents that occurred during the migration window. Then disable your Opsgenie account. Do not delete it — keep it disabled until April 5, 2027 as a read-only archive.

Protecting your DORA MTTR data

This is the most commonly overlooked part of the migration, and the one that creates the most awkward conversations with engineering leadership.

The gap risk: If your engineering metrics platform (Koalr, or another DORA tool) reads MTTR from Opsgenie incidents, there will be a data gap the moment you cut over. Your MTTR chart will show historical data up to the cutover date, then nothing — until the new platform accumulates enough incidents to calculate a meaningful average. Depending on your incident volume, this gap could be weeks or months of missing data.

Solution 1: Export incident history before cutover. Use your alert export from week one to provide historical incident data to your metrics platform. Most platforms can ingest historical incident data via API or CSV import. This fills the gap retroactively after you reconnect to incident.io.
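As a sketch of what that backfill computation can look like, the jq filter below averages time-to-resolve across one exported alerts page. The `report.closeTime` field (a duration in milliseconds) is an assumption about the export format — inspect your own JSON and adjust the path before trusting the numbers:

```shell
# Hypothetical export page: two closed alerts and one that was never closed
cat > sample-alerts.json <<'EOF'
{"data":[
  {"id":"a1","report":{"closeTime":600000}},
  {"id":"a2","report":{"closeTime":1800000}},
  {"id":"a3","report":{}}
]}
EOF

# Mean time-to-resolve in minutes, ignoring alerts with no close time
jq '[.data[].report.closeTime | select(. != null)] | (add / length) / 60000' \
  sample-alerts.json
# → 20
```

Run the same filter over every alerts-*.json page from your week-one export, then hand the per-incident or aggregated figures to your metrics platform's CSV/API import.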

Solution 2: Reconnect your metrics platform immediately. The moment you cut over to incident.io, update your metrics platform to read from incident.io instead of Opsgenie. New incidents will start flowing immediately. Combined with solution 1 for historical data, your MTTR chart will show no gap.

Koalr migration wizard: Koalr has a 3-step wizard in Settings that handles both automatically. Step 1 exports your Opsgenie incident history into Koalr's data store. Step 2 maps Opsgenie severity levels and user identities to their incident.io equivalents. Step 3 reconnects Koalr's MTTR calculation to read from incident.io after cutover. The result is uninterrupted MTTR trend data with no chart gap — your engineering dashboard looks the same before and after the migration.

Start your migration — Koalr's migration wizard handles the data preservation automatically

Export your Opsgenie history, map your severity levels, and reconnect MTTR to incident.io in three steps. No gap in your trend data.