The Opsgenie Migration Checklist: 12 things to do before April 2027

By Andrew McCarron · March 15, 2026 · 12 min read

Opsgenie shuts down April 5, 2027. All data is permanently deleted. Teams that plan their migration now have a 12-month runway with room to test, validate, and course-correct. Teams that wait until Q1 2027 will be in crisis mode.

This is the complete 12-step checklist — organized by phase, with effort estimates and risk ratings for each step.

Phase 1 — Immediate (this week): 3 steps
Phase 2 — Planning (weeks 1–2): 4 steps
Phase 3 — Migration (weeks 3–4): 3 steps
Phase 4 — Cutover + validation (weeks 5–6): 2 steps

Phase 1 — Immediate (do this week)

01 — Audit your current Opsgenie setup
High risk · Effort: 1–2 hours

Document: how many teams use Opsgenie, how many escalation policies you have, how many integrations are connected, and what your peak alert volume is per day. This determines migration complexity. Run `GET /v2/teams`, `GET /v2/escalations`, and `GET /v2/integrations` to get exact counts.
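A minimal sketch of that audit in Python, assuming the `requests` library is installed and your API key is in an `OPSGENIE_API_KEY` environment variable:

```python
import os

import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

def count(path: str) -> int:
    """Return how many objects a list endpoint reports."""
    resp = requests.get(f"{BASE}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return len(resp.json()["data"])

for label, path in [
    ("teams", "/v2/teams"),
    ("escalation policies", "/v2/escalations"),
    ("integrations", "/v2/integrations"),
]:
    print(f"{label}: {count(path)}")
```

If a count looks suspiciously low, spot-check it against the Opsgenie UI; very large accounts may need pagination handling on these list endpoints.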

02 — Decide on your destination platform
High risk · Effort: 1–2 days

The three credible paths: (1) PagerDuty — best for enterprise, largest integration footprint, highest price; (2) incident.io — best for Slack-native teams, modern UX, good mid-market fit; (3) Koalr — best if you also want DORA metrics and deploy risk prediction in one platform. Do not migrate to Jira Service Management — it is an ITSM tool built for IT helpdesks, not engineering incident response.

03 — Archive your incident history now
Critical risk · Effort: 2–4 hours

Opsgenie's retention window means older data may not be accessible later. Run a full export of all alerts via the API now — before you start any migration steps. Store the JSON export in a durable location (S3, GCS, or your data warehouse). You'll need this for DORA metric continuity even after the switch.
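A sketch of the export under the same assumptions as step 1; it follows Opsgenie's `paging.next` links until the result set is exhausted:

```python
import json
import os

import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

alerts = []
url = f"{BASE}/v2/alerts?limit=100&sort=createdAt&order=asc"
while url:
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    alerts.extend(body["data"])
    # Opsgenie includes a paging.next URL until the result set is exhausted.
    url = body.get("paging", {}).get("next")

with open("opsgenie-alerts.json", "w") as f:
    json.dump(alerts, f, indent=2)
print(f"archived {len(alerts)} alerts")
```

On very high-volume accounts, offset-based pagination may hit Opsgenie's depth cap; if so, split the export into `createdAt` date windows via the `query` parameter.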

Phase 2 — Planning (weeks 1–2)

04 — Map all escalation policies to the new platform
High risk · Effort: 4–8 hours

Export every escalation policy via `GET /v2/escalations`. For each policy, document: responder order, delay between steps (in minutes), repeat settings, and notify type (user, team, or schedule). Most platforms import these via their own JSON format — not Opsgenie's. You'll need to manually re-enter or script the transformation.
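A sketch that archives the raw policies and prints a summary for the mapping doc; the summary fields (`rules`, `delay`, `recipient`) follow the Opsgenie escalation schema, but treat the JSON file as the source of truth and spot-check field names against your own export:

```python
import json
import os

import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

resp = requests.get(f"{BASE}/v2/escalations", headers=HEADERS, timeout=30)
resp.raise_for_status()
policies = resp.json()["data"]

# The raw JSON is the source of truth for any import or transform script.
with open("opsgenie-escalations.json", "w") as f:
    json.dump(policies, f, indent=2)

# Human-readable summary for the mapping document.
for policy in policies:
    print(policy["name"])
    for rule in policy.get("rules", []):
        delay = rule.get("delay", {}).get("timeAmount", 0)
        recipient = rule.get("recipient", {})
        print(f"  after {delay} min -> {recipient.get('type')}: {recipient.get('id')}")
```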

05 — Export and validate all on-call schedules
High risk · Effort: 3–6 hours

Export schedules and rotations via `GET /v2/schedules` and `GET /v2/schedules/{id}/rotations`. Pay special attention to timezone settings, individual on-call overrides (swaps), and holiday/blackout rules; all three are error-prone to re-enter manually. Validate by running both platforms simultaneously for 1 week.
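A sketch that bundles each schedule with its rotations into one archive and flags the timezone per schedule, since that is the field most often mangled in re-entry:

```python
import json
import os

import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

def get(path: str) -> list:
    resp = requests.get(f"{BASE}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]

export = []
for schedule in get("/v2/schedules"):
    schedule["rotations"] = get(f"/v2/schedules/{schedule['id']}/rotations")
    export.append(schedule)
    # Timezone is the detail most often lost in manual re-entry.
    print(f"{schedule['name']}: tz={schedule.get('timezone')}, "
          f"{len(schedule['rotations'])} rotation(s)")

with open("opsgenie-schedules.json", "w") as f:
    json.dump(export, f, indent=2)
```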

06 — Inventory all integrations and identify who owns each
Medium risk · Effort: 4–8 hours

List every integration in Opsgenie (Datadog, CloudWatch, Grafana, Sentry, PagerDuty, custom webhooks, etc.) and assign an owner. Each integration must be reconfigured manually in the new platform — the API key/webhook secret cannot be migrated automatically. This is typically the most time-consuming part of any incident platform migration.
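A sketch that turns the integration list into a CSV ownership sheet; the `owner` column is left blank to fill in by hand:

```python
import csv
import os

import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

resp = requests.get(f"{BASE}/v2/integrations", headers=HEADERS, timeout=30)
resp.raise_for_status()

with open("integration-inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "type", "enabled", "owner"])
    for integration in resp.json()["data"]:
        # Owner column is deliberately blank: assign one per row by hand.
        writer.writerow([integration.get("name"), integration.get("type"),
                         integration.get("enabled"), ""])
```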

07 — Document notification rules per user
Medium risk · Effort: 2–3 hours

Each user has individual notification preferences in Opsgenie (SMS, voice, push, email, each with different schedules for day/night/weekends). Export them via `GET /v2/users/{identifier}/notification-rules`. These must be recreated per user in the new platform. Brief your on-call team before cutover so each engineer can set their own preferences.
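A sketch that walks every user and archives their rules into one JSON file:

```python
import json
import os

import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

def get_json(url: str) -> dict:
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# The users list is paginated; follow paging.next until it runs out.
users, url = [], f"{BASE}/v2/users?limit=100"
while url:
    body = get_json(url)
    users.extend(body["data"])
    url = body.get("paging", {}).get("next")

rules = {
    u["username"]: get_json(f"{BASE}/v2/users/{u['username']}/notification-rules")["data"]
    for u in users
}

with open("opsgenie-notification-rules.json", "w") as f:
    json.dump(rules, f, indent=2)
```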

Phase 3 — Migration (weeks 3–4)

08 — Set up the new platform in parallel — shadow mode
High risk · Effort: 1–2 weeks

Configure your new incident platform completely BEFORE cutting over. Run both platforms simultaneously for at least 1 week: alerts route to Opsgenie as usual, but also replicate to the new platform (use a webhook/integration). This lets you validate that all alert routing works correctly without any on-call risk. Do not cut over during this phase.
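One way to do the replication, sketched below with Flask: point an Opsgenie outgoing-webhook integration at a small relay that forwards each event to the new platform. The destination URL and bearer-token auth are placeholders; substitute whatever events endpoint and credentials your destination platform documents:

```python
import os

import requests
from flask import Flask, request

app = Flask(__name__)

# Both values are placeholders for your destination platform's events API.
NEW_PLATFORM_URL = os.environ["NEW_PLATFORM_EVENTS_URL"]
NEW_PLATFORM_KEY = os.environ["NEW_PLATFORM_API_KEY"]

@app.route("/opsgenie-webhook", methods=["POST"])
def relay():
    payload = request.get_json(force=True)
    resp = requests.post(
        NEW_PLATFORM_URL,
        json=payload,
        headers={"Authorization": f"Bearer {NEW_PLATFORM_KEY}"},
        timeout=10,
    )
    # Log failures but always return 200 so Opsgenie keeps delivering.
    if resp.status_code >= 300:
        app.logger.warning("shadow forward failed: %s", resp.status_code)
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```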

09 — Reconnect and test each integration end-to-end
Critical risk · Effort: 2–4 hours per integration

For each integration: generate new API credentials in the new platform, update the monitoring tool to send alerts to the new endpoint, trigger a test alert, verify it routes correctly, and verify the escalation fires if the test alert goes unacknowledged past the configured timeout. Do not mark an integration as migrated until the full alert → escalation path is tested.

10 — Validate DORA MTTR continuity before cutover
High risk · Effort: 2–4 hours

Before switching your primary alerting, verify your MTTR baseline will survive the transition. If you're using a DORA metrics platform, check that it can ingest historical Opsgenie data AND new platform data to maintain a continuous trend line. If you're using Koalr, this is handled automatically — the migration wizard archives Opsgenie history and reconnects DORA to your new platform without gaps.
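A rough baseline from the step-3 archive, assuming each closed alert's `report.closeTime` holds milliseconds from creation to close; spot-check a few records in your own export before trusting the number:

```python
import json
from statistics import mean

with open("opsgenie-alerts.json") as f:
    alerts = json.load(f)

# report.closeTime: milliseconds from alert creation to close (assumed schema).
close_times = [
    a["report"]["closeTime"]
    for a in alerts
    if a.get("report", {}).get("closeTime")
]

if close_times:
    print(f"{len(close_times)} closed alerts, "
          f"baseline MTTR = {mean(close_times) / 60000:.1f} minutes")
else:
    print("no closed alerts found in the export")
```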

Phase 4 — Cutover + validation (weeks 5–6)

11 — Cut over primary alerting — not during a freeze or high-risk deploy window
Critical risk · Effort: 2–4 hours

Choose your cutover timing carefully: not on a Friday, not during a product launch, not during a known high-risk deploy window. Communicate to all on-call engineers 48+ hours in advance. Have a rollback plan (keep Opsgenie active for 2 weeks post-cutover). Have a single point of contact for cutover day who can field any routing issues in real time.

12 — Run a 2-week parallel period, then decommission Opsgenie
Low risk · Effort: 2 weeks monitoring

Keep Opsgenie active for 2 weeks after cutover as a fallback. Monitor alert routing daily. Compare MTTR in the new platform to your historical Opsgenie baseline. Only cancel your Opsgenie subscription after you've confirmed: all integrations are healthy, all on-call schedules are correct, and your DORA metrics are continuous. Cancel before April 5, 2027 to avoid prorated charges.

The 4 most common Opsgenie migration mistakes

1. Starting the migration in Q1 2027

Teams that begin in January 2027 have 90 days with zero buffer. Any integration failure or on-call gap becomes a crisis. Start in Q2–Q3 2026 when you have time to course-correct.

2. Making a like-for-like swap to JSM

Jira Service Management is an ITSM tool designed for IT helpdesks. Teams that choose JSM spend months reconfiguring it to behave like Opsgenie. It has no deploy risk prediction, no DORA metrics, and no developer-native workflow.

3. Cutting over without a shadow period

Switching primary alerting without a 1-week parallel shadow period means the first real incident is also your first integration test. Run both platforms simultaneously before the cutover.

4. Losing MTTR continuity in the transition

Most incident platforms start your MTTR counter from zero when you connect. Your historical Opsgenie data is lost unless you export it and import it into your DORA metrics platform before the switch.

Koalr automates steps 3, 10, and 12 — and adds DORA metrics at the same time

Koalr's migration wizard handles the Opsgenie data archive (step 3), validates DORA MTTR continuity (step 10), and tracks your new platform data going forward (step 12) — with no gaps in your DORA trend line. And since you're already migrating your incident tooling, you can add deploy risk prediction and full DORA metrics at the same time.