<aside> <img src="/icons/info-alternate_purple.svg" alt="/icons/info-alternate_purple.svg" width="40px" />

Use this prompt after you’ve filled in the incident post‑mortem template. Attach the filled‑in post‑mortem to your agent of choice and paste in the prompt below.

The AI will craft a concise, 3–4 paragraph internal email that summarises what happened, highlights the impact on services and customers, describes the response and resolution, lists the main follow‑up actions and their owners, and links to the full post‑mortem.

Keep the tone factual and blameless, focusing on learning and improvement. Once the email is generated, give it a quick review for accuracy and send it to the relevant internal mailing list or Slack channel.

</aside>

Prompt

Context: You’re an on‑call lead writing a brief summary for teammates and leadership after a major incident. You have access to the filled‑in incident post‑mortem document with the following sections: Summary & Impact, Key Details (type, severity, services affected, durations, participants, links), Chronology & Response, Contributing Factors & Mitigations, Resolution & Technical Analysis, Lessons & Risks, Follow‑up Actions and Communication.

Task: Draft a concise internal email (no more than 3–4 paragraphs) that:

1. Summarises what happened — include the cause, time range and severity in plain language, referencing the “Summary & Impact” and “Key Details” sections.
2. Highlights the impact — briefly mention which services or customers were affected and for how long.
3. Describes the response and resolution — capture key milestones from the “Chronology & Response” section and the final fix from “Resolution & Technical Analysis”.
4. Lists the main follow‑up actions — mention the top corrective actions and who owns them, using the “Follow‑up Actions” section.
5. Provides a link to the full post‑mortem for anyone who wants to dive deeper.

Tone: Informative and direct, written for internal colleagues who are familiar with our systems. Emphasise learning and continuous improvement. Do **not** apportion blame (blameless culture). Use clear, non‑marketing language.
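
If you prefer to run this step from a script rather than pasting the prompt into a chat interface, the sketch below shows one way to do it. It assumes the OpenAI Python SDK, a placeholder model name, and placeholder file paths for the prompt and the filled‑in post‑mortem, so treat it as an illustrative starting point rather than a required setup.

```python
# Minimal sketch: feed the prompt above plus the filled-in post-mortem to a model
# and print the drafted email. The OpenAI SDK, model name, and file paths are
# assumptions for illustration -- swap in whatever agent or API you actually use.
from pathlib import Path

from openai import OpenAI

prompt = Path("summary_email_prompt.txt").read_text()      # the prompt text above
post_mortem = Path("incident_post_mortem.md").read_text()  # the filled-in template

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": prompt},
        {"role": "user", "content": post_mortem},
    ],
)
print(response.choices[0].message.content)
```

Whichever tool you use, keep the review step from the callout above: read the draft for accuracy before sending it.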

Example Output

Hi team,

On August 1 (14:03 – 14:33 UTC) our User API experienced a SEV‑2 performance degradation. Response times increased from ~200 ms to 2–5 seconds for about a quarter of requests. Investigation traced the problem to a new deployment that introduced a SQL query filtering the orders table on the status column, which had no index.

The impact was limited to the User API; customer requests slowed noticeably for roughly 30 minutes. Error rates remained stable, and no data loss occurred. Alerting via Rootly detected the issue within two minutes, and the on‑call SRE (Alice) convened an incident channel with the backend engineer (Bob) and team lead (Charlie). At 14:15 UTC we rolled back to the previous container image, which rapidly restored acceptable latency. A B‑tree index on orders.status was then added and the fixed version redeployed. The incident was declared resolved at 14:33 UTC.

To prevent recurrence, we’ve identified several follow‑up actions. The DB team (Priya) will ensure the new index is applied across environments and will host a session on indexing best practices. DevOps (Bob) will implement a CI check to flag sequential scans automatically and replicate production data volumes in staging. The platform team (Charlie) will revisit connection‑pool sizing and database max_connections. Engineering leadership (Sarah) will update the deployment checklist to include query‑performance reviews. Due dates for these items range from 8 August to 22 August.

If you’d like to review the full incident timeline, technical details and complete action list, the post‑mortem is available on Rootly here: [URL].

Thanks for everyone’s rapid response and cooperation—let’s use this as an opportunity to strengthen our performance testing and deployment processes.