
Ransomware Crisis Communication: The 72-Hour Plan for Management

SecTepe Editorial

The first 72 hours after a ransomware incident are decided more by communication than by technology. Get the technical response right but stumble on communication, and you lose trust, risk fines, and face damage claims later. This playbook is for management, not for IT.

The Communicative Situation After Hour Zero

Within the first hours, the following parties will be contacting you – whether you want them to or not:

  • Employees: what should we tell customers? Can we still work? Who's liable?
  • Customers: why isn't your email working? When will it be back? Is our data affected?
  • Regulators: BSI (NIS-2), data protection authority (GDPR), possibly BaFin/sector-specific.
  • Insurer: first notice, damage assessment, forensics consent.
  • Media/social networks: speculation spreads faster than facts.
  • Possibly: the attackers themselves, with a ransom demand and a threat to publish your data.

Hour 0 – 6: What Management Personally Does

  1. Set up a crisis team: management + CISO/IT lead + DPO + legal counsel + external IR partner. Meet in person or via an E2E-encrypted channel.
  2. Important: do not communicate via the compromised systems. Own emergency mobile setup, separate mail account, ideally a Matrix/Jitsi channel not affected by the incident.
  3. Engage forensics: evidence preservation starts now. Don't restart anything, don't "quickly fix" anything. Disk images first.
  4. Notify the insurer: many policies require initial notice within 24 h. Late notice = coverage forfeiture risk.
  5. First employee communication: short, factual, no speculation. "We have a security incident, we're investigating, we'll keep you informed. Please no external communication outside the approved messaging."

Hour 6 – 24: Regulator Communication

GDPR Art. 33 requires notification to the supervisory authority within 72 h of becoming aware of a suspected personal data breach. NIS-2 Art. 23 requires an early warning to the competent authority within 24 h once the suspicion of a significant incident solidifies.

Important: the early warning is not the final report. It only says: "We have what looks like a significant incident; here is what we know so far." You can update later. A late notification is far worse than an incomplete one.
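The two statutory clocks above can be tracked with a trivial calculation. The sketch below is purely illustrative – when each clock legally starts ("awareness" under GDPR Art. 33, solidified suspicion under NIS-2 Art. 23) is a question for your legal counsel, and the detection timestamp here is a made-up example:

```python
from datetime import datetime, timedelta

def notification_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Latest point in time for each required notification,
    counted from the (legally assessed) start of the clock."""
    return {
        "NIS-2 early warning (24 h)": detected_at + timedelta(hours=24),
        "GDPR Art. 33 notification (72 h)": detected_at + timedelta(hours=72),
    }

# Hypothetical detection time for illustration only.
detected = datetime(2024, 3, 4, 14, 30)
for label, deadline in notification_deadlines(detected).items():
    print(f"{label}: {deadline:%Y-%m-%d %H:%M}")
```

In practice this belongs in the crisis team's runbook, not in an ad-hoc script – the point is that both deadlines are fixed the moment the clock starts, so log that moment explicitly.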

Hour 24 – 48: External Communication

By now at the latest you need a written messaging template – approved by management, legal, and DPO:

  • Customer email: what happened (in one sentence), what it means for them, which data is potentially affected, what's being done, what the status is, how they can reach you.
  • FAQ on the website: highly visible, regularly updated with timestamp.
  • Status page: technical recovery statuses (which services are back). External hosting if your own site is affected.
  • Employee messaging rules: who is allowed to say what to outsiders? In most cases: only the explicitly named spokesperson. Everyone else refers inquiries to that person.

Hour 48 – 72: Media and Detailed Regulator Report

If media has caught wind (or the attacker goes public): proactive press release instead of reactive statement. Content: what happened, what's being done, what we don't know, when the next update comes. Avoid speculation.

Regulators expect a more detailed interim report by the end of the 72 h. Organizations with an integrated platform can deliver an audit trail, asset inventory, incident classification, and affected data categories within hours instead of days.

What Management Avoids in Every Statement

  • "We were the victim of a hacker attack": passive-defensive, sounds like minimization. Instead: "On day X we detected a security incident."
  • "No data is affected" – without knowing it. A later retraction destroys trust.
  • Blaming employees or suppliers: legally risky and reputationally damaging.
  • Public ransom discussions: negotiation belongs in a separate, non-public channel with forensics and legal.

Preparation – Now, Not During the Incident

  • Crisis communication templates: employee mail, customer mail, press release, FAQ. Pre-approved by management + legal + DPO.
  • Out-of-band communication channel: not affected by the incident. Ideally a Matrix server on separate infrastructure.
  • Status page externally hosted: on Hetzner, Cloudflare, or similar – not in your own DC.
  • Forensics partner contract: retainer with assured response time. Don't negotiate during the incident.
  • Annual tabletop exercise: management runs it personally. At least once with a crisis PR consultant.
  • D&O policy and cyber policy: emergency numbers on management's mobile.

Conclusion

The first 72 hours of a ransomware incident are thin ice, communicatively: one misstep and it breaks. A management team that has a written 72 h plan, pre-approved templates, an out-of-band channel, and a rehearsed tabletop exercise before the incident stays functional in the real event. Anyone improvising during the incident risks fines, trust, and lawsuits.