

Learn the five steps your team should follow when a breach is detected.
• The first hours matter most. Contain the breach by isolating affected systems and disabling compromised accounts. Don't reboot anything until you've taken forensic images.
• Figure out what was taken before you notify anyone. You need to know what data was affected, how the attackers got in, and how long they had access. That information shapes everything that follows.
• Notification deadlines are strict. GDPR gives you 72 hours from discovery. The SEC gives four business days. Your cyber insurer needs to hear from you immediately or they may deny your claim.
• Recovery is more than restoring backups. Verify your backups are clean and confirm the attacker is gone before bringing systems back online.
• The post-incident review is where the real value is. Update your plan based on what went wrong. Companies that skip this step repeat the same mistakes.
IBM’s 2025 report found that companies with tested incident response plans saved over $1.5 million per breach. Companies without a plan spent more and took longer to recover.
The difference isn’t just having a plan on paper. It’s having one your team has practiced, with clear steps everyone knows before a breach forces their hand.
Most response failures happen because people improvise under pressure. They skip steps and destroy forensic evidence by rebooting systems.
This checklist covers the five steps to follow from the moment you detect a breach through the post-incident review.
The first hours after detection set the trajectory for everything that follows. Move fast, but don’t be reckless.
An incident response plan is a documented set of procedures that tells your team exactly what to do when a breach is detected. It defines who's in charge and who handles what, plus the specific steps for containment and recovery. Companies with tested plans contain breaches faster and pay millions less.
Isolate affected systems. Disconnect compromised machines from the network to stop lateral movement. Don’t shut them down – just disconnect them. If attackers are actively exfiltrating data, cutting network access stops the bleeding.
Disable compromised accounts. Reset passwords and revoke session tokens for any accounts you know or suspect are compromised. If you’re not sure which accounts are affected, start with admin and privileged accounts.
Preserve evidence. This is where most teams make their biggest mistake. Don’t reboot compromised systems. Take full forensic images including memory dumps first. Rebooting destroys volatile data that your investigation team needs to trace the attacker’s path.
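Imaging workflows vary by toolkit, and memory capture needs a dedicated acquisition tool, but as a rough sketch, a raw disk-image command can be assembled ahead of time so responders aren't typing it under pressure. The device path and evidence mount below are hypothetical:

```python
# Sketch: assemble (but do not yet run) a GNU dd command for a raw disk
# image. Device and output paths are illustrative; run only from a
# trusted environment, and capture memory with a separate tool first.
import shlex

def dd_image_cmd(device: str, out_path: str) -> str:
    # conv=noerror,sync keeps imaging through read errors, padding bad
    # blocks so offsets stay aligned; status=progress shows throughput.
    return shlex.join([
        "dd", f"if={device}", f"of={out_path}",
        "bs=4M", "conv=noerror,sync", "status=progress",
    ])

print(dd_image_cmd("/dev/sda", "/mnt/evidence/web01.dd"))
```

Building the command as data also means it lands in your incident log verbatim, which helps later when you have to show exactly what was done to the evidence.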
Activate your response team. Your plan should define who leads the response. If you don’t have predefined roles, assign them now: incident commander, technical lead, and legal counsel at minimum. See our full guide on building a data breach response plan for how to set this up before a breach.
Start the documentation clock. From this moment forward, log every action and decision with timestamps. This documentation is critical for regulatory compliance and insurance claims. It may also be needed in litigation.
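The logging itself can be as simple as an append-only file with UTC timestamps. A minimal sketch (the file path and field names are illustrative, not from any standard):

```python
# Minimal sketch of an append-only incident log with UTC timestamps.
# LOG_PATH and the field names are illustrative choices.
import json
from datetime import datetime, timezone

LOG_PATH = "incident_log.jsonl"  # hypothetical location

def log_action(actor: str, action: str, detail: str = "") -> dict:
    """Record one response action as a timestamped JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "detail": detail,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_action("jdoe", "isolate_host", "Disconnected WEB-01 from the network")
```

Append-only JSON lines are easy to grep, hard to silently rewrite, and trivially sortable by timestamp when you reconstruct the response for regulators or insurers.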
Once the breach is contained, you need to understand what happened. Your notification clock is already running, so assess the damage and prepare notifications in parallel.
Identify the attack vector. How did attackers get in? Stolen credentials, phishing, an unpatched vulnerability, a compromised vendor? The entry point determines what you need to fix and what other systems might be affected. The Verizon 2025 DBIR found that stolen credentials were the top initial access vector, involved in 22% of all breaches.
Determine what data was accessed. Review logs to identify which databases, files, or systems the attacker touched. Categorize the data: was it personal information (names, SSNs) or health records? The data type determines your notification obligations.
Establish the timeline. When did the breach start? When was it detected? How long did attackers have access? IBM found that the average breach takes 241 days to identify and contain. The longer the access window, the more data was likely exposed and the more complex your response will be.
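Once you've pulled the relevant timestamps out of your logs, the arithmetic is simple. A sketch with illustrative dates:

```python
# Sketch: compute the attacker's dwell time and total access window
# from log timestamps. All dates below are illustrative.
from datetime import datetime

first_access = datetime(2025, 1, 10, 3, 14)   # earliest attacker activity found in logs
detected = datetime(2025, 4, 2, 9, 30)        # when your team spotted the breach
contained = datetime(2025, 4, 3, 18, 0)       # when access was finally cut off

dwell = detected - first_access               # how long they went unnoticed
window = contained - first_access             # total time they had access
print(f"Undetected for {dwell.days} days; total access window {window.days} days")
```

Both numbers belong in your incident log and regulatory filings: the dwell time answers "why didn't you catch this sooner," and the access window bounds how much data could have been exposed.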
Count the affected records. Regulators and notification laws often have thresholds based on the number of affected individuals. You need this number for regulatory filings and to determine which state laws apply.
Notification is where legal requirements and communication strategy collide. Get this wrong and you face additional fines on top of the breach costs.
Breach notification is the legal requirement to inform regulators and affected individuals when personal data is compromised. Timelines vary by jurisdiction: 72 hours under GDPR, four business days for SEC-regulated companies, and 30-60 days under most US state laws.
Notify your legal team and insurer first. Before you contact anyone else, get your lawyer and your cyber insurer involved. Your insurer may have specific requirements about who handles the response, and late notification can be grounds for claim denial.
File with regulators on time. GDPR gives you 72 hours from discovery. The SEC gives four business days. HIPAA allows 60 days. US state laws vary widely. The strictest deadline that applies to you is your target. If you have customers in multiple jurisdictions, work from the shortest deadline outward.
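Working from the shortest deadline outward is easy to get wrong at 2 a.m., so it's worth computing rather than eyeballing. A simplified sketch, assuming the regimes listed in this article apply and ignoring holiday calendars for the SEC's business-day rule:

```python
# Sketch: find which notification deadline bites first, given a
# discovery time and the regimes that apply. The discovery date is
# illustrative, and business-day handling is simplified (the Monday
# start means four calendar days happens to equal four business days).
from datetime import datetime, timedelta

discovered = datetime(2025, 6, 2, 14, 0)  # a Monday, illustrative

deadlines = {
    "GDPR (72 hours)": discovered + timedelta(hours=72),
    "SEC (4 business days)": discovered + timedelta(days=4),
    "HIPAA (60 days)": discovered + timedelta(days=60),
}

regime, due = min(deadlines.items(), key=lambda kv: kv[1])
print(f"Strictest deadline: {regime}, due {due:%Y-%m-%d %H:%M}")
```

A real implementation would also fold in the applicable US state laws and a proper business-day calendar, but the principle is the same: compute every deadline, then work to the earliest one.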
Notify affected individuals. Tell them what happened and what they should do to protect themselves. Be specific and honest. Vague notifications erode trust faster than the breach itself. Have these templates ready before you need them.
Consider law enforcement. If criminal activity is involved, contact the FBI’s IC3 or your local FBI field office. Early law enforcement involvement can help with investigation. In some cases it can also extend notification deadlines.
Containment stops the bleeding. Recovery gets you back to normal. Remediation makes sure it doesn’t happen again.
Fix the entry point. Whatever vulnerability the attacker exploited, patch it now. If they got in through stolen credentials, force password resets across affected systems. If they exploited an unpatched vulnerability, deploy the patch. If a vendor was compromised, revoke their access until they’ve confirmed their own remediation.
Restore from clean backups. Before bringing systems back online, restore from backups you’ve verified are clean. Don’t assume your backups are safe. Attackers sometimes compromise backup systems too. Validate integrity before restoration.
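One concrete way to validate integrity is to compare each backup against a checksum recorded before the incident. A minimal sketch, assuming you have those known-good SHA-256 hashes on hand (paths are illustrative):

```python
# Sketch: verify a backup file against a known-good SHA-256 recorded
# before the incident. Only useful if the reference hashes were stored
# somewhere the attacker couldn't reach.
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in 1 MiB chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_is_clean(path: str, expected_hash: str) -> bool:
    return sha256_of(path) == expected_hash
```

A mismatch doesn't tell you what changed, only that the backup can't be trusted as-is; treat any failed check as a compromised backup until forensics says otherwise.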
Verify the attacker is gone. This is the step companies skip when they’re in a rush to get back online. Attackers often plant backdoors or create secondary access points. Scan every system for persistence mechanisms. Consider bringing in external incident response experts if you don’t have this capability in-house.
Monitor for re-entry. After remediation, increase monitoring on previously compromised systems. Watch for the same attack patterns. Dark web monitoring can tell you if stolen credentials from this breach are being sold or shared, which means attackers may try to use them again.
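At its simplest, re-entry monitoring means sweeping fresh logs for indicators of compromise (IoCs) from the original breach. A toy sketch; the IPs (RFC 5737 documentation ranges), account names, and log lines are all illustrative:

```python
# Sketch: flag log lines that match IoCs from the original breach.
# A real deployment would feed these IoCs into your SIEM instead.
SUSPECT_IPS = {"203.0.113.7", "198.51.100.42"}   # attacker infrastructure (illustrative)
SUSPECT_ACCOUNTS = {"svc-backup", "jdoe-admin"}  # credentials known to be stolen (illustrative)

def flag_reentry(log_lines):
    """Return log lines containing any known-bad IP or account name."""
    iocs = SUSPECT_IPS | SUSPECT_ACCOUNTS
    return [line for line in log_lines if any(ioc in line for ioc in iocs)]

sample = [
    "2025-04-05 02:11 sshd accepted password for svc-backup from 192.0.2.9",
    "2025-04-05 02:12 sshd accepted publickey for alice from 10.0.0.5",
]
print(flag_reentry(sample))
```

Substring matching like this is deliberately crude; the point is that the IoC list from your investigation should keep working for you after remediation, not sit in the final report.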
The breach is contained, systems are restored, notifications are sent. Now comes the part most teams skip – and it’s arguably the most important.
Run a post-incident review. Within two weeks of resolution, gather everyone involved and walk through the entire timeline. What worked? What didn’t? Where did the plan break down? Be honest. This isn’t about blame. It’s about making sure the next breach goes better.
Update your response plan. Every breach reveals gaps in your plan. Maybe the notification process took too long. Maybe nobody knew who was supposed to approve the public statement. Maybe your forensic imaging process was slower than expected. Fix these gaps now while the details are fresh.
Improve your detection capabilities. Ask the hardest question: why didn’t you catch this sooner? If the breach started with stolen credentials, consider adding credential monitoring to catch exposed passwords before they’re used. If detection took months, invest in better logging and alerting.
Brief leadership and the board. Executives need to understand what happened and what it cost. Frame this in terms of risk and dollars, not technical details. Use the cost data from IBM to contextualize your numbers.
Schedule your next tabletop exercise. Don’t wait for the quarterly cycle. Run one within 30 days using a scenario based on what just happened. Test the updated plan and make sure the fixes actually work.
The best time to prepare for a breach is before it happens. If you want to see what employee credentials are already exposed, book a demo to see how Breachsense monitors the dark web for your leaked passwords.
What is an incident response plan? It’s a step-by-step guide your team follows when a breach is detected. It covers containment and investigation, then notification and recovery. Having one ready means you’re not improvising under pressure. IBM found that tested response plans save over $1.5 million per breach.
What should you do first when you detect a breach? Contain it immediately. Isolate affected systems from the network and disable compromised accounts. Don’t reboot systems – that destroys volatile evidence your forensics team needs. Document everything you do from this point forward.
How long do you have to report a breach? It depends on your jurisdiction. GDPR requires notification within 72 hours of discovery. The SEC gives public companies four business days. HIPAA allows 60 days. US state laws vary from 30 days to ‘without unreasonable delay.’ See our full guide on data breach notification for details.
Who should be on your response team? At minimum: an incident commander (usually the CISO or security lead) and IT/security staff for technical response. You also need legal counsel for regulatory compliance and a communications lead for customer notification. An executive sponsor who can authorize system shutdowns rounds out the team.
How often should you test your plan? Run tabletop exercises at least quarterly. IBM’s data shows that the gap between tested and untested plans is over $1.5 million per breach. Staff turns over and notification laws change, so annual testing isn’t enough.
What’s the most common incident response mistake? Rebooting compromised systems before taking forensic images. This destroys volatile memory that contains evidence of how attackers got in and what they accessed. The second most common mistake is not having a communication plan ready, which leads to inconsistent messaging that makes the situation worse.
