

Learn how insider threats work and how to detect them before they cause damage.
• Most insider threats aren’t intentional. Stolen credentials and careless mistakes cause more damage than disgruntled employees.
• Compromised insiders (employees whose credentials were stolen without them knowing) are the fastest-growing type of insider threat. Stealer logs even capture session tokens that bypass MFA.
• Malicious insider attacks cost $4.92 million on average and take 287 days to identify and contain.
• Your UEBA and DLP tools won’t catch compromised insiders. You need external credential monitoring for threats that look like normal logins.
Malicious insider attacks cost $4.92 million on average. That’s higher than phishing or social engineering as initial attack vectors (IBM Cost of a Data Breach Report 2025).
The challenge? These threats come from people who already have access. Employees and contractors with legitimate credentials don’t need to hack their way in. They’re already inside.
One in three attacks now uses valid credentials as the initial access vector (IBM X-Force Threat Intelligence Index 2025). Many of those credentials were stolen without the employee ever knowing.
This guide covers what insider threats are, the five types you need to watch for, real examples, and how to catch them before they cause real damage.
Most security tools focus on keeping attackers out. Firewalls, intrusion detection, perimeter defenses. But what happens when the threat is already inside?
An insider threat is a security risk that originates from someone with authorized access to your systems or data. This includes employees and contractors who intentionally or unintentionally misuse their access to cause harm. CISA defines it as “the potential for an insider to use their authorized access to do harm.”
You’ll also hear this called “insider risk” or “insider risk management.” Same concept, different label. CISA uses “insider threat” as the standard term.
That’s what makes insider threats different from external attacks. You can’t block them at the firewall. They don’t need to exploit vulnerabilities. They’re already authenticated.
Some industries deal with higher insider risk than others. Financial services and healthcare handle sensitive data that makes them prime targets. Manufacturing and pharmaceutical companies are targets for intellectual property theft. If you hold customer data or trade secrets, insider threat management should be a priority.
Once you understand the types of insider threats, you can build the right defenses. What works against a malicious insider won’t stop a compromised credential attack.
Insider threats fall into five categories. Each has different motivations and requires different detection methods. Some frameworks simplify this into two types of insider threats (intentional vs. unintentional), but five categories give you a more useful picture.
Malicious insiders, also called intentional insider threats, deliberately exploit their access. Financial gain drives most cases, but revenge and corporate espionage also play roles.
Picture a database admin who’s been passed over for promotion twice. They start exporting customer records to a personal drive. They know which tables hold the most valuable data. They know which exports won’t trigger alerts. That knowledge makes them far more dangerous than any external attacker.
Malicious insider attacks cost $4.92 million on average and take 287 days to identify and contain (IBM Cost of a Data Breach Report 2025). For more on financial impact and real cases, see our guide on insider data breaches.
Negligent insiders are the most common category, and the classic unintentional insider threat. These employees don't intend harm but create security risks through careless mistakes.
Think of an engineer who reuses the same password across personal and work accounts. When a personal account gets breached, attackers try those credentials against corporate systems. The employee never meant to create a risk. Their password hygiene did it for them.
Other common examples: falling for phishing attacks or misconfiguring cloud storage. Sending sensitive files to personal email counts too. One negligent mistake can expose millions of records.
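One practical control for this failure mode is screening passwords against known breach data. The sketch below follows the k-anonymity scheme used by the Have I Been Pwned Pwned Passwords API: only the first five characters of the SHA-1 hash would ever leave your network. The HTTP call itself is omitted, and the sample response body and counts are fabricated for illustration.

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix is ever sent to the API; the full
    hash never leaves your machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def exposure_count(api_response: str, suffix: str) -> int:
    """Parse a Pwned Passwords range response ("SUFFIX:COUNT" lines)
    and return how many times this password appears in breach data."""
    for line in api_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hibp_range_query("password")
# The real check would GET https://api.pwnedpasswords.com/range/<prefix>
# and feed the response body to exposure_count(). Fabricated sample:
sample = f"{suffix}:1234567\nFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:2"
print(prefix, exposure_count(sample, suffix))
```

A nonzero count means the password is circulating in breach corpora and should be rejected at set-time, even if it meets complexity rules.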
Compromised insiders are the fastest-growing category, and the one most teams underestimate. A compromised insider is an employee, contractor, or partner whose credentials were stolen by external attackers, typically through phishing or infostealer malware. The person has no idea their account is being used maliciously. From your security logs, it looks like legitimate access.
IBM X-Force reports that one in three attacks now uses valid credentials as the initial access vector, and that infostealer delivery via phishing increased 84% in 2024. These malware variants steal credentials, cookies, and autofill data. That gives attackers everything they need to impersonate your employees.
Compromised insiders are the hardest to detect because the credentials are real. The access patterns may look routine. Only external threat intelligence reveals the compromise.
Third-party insiders are contractors and vendors, who often have privileged access with less oversight than employees. When they get breached, your data goes with them.
Consider a managed IT provider with VPN access to 50 client networks. One phishing email compromises an admin’s credentials. Now the attacker has a direct path into every client. That’s not a hypothetical. It’s how major supply chain attacks work.
Third-party risk management requires the same attention as internal employee monitoring, but many teams treat vendor access as an afterthought.
Collusive insiders partner with external attackers for coordinated attacks. An employee might provide credentials or disable security controls in exchange for payment.
These threats combine insider knowledge with external attack capabilities. The insider knows what your security tools monitor and helps attackers avoid triggering alerts.
| Type | Motivation | How to Detect | Real Example |
|---|---|---|---|
| Malicious | Financial gain, revenge | UEBA, DLP, access logging | Cash App 2022: departing employee downloaded 8.2M customer records |
| Negligent | None (carelessness) | Phishing simulations, training metrics | Twitter 2020: employees fell for phone-based social engineering |
| Compromised | External attacker using stolen creds | Dark web monitoring, credential checks | Anthem 2015: phished credentials led to 78.8M records stolen |
| Third-party | Varies (attacker exploits vendor access) | Vendor access audits, credential monitoring | Capital One 2019: ex-AWS employee exploited cloud access |
| Collusive | Financial gain (insider recruited) | Behavioral anomalies, reporting channels | Tesla 2020: $1M offered to employee to plant malware |
Most security tools are built to catch external attacks. Firewalls and endpoint protection assume the attacker is coming from outside. Insider threats flip that assumption.
When someone with valid credentials accesses a system, it looks normal. There’s no malware signature to catch. No brute-force login attempt to flag. The insider threat behavior blends in with everyday work.
That’s why insider threats take so long to contain. According to Ponemon Institute research, the average insider incident takes 77 days to contain. During that time, attackers can move laterally and exfiltrate data without triggering alerts.
Insider threat detection gets even harder with compromised insiders. Your monitoring tools see a real employee logging in from a plausible location with valid credentials. Nothing looks wrong until you check external sources and find those credentials for sale on a dark web market.
Traditional insider threat programs focus on malicious and negligent insiders. But compromised insiders are growing faster than both.
Here’s why. Stealer logs have changed the economics of credential theft. Infostealer malware installed on a device extracts and decrypts everything saved in the browser, from stored passwords to session cookies and autofill data. Even strong, unique passwords get stolen this way.
These logs get posted to dark web markets within hours of the infection.
The session token angle is even worse. Stealer logs don’t just grab passwords. They capture active session cookies. If an attacker has a valid session token, they bypass both the login page and MFA. They’re already authenticated. No password reset will kick them out unless you also revoke the session.
This is why credential monitoring matters as much as internal monitoring. Your UEBA tool won’t flag anything unusual when the “employee” is logging in with real credentials from a plausible location. Only dark web monitoring catches these credentials before attackers use them.
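To make the monitoring side concrete, here is a minimal sketch of matching leaked stealer-log records against a corporate domain. The `url|username|password` layout is a common stealer-log format, but the specific lines, domain, and helper name below are made up for illustration.

```python
# Minimal sketch: scan stealer-log entries (url|username|password records,
# a common format in leaked logs) for accounts on your corporate domain.
# The log lines and domain below are fabricated for illustration.

CORPORATE_DOMAIN = "example.com"

stealer_log = [
    "https://vpn.example.com/login|j.doe@example.com|Hunter2!",
    "https://webmail.other.org/|personal.account@gmail.com|sunshine1",
    "https://sso.example.com/auth|a.smith@example.com|Spring2024#",
]

def exposed_corporate_accounts(entries: list[str], domain: str) -> list[str]:
    """Return usernames from stealer-log entries that belong to `domain`."""
    hits = []
    for entry in entries:
        try:
            _url, username, _password = entry.split("|", 2)
        except ValueError:
            continue  # skip malformed lines
        if username.lower().endswith("@" + domain):
            hits.append(username)
    return hits

print(exposed_corporate_accounts(stealer_log, CORPORATE_DOMAIN))
# Each hit warrants session revocation plus a forced reset, because the
# paired session cookies may still be valid even after a password change.
```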
In July 2020, attackers compromised Twitter’s internal systems by targeting employees with phone-based social engineering. They posed as IT support, convincing remote employees to enter credentials on a phishing site.
Once inside, they used Twitter’s internal tools to take over high-profile accounts including Barack Obama and Elon Musk. They posted cryptocurrency scams that netted over $100,000 in Bitcoin.
Lesson: Remote work expanded the attack surface. Employees who would have questioned an in-person request fell for phone-based social engineering.
In a breach disclosed in 2022, a former Cash App employee downloaded reports containing customer data after leaving the company. The reports included full names, brokerage account numbers, and stock trading activity for 8.2 million customers.
Block (Cash App’s parent company) discovered the breach months later. The delay came from not revoking access when the employee left.
Lesson: Departing employees need immediate access revocation. This breach was entirely preventable with proper offboarding.
In 2020, a Russian national offered a Tesla employee $1 million to install malware on the company’s network. The plan: deploy ransomware, steal data, and extort Tesla.
The employee reported the attempt to Tesla, who contacted the FBI. The attacker was arrested.
Lesson: External attackers actively recruit insiders. This attack failed because the employee had clear reporting channels and used them.
In 2019, a former AWS employee exploited a misconfigured web application firewall to access Capital One’s cloud environment. She extracted data from over 100 million customer accounts.
Capital One paid $80 million in regulatory fines and $190 million in a class-action settlement.
Lesson: Third-party access requires the same monitoring as internal access. The attacker had insider knowledge of AWS infrastructure from previous employment.
Insider threats leave traces in two familiar areas: behavioral signals (unusual access patterns) and technical signals (excessive downloads, login anomalies). But there’s a third area most teams miss. External indicators, like credentials showing up on dark web markets, reveal compromised insiders that internal tools can’t see.
Behavioral and technical indicators catch malicious and negligent insiders. Compromised insiders are the gap most programs have. For a complete breakdown of what to watch for, see our detailed guide on insider threat indicators.
Start with the basics for insider threat mitigation. Least privilege access limits what any single compromised account can reach. Strong authentication slows down credential theft. Insider threat awareness training reduces negligent incidents.
But prevention alone isn’t enough. You need detection tools that catch what slips through.
Internal tools like UEBA and DLP establish behavioral baselines and track data movement. Credential monitoring catches compromised insiders that internal tools miss entirely, alerting you when employee credentials appear in breach data or stealer logs.
Neither approach works well in isolation. When external intelligence reveals compromised credentials, your internal monitoring should immediately increase scrutiny on those accounts. When UEBA flags anomalous behavior, check whether those credentials have appeared in recent breaches.
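The feedback loop described above can be sketched as simple set logic: accounts flagged by both internal behavioral monitoring and external credential exposure get top priority. The account names and priority labels below are illustrative assumptions, not a vendor's API.

```python
# Sketch of correlating UEBA anomalies with external credential exposure.
# Accounts appearing in BOTH feeds are the likeliest compromised insiders.

def triage(ueba_anomalies: set[str], breach_exposed: set[str]) -> dict[str, str]:
    """Assign a review priority to each flagged account."""
    priorities = {}
    for account in ueba_anomalies | breach_exposed:
        if account in ueba_anomalies and account in breach_exposed:
            priorities[account] = "critical"   # likely compromised insider
        elif account in breach_exposed:
            priorities[account] = "high"       # exposed creds, no anomaly yet
        else:
            priorities[account] = "medium"     # anomaly only; could be benign
    return priorities

result = triage(
    ueba_anomalies={"j.doe", "a.smith"},
    breach_exposed={"a.smith", "b.jones"},
)
print(result)
```

The design choice here is deliberate: external exposure without an anomaly still outranks an anomaly alone, because stolen credentials are a confirmed risk while a behavioral blip may be benign.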
For a comparison of detection platforms, see our guide to insider threat detection tools. For step-by-step prevention strategies covering access controls and offboarding procedures, see our insider threat prevention guide.
An insider threat program coordinates detection and response across your organization. Without one, you’re relying on ad hoc responses that create inconsistency and legal risk.
The goal is simple: catch insider threats early and respond in a way that’s consistent and defensible. CISA’s insider threat mitigation guide lays out the framework most organizations follow.
A good program pulls together security, HR, and legal. Security handles insider threat monitoring and detection. HR has access to employment records and performance history. Legal makes sure investigations hold up. Each function has a role.
You don’t need to build everything at once. Start with the basics: define who monitors for insider threat indicators, who decides when to escalate, and who conducts investigations. Add insider threat awareness training so employees know how to report suspicious behavior. Then layer in detection tools and external credential monitoring over time.
The programs that fail are the ones that skip steps. Deploying insider threat detection software without investigation procedures creates alerts nobody acts on. Training without policies gives employees nothing to follow.
Insider threats aren’t just disgruntled employees stealing data. Compromised credentials and employee negligence create risks that traditional security tools miss.
The numbers tell the story. A third of attacks use valid credentials. Infostealer delivery increased 84% in 2024. Compromised insiders are growing faster than any other category, and they’re the hardest to catch without external intelligence.
Check your organization’s dark web exposure to find compromised credentials before attackers use them.
You often can’t tell a malicious insider from a compromised one using logs alone. Both look like legitimate access. The difference is intent: a malicious insider knows what they’re doing, while a compromised insider has no idea their account is being used. Dark web monitoring helps you tell them apart by flagging stolen credentials before you start investigating the account activity.
Negligent insiders are the most common type. These employees don’t intend harm but create security risks through careless actions like clicking phishing links or reusing passwords across personal and work accounts.
MFA stops some credential-based attacks, but not all. Stealer logs capture session tokens that bypass MFA entirely. The attacker doesn’t need the password or the second factor. They already have an authenticated session. That’s why credential monitoring matters alongside MFA.
If employee credentials turn up in breach data, revoke all active sessions for the affected account immediately. A password reset alone isn’t enough if session tokens were stolen. Then investigate the source: if the credentials came from a stealer log, the device is infected and needs to be isolated.
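That response sequence can be sketched as an ordered playbook. Everything below is a hedged outline with hypothetical step names; the key point it encodes is that session revocation must come before (or alongside) the password reset, since a stolen session cookie survives a reset.

```python
# Sketch of a credential-exposure response playbook. Step names are
# hypothetical placeholders, not calls to any real IdP or EDR API.

def response_plan(source_is_stealer_log: bool) -> list[str]:
    """Return ordered response steps for a credential-exposure incident."""
    # Revoke sessions first: a password reset alone won't evict an
    # attacker holding a valid session token.
    steps = ["revoke_all_active_sessions", "force_password_reset"]
    if source_is_stealer_log:
        # Stealer-log provenance means the endpoint itself is infected.
        steps.append("isolate_infected_device")
    steps.append("investigate_account_activity")
    return steps

print(response_plan(source_is_stealer_log=True))
```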
Vendor insider threats are harder to detect because you have less visibility into their activity. They often have privileged access with less monitoring than employees. When a vendor gets breached, your data leaks through their systems. Third-party risk management requires the same attention as internal monitoring.
An insider threat program exists to detect insider threats early and respond consistently. A formal program coordinates security, HR, and legal so investigations are documented and defensible. CISA’s framework is the standard most organizations follow.
Malicious insider attacks take 287 days on average to identify and contain (IBM Cost of a Data Breach Report 2025). That’s longer than most external attacks. Continuous credential monitoring shortens detection time by catching compromised accounts as soon as they appear in breach data.
