EPSS vs CVSS: Why Severity Score Alone Gets Patching Wrong
If you've ever sorted a vulnerability scan report by CVSS score and started patching from the top, you're in good company. It's the default workflow at most organizations. It's also a fundamentally flawed approach to risk reduction.
The problem isn't that CVSS is wrong. It's that CVSS answers the wrong question. CVSS tells you how bad a vulnerability could be if exploited. It doesn't tell you how likely it is to actually be exploited. That distinction is the difference between patching theater and genuine risk reduction.
What CVSS actually measures
CVSS (Common Vulnerability Scoring System) evaluates vulnerabilities on a 0-10 scale based on technical characteristics: attack vector, complexity, privileges required, user interaction, and impact on confidentiality, integrity, and availability.
A CVSS 9.8 means: if someone exploits this, the damage potential is very high, and the attack is easy to execute. It does not mean the vulnerability is likely to be exploited. Thousands of CVSS 9.0+ vulnerabilities exist that have never been exploited in the wild and likely never will be.
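Those technical characteristics are encoded in the CVE's vector string. A minimal sketch of unpacking a v3.1 vector into its metrics (the `parse_cvss_vector` helper is illustrative, not a library function; the vector shown is the standard encoding of a 9.8 critical):

```python
# Split a CVSS v3.x vector string like "CVSS:3.1/AV:N/AC:L/..." into a
# metric map. The example vector encodes a 9.8: network-reachable, low
# complexity, no privileges, no user interaction, high C/I/A impact.

def parse_cvss_vector(vector: str) -> dict:
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    return dict(part.split(":", 1) for part in parts[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"])  # "N" -> attack vector: network
print(metrics["PR"])  # "N" -> privileges required: none
```

Notice that nothing in the vector says anything about whether anyone is actually attacking this CVE. That information lives elsewhere.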
The NVD currently contains over 250,000 CVEs. Roughly 20,000 of those have a CVSS score of 9.0 or higher. No organization can patch 20,000 critical vulnerabilities simultaneously, so triaging purely by CVSS means you're spreading your effort across thousands of vulnerabilities with equal theoretical severity but vastly different real-world risk.
What EPSS measures — and why it matters
EPSS (Exploit Prediction Scoring System) takes a fundamentally different approach. Developed by FIRST (the Forum of Incident Response and Security Teams), EPSS uses machine learning to estimate the probability that a vulnerability will be exploited in the wild within the next 30 days.
The model considers hundreds of features: whether exploit code exists, social media mentions, the age of the vulnerability, the type of weakness (CWE), whether similar vulnerabilities have been exploited, and many more signals. EPSS scores are updated daily, reflecting the evolving threat landscape.
An EPSS score of 0.85 means the model estimates an 85% probability that exploitation activity will be observed in the wild within the next 30 days. A score of 0.02 means a 2% chance. That is directly actionable intelligence.
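Because EPSS scores are probabilities rather than abstract severity points, they compose with ordinary probability rules. A quick sketch, under the simplifying assumption that exploitation events are independent (in reality they are often correlated), of the chance that at least one CVE in a set gets exploited:

```python
# EPSS scores are probabilities, so for a set of vulnerabilities the
# chance that at least one is exploited in the next 30 days is
# 1 - prod(1 - p_i), assuming independence between events.

from math import prod

def prob_any_exploited(epss_scores: list[float]) -> float:
    return 1.0 - prod(1.0 - p for p in epss_scores)

# One high-probability CVE dominates a pile of low-probability ones:
print(round(prob_any_exploited([0.85]), 3))       # 0.85
print(round(prob_any_exploited([0.02] * 10), 3))  # 0.183
```

Ten "2% chance" vulnerabilities together carry less exploitation risk than one CVE at 0.85. No arithmetic like this is possible with CVSS points, which is exactly why EPSS is the better triage signal.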
The mismatch in practice
Here's where things get concrete. Consider two representative scenarios:
Vulnerability A: CVSS 9.8, EPSS 0.01
A theoretical remote code execution in an obscure library. No known exploit code. No active campaigns. CVSS says "critical." EPSS says "1% chance of exploitation."
Vulnerability B: CVSS 6.5, EPSS 0.87
A privilege escalation in a widely-deployed authentication library. Exploit code is public. Active exploitation detected. CVSS says "medium." EPSS says "87% chance of exploitation."
If you're sorting by CVSS, you patch Vulnerability A first. But Vulnerability B is the one that's actually going to hurt you. CISA's Known Exploited Vulnerabilities (KEV) catalog is full of medium-severity CVEs that caused real breaches because organizations deprioritized them based on CVSS alone.
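The flip in patch order is easy to see in code. A tiny sketch using the two vulnerabilities above (the identifiers are placeholders, not real CVEs):

```python
# Sorting the same two vulnerabilities by CVSS vs. by EPSS reverses
# the patch order.

vulns = [
    {"id": "VULN-A", "cvss": 9.8, "epss": 0.01},
    {"id": "VULN-B", "cvss": 6.5, "epss": 0.87},
]

by_cvss = sorted(vulns, key=lambda v: v["cvss"], reverse=True)
by_epss = sorted(vulns, key=lambda v: v["epss"], reverse=True)

print([v["id"] for v in by_cvss])  # ['VULN-A', 'VULN-B']
print([v["id"] for v in by_epss])  # ['VULN-B', 'VULN-A']
```

Two lines of sorting logic, two opposite remediation queues. Scale that divergence across a backlog of thousands of findings and the cost of picking the wrong key becomes obvious.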
CISA KEV: the ground truth
The CISA KEV catalog provides a third lens: confirmed active exploitation. When a CVE appears on the KEV list, it's not theoretical. It's being used in attacks right now. Under Binding Operational Directive 22-01, federal agencies are mandated to remediate KEV entries within specific deadlines (typically 2-3 weeks for newly added entries).
As of early 2026, the KEV catalog contains around 1,200 CVEs. The median CVSS score of KEV entries is approximately 8.1 — high, but not exclusively 9.0+. Roughly 15% of KEV entries have a CVSS score below 7.5. These are vulnerabilities that a CVSS-only approach would deprioritize, despite being actively exploited.
A better prioritization framework
The most effective approach combines all three signals:
| Priority | Criteria | Action |
|---|---|---|
| P0 | On CISA KEV list | Patch immediately (within days) |
| P1 | EPSS ≥ 0.5 (50%+ exploitation probability) | Patch this sprint |
| P2 | EPSS ≥ 0.1 OR CVSS ≥ 9.0 | Patch this cycle |
| P3 | Everything else | Scheduled maintenance |
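The tiering above reduces to a few lines of code. A minimal sketch, where `kev_ids` stands in for a set built from the CISA KEV catalog feed and the thresholds mirror the table:

```python
# Assign a P0-P3 remediation tier from KEV status, EPSS, and CVSS.
# KEV membership always wins; EPSS drives the next two tiers; CVSS
# acts as a severity backstop at the P2 level.

def priority(cve_id: str, cvss: float, epss: float, kev_ids: set[str]) -> str:
    if cve_id in kev_ids:
        return "P0"  # confirmed active exploitation: patch within days
    if epss >= 0.5:
        return "P1"  # >=50% exploitation probability: patch this sprint
    if epss >= 0.1 or cvss >= 9.0:
        return "P2"  # elevated probability or critical severity: this cycle
    return "P3"      # everything else: scheduled maintenance

kev = {"CVE-2024-0001"}  # placeholder entry, not a real KEV lookup
print(priority("CVE-2024-0001", 6.5, 0.30, kev))  # P0
print(priority("CVE-2024-0002", 9.8, 0.01, kev))  # P2
print(priority("CVE-2024-0003", 5.3, 0.04, kev))  # P3
```

Note the second call: a CVSS 9.8 with negligible EPSS lands in P2, behind any KEV entry regardless of severity.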
This framework means you might patch a CVSS 6.5 KEV entry before a CVSS 9.8 with no known exploit. That feels counterintuitive if you're used to CVSS-first sorting, but it maps to actual risk far more accurately.
Practical implementation
The challenge with this approach is operational. You need to cross-reference three data sources (NVD for CVSS, FIRST for EPSS, CISA for KEV) for every vulnerability in your environment. Doing this manually in spreadsheets doesn't scale.
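To give a flavor of what the EPSS leg of that cross-referencing looks like, here is a hedged sketch against the public FIRST API at `api.first.org/data/v1/epss` (batching several CVEs per request; the response parsing assumes the API's documented JSON envelope with a `data` array of `cve`/`epss` rows):

```python
# Pull EPSS scores in bulk from the public FIRST API and map each
# CVE id to its exploitation probability.

import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"

def epss_query_url(cve_ids: list[str]) -> str:
    return f"{EPSS_API}?cve={','.join(cve_ids)}"

def parse_epss_response(body: str) -> dict[str, float]:
    """Map CVE id -> EPSS probability from the API's JSON envelope."""
    return {row["cve"]: float(row["epss"]) for row in json.loads(body)["data"]}

def fetch_epss(cve_ids: list[str]) -> dict[str, float]:
    with urllib.request.urlopen(epss_query_url(cve_ids)) as resp:
        return parse_epss_response(resp.read().decode())

# Offline example using the documented response shape:
sample = '{"data": [{"cve": "CVE-2021-44228", "epss": "0.975"}]}'
print(parse_epss_response(sample))  # {'CVE-2021-44228': 0.975}
```

That's one of three feeds; you still need the NVD API for CVSS and the KEV JSON from CISA, plus a join across all three for every asset in your inventory, refreshed daily because EPSS scores move.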
This is exactly why we built VulnXplorer. You map your software stack, and it automatically pulls CVEs, flags KEV entries, ranks by EPSS probability, and generates a prioritized remediation order. You can try the EPSS Score Checker to look up exploitation probability for any CVE, or search for CVEs affecting your stack.
Key takeaways
- CVSS measures potential impact. EPSS measures exploitation likelihood. They answer different questions.
- Sorting by CVSS alone means you're treating thousands of vulnerabilities as equally urgent when they're not.
- CISA KEV entries are ground truth — a vulnerability on the KEV list is being actively exploited regardless of its CVSS score.
- The most effective prioritization combines all three: KEV status first, then EPSS probability, then CVSS severity as a tiebreaker.
- EPSS scores change daily. A vulnerability that was low-risk last month may be high-risk today if exploit code is published.
Try it yourself
Look up the EPSS score for any CVE with our free checker, or map your full stack to see prioritized remediation guidance.