Dependency Vulnerabilities and Deployment Risk: What Most Teams Miss
Most engineering teams scan dependencies for vulnerabilities, but most treat scanning as a separate concern from deployment risk. The teams with the lowest security incident rates do something different: they factor known vulnerabilities into their deployment risk assessment at merge time, rather than in a separate security audit that runs on a different cadence from the deployment pipeline.
The Timing Gap
The average time between a CVE being published and an organization deploying a fix is 60 days. But 40% of exploitation attempts occur within the first 30 days of CVE publication. Dependency vulnerability scanning that is not connected to the deployment pipeline creates a false sense of coverage during the highest-risk window.
The Problem with Scanning-as-Audit
The typical dependency scanning setup: a tool like Snyk, Dependabot, or OWASP Dependency-Check runs on a schedule (nightly, weekly) and creates issues or PRs when new vulnerabilities are found. Security reviews these findings, triages them by severity, and creates remediation tickets that go into the engineering backlog.
This setup has two critical failure modes from a deployment risk perspective:
Vulnerability-to-fix lag. The backlog creates lag. High-severity vulnerabilities may sit for days or weeks between discovery and fix deployment. During that window, every production deployment is a deployment into a system with known exploitable vulnerabilities. The deployment pipeline has no awareness of this context.
New vulnerability introduction. When a PR introduces a new dependency or upgrades an existing one, it may introduce known vulnerabilities that did not exist in the prior dependency set. Scanning at PR time catches this. Scanning on a schedule may catch it in the next scan cycle — after the dependency is already in production.
CVSS vs. EPSS: Scoring Vulnerabilities for Deployment Impact
The Common Vulnerability Scoring System (CVSS) is the standard severity metric for CVEs — a 0–10 score that rates the theoretical impact of a vulnerability. CVSS 7+ is "High," CVSS 9+ is "Critical." Most teams block on CVSS Critical by default.
The problem with CVSS as a deployment gate is that it does not measure exploitability in practice. A CVSS 9.8 vulnerability in a library your application uses for an internal utility function that is never exposed to network traffic is very different from a CVSS 7.5 vulnerability in your authentication middleware that is called on every request.
The Exploit Prediction Scoring System (EPSS) was developed by FIRST to address this. EPSS produces a probability score (0–1) that a given CVE will be exploited in the wild within the next 30 days, based on threat intelligence, weaponization data, and ecosystem signals. A CVE with CVSS 9.8 but EPSS 0.01 (1% exploitation probability) is categorically different from a CVE with CVSS 7.2 and EPSS 0.89 (89% exploitation probability).
| CVSS | EPSS | Risk Assessment | Deploy Recommendation |
|---|---|---|---|
| 9.8 (Critical) | 0.01 (1%) | High theoretical, low practical risk | Warn, track remediation timeline |
| 9.8 (Critical) | 0.89 (89%) | Critical — actively exploited in the wild | Block non-remediation deploys |
| 7.5 (High) | 0.45 (45%) | Elevated practical risk — watch closely | Require explicit security sign-off |
| 5.5 (Medium) | 0.02 (2%) | Standard backlog item | Normal deploy, track remediation |
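The decision table above can be sketched as a small policy function. The thresholds here (EPSS 0.5 to block, EPSS 0.2 plus CVSS 7.0 for sign-off) are illustrative assumptions chosen to reproduce the table, not a prescribed standard:

```python
# Sketch: map a vulnerability's CVSS and EPSS scores to a deploy
# recommendation, mirroring the decision table above. Thresholds
# are illustrative assumptions.

def deploy_recommendation(cvss: float, epss: float) -> str:
    """Combine theoretical severity (CVSS) with practical exploitability (EPSS)."""
    if epss >= 0.5:
        # Likely (or actively) exploited in the wild: block regardless of CVSS
        return "block"
    if epss >= 0.2 and cvss >= 7.0:
        # Elevated practical risk on a high-severity CVE
        return "require-security-signoff"
    if cvss >= 9.0:
        # Theoretically critical but low exploitation probability
        return "warn-and-track"
    return "normal-deploy"
```

Checking the rows of the table: CVSS 9.8 with EPSS 0.89 blocks, CVSS 9.8 with EPSS 0.01 only warns, and CVSS 7.5 with EPSS 0.45 requires sign-off.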
Integrating Vulnerability Data into the Deploy Pipeline
The implementation has two phases: scanning at PR time (to catch new vulnerability introductions) and ongoing monitoring (to surface when the vulnerability context changes for an existing dependency).
Phase 1: Scanning at PR Time
```yaml
# GitHub Actions: Scan dependencies on PR and fail on high-EPSS vulnerabilities
name: Dependency Vulnerability Scan
on:
  pull_request:
    paths:
      - 'package.json'
      - 'package-lock.json'
      - 'requirements.txt'
      - 'Pipfile.lock'
      - 'go.sum'
      - 'Gemfile.lock'
jobs:
  vulnerability-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Snyk vulnerability scan
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high --json-file-output=snyk-output.json
        continue-on-error: true  # Don't fail here; the EPSS check below decides
      - name: Check EPSS scores for high-severity CVEs
        run: |
          python3 scripts/check_epss.py snyk-output.json
        env:
          EPSS_BLOCK_THRESHOLD: "0.5"  # Block if EPSS >= 50%
```

```python
# scripts/check_epss.py
import json
import os
import sys

import requests

EPSS_BLOCK_THRESHOLD = float(os.environ.get("EPSS_BLOCK_THRESHOLD", "0.5"))

def get_epss_scores(cve_ids: list[str]) -> dict[str, float]:
    """Fetch EPSS scores from the FIRST API."""
    if not cve_ids:
        return {}
    cve_param = ",".join(cve_ids)
    resp = requests.get(
        f"https://api.first.org/data/v1/epss?cve={cve_param}", timeout=30
    )
    resp.raise_for_status()
    scores = {}
    for item in resp.json().get("data", []):
        scores[item["cve"]] = float(item["epss"])
    return scores

def main():
    snyk_output_path = sys.argv[1]
    with open(snyk_output_path) as f:
        snyk_data = json.load(f)

    # Extract CVE IDs from Snyk output
    cve_ids = []
    for vuln in snyk_data.get("vulnerabilities", []):
        cve_ids.extend(vuln.get("identifiers", {}).get("CVE", []))

    epss_scores = get_epss_scores(cve_ids)

    blocking_vulns = []
    for vuln in snyk_data.get("vulnerabilities", []):
        for cve_id in vuln.get("identifiers", {}).get("CVE", []):
            epss = epss_scores.get(cve_id, 0)
            if epss >= EPSS_BLOCK_THRESHOLD:
                blocking_vulns.append({
                    "cve": cve_id,
                    "epss": epss,
                    "package": vuln.get("packageName"),
                    "severity": vuln.get("severity"),
                })

    if blocking_vulns:
        print("BLOCKING: High-EPSS vulnerabilities detected:")
        for v in blocking_vulns:
            print(f"  {v['cve']} (EPSS: {v['epss']:.0%}) in {v['package']}")
        sys.exit(1)
    else:
        print("PASS: No high-EPSS vulnerabilities found.")

if __name__ == "__main__":
    main()
```

Phase 2: Ongoing Monitoring
Because EPSS scores change as threat intelligence evolves, a dependency that was safe to deploy last week may become high-EPSS this week due to new exploitation activity. Ongoing monitoring should re-check EPSS scores for all production dependencies on a daily basis and alert when any dependency crosses the blocking threshold.
This monitoring creates a different kind of deployment alert: "Your current production deployment of service X contains CVE-2026-XXXX which has crossed the EPSS threshold — schedule a remediation deployment." This alert has a direct path to action (deploy the fix) rather than requiring triage before action.
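The core of that daily re-check is detecting threshold crossings, not just high scores: a CVE that was already above the threshold yesterday should not re-alert every day. A minimal sketch of that comparison, assuming the scores come from the same FIRST EPSS API call used in check_epss.py (the function name and dict shapes are hypothetical):

```python
# Sketch: flag CVEs in the production dependency set that have *newly*
# crossed the EPSS blocking threshold since the previous daily check.
# `previous` and `current` map CVE IDs to EPSS scores; the fetch that
# populates them is assumed to reuse get_epss_scores from check_epss.py.

EPSS_BLOCK_THRESHOLD = 0.5

def newly_blocking(previous: dict[str, float], current: dict[str, float]) -> list[str]:
    """Return CVE IDs that crossed the blocking threshold since the last check."""
    crossed = []
    for cve, score in current.items():
        # A CVE not seen yesterday defaults to 0.0, so new high-EPSS CVEs alert too
        if score >= EPSS_BLOCK_THRESHOLD and previous.get(cve, 0.0) < EPSS_BLOCK_THRESHOLD:
            crossed.append(cve)
    return crossed
```

Only the crossings trigger the "schedule a remediation deployment" alert; CVEs that were already above the threshold stay on the existing remediation ticket.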
When Vulnerabilities Should Block a Deployment
Not every vulnerability should block every deployment. A vulnerability in a dependency you are not introducing or modifying is a backlog item, not a deployment blocker. A vulnerability introduced by this specific PR — a new dependency or an upgraded one that has known CVEs — is more appropriately treated as a deployment blocker.
The clearest blocking policy: a PR that introduces a dependency with CVSS > 9.0 AND EPSS > 0.5 should be blocked at merge. This combination (theoretically severe and practically exploited) is the highest-risk scenario and the one most likely to result in a security incident within 30 days of deployment.
For vulnerabilities in existing dependencies (not introduced by this PR), the appropriate mechanism is a separate monitoring alert that triggers a prioritized remediation ticket — not a blanket block on unrelated deployments.
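One way to implement the introduced-versus-existing distinction is to scan both the base branch and the PR branch and diff the resulting CVE sets. A minimal sketch, assuming each scan result has been reduced to a set of CVE IDs (the function name is hypothetical):

```python
# Sketch: split a PR's scan results into CVEs introduced by this PR
# (candidate merge blockers) and CVEs already present on the base branch
# (backlog / monitoring items, not blockers for unrelated deploys).

def classify_cves(base_cves: set[str], pr_cves: set[str]) -> dict[str, set[str]]:
    """Partition the PR branch's CVEs by whether the base branch already had them."""
    return {
        "introduced": pr_cves - base_cves,    # apply the blocking policy here
        "preexisting": pr_cves & base_cves,   # route to remediation tickets
    }
```

Only the "introduced" set is eligible to block the merge; the "preexisting" set flows into the monitoring path described above.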
Koalr factors vulnerability signals into deploy risk scores
Koalr integrates with Snyk and Dependabot to incorporate dependency vulnerability data — weighted by EPSS exploitability — into the deploy risk score for every PR. High-EPSS vulnerability introductions automatically elevate the PR risk score and can trigger blocking check runs.
Add vulnerability signals to your deploy risk score
Koalr connects vulnerability scan data to deploy risk scoring — surfacing high-EPSS CVEs in the PR before merge so your team can remediate before deployment, not after. Connect GitHub in 5 minutes.