
AI-Powered Alert Triage

Use LLMs to summarize, classify, and route alerts faster, while keeping the playbook logic and analyst workflow under your control.

This page is the product view: what AI triage does well, how it fits into the OpenSOAR workflow model, and why it matters. For exact setup and operational details, see docs.opensoar.app.

Why AI belongs in triage

The first step in alert handling is usually interpretation. Analysts read payloads, inspect context, decide whether the signal looks real, and then document their reasoning. That is exactly where modern LLMs are useful.

AI does not replace incident response judgment. It compresses the cost of understanding what the alert means before a human decides what to do next.

What OpenSOAR's AI triage does

  • Summarization turns raw payloads into readable analyst context.
  • Severity suggestion proposes how urgent the alert likely is.
  • Determination classifies alerts as benign, suspicious, or malicious.
  • Reasoning provides an explanation that analysts can review.
  • Correlation support helps connect similar or related alerts across sources.

Where this model fits best

High-volume alert queues

AI is strongest where analysts burn time on first-pass interpretation and prioritization.

Context compression

Summaries, grouped evidence, and suggested severity reduce time-to-understanding before investigation begins.

Human-in-the-loop routing

The best operating model is confidence-based automation, not blind autonomous response.

How it works inside a playbook

AI triage is not a separate product tier or disconnected chatbot. It is part of the normal OpenSOAR workflow surface, so you can call the built-in AI endpoints and then keep the decision logic in your own playbooks and operator process.

ai_triage_api.sh
curl -X POST http://localhost:8000/api/v1/ai/triage \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"alert_id":"'"$ALERT_ID"'"}'

curl -X POST http://localhost:8000/api/v1/ai/summarize \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"alert_id":"'"$ALERT_ID"'"}'
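A typical playbook step captures the triage response and routes on its fields. This is a minimal sketch: the response field names (`determination`, `severity`, `summary`) are assumptions based on the feature list above, not a documented schema — check docs.opensoar.app for the actual response shape.

```shell
# Sketch: parse a triage response and extract fields for playbook routing.
# The field names below are assumptions for illustration only.
# In a real playbook the response would come from the API call:
# response=$(curl -s -X POST http://localhost:8000/api/v1/ai/triage \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/json" \
#   -d '{"alert_id":"'"$ALERT_ID"'"}')
response='{"determination":"malicious","severity":"high","summary":"Brute force SSH from a known-bad IP"}'

# Extract fields with python3 so we do not depend on jq being installed.
determination=$(printf '%s' "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["determination"])')
severity=$(printf '%s' "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["severity"])')

echo "determination=$determination severity=$severity"
```

The decision of what to do with `determination` and `severity` stays in your own playbook logic, per the workflow model above.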

Bring your own model

OpenSOAR is model-agnostic at the deployment level. Use cloud providers when convenience matters, or local inference when data residency and network control matter more.

ai_provider_config.sh
# OpenSOAR picks the configured provider from your deployment settings:
# ANTHROPIC_API_KEY=...
# OPENAI_API_KEY=...
# OLLAMA_URL=http://localhost:11434
# LLM_MODEL=claude-sonnet-4-6

# Then call the built-in AI endpoints:
curl -X POST http://localhost:8000/api/v1/ai/triage \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"alert_id":"'"$ALERT_ID"'"}'
Provider   Best for                                  Data leaves network?
Claude     Reasoning-heavy triage                    Yes
OpenAI     Broad model ecosystem                     Yes
Ollama     Local-only and air-gapped environments    No
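For air-gapped or data-residency-sensitive deployments, the same settings shown in the config block above can point at a local Ollama instance so no alert data leaves the network. The variable names follow that block; the model tag is an illustrative example, not a requirement.

```shell
# Local-only sketch: route AI triage through Ollama on this host.
# OLLAMA_URL / LLM_MODEL follow the deployment settings shown above;
# "llama3" is an example model tag -- use whatever you have pulled locally.
export OLLAMA_URL=http://localhost:11434
export LLM_MODEL=llama3

# Echo the configured endpoint so operators can sanity-check it
# before enabling AI triage in playbooks.
echo "AI provider endpoint: $OLLAMA_URL (model: $LLM_MODEL)"
```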

Human-in-the-loop still matters

The useful pattern is confidence-based routing:

  • High confidence can resolve noise or escalate clear threats faster.
  • Medium confidence should queue to analysts with better context.
  • Low confidence should stay human-led.

That is the difference between AI triage and AI theater. The model helps where it is strong and hands off where judgment still matters.
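The three tiers above can be sketched as a single routing step in a playbook. This assumes the triage response carries a numeric confidence score in [0, 1]; the thresholds (0.9, 0.6) and the score itself are illustrative assumptions, not part of the documented API.

```shell
# Confidence-based routing sketch. Thresholds are illustrative.
route_alert() {
  confidence=$1
  # awk handles the floating-point comparisons portably in POSIX shell.
  if awk "BEGIN { exit !($confidence >= 0.9) }"; then
    echo "auto"      # high confidence: auto-resolve noise / escalate clear threats
  elif awk "BEGIN { exit !($confidence >= 0.6) }"; then
    echo "queue"     # medium confidence: queue to analysts with context attached
  else
    echo "human"     # low confidence: stays human-led
  fi
}

route_alert 0.95   # -> auto
route_alert 0.70   # -> queue
route_alert 0.30   # -> human
```

Tuning those thresholds per alert source is where the operator process stays in control: the model proposes, the playbook disposes.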

What the analyst experience looks like

Brute Force SSH Login · high severity · Elastic SIEM · just now

500+ failed attempts from 45.33.32.156

  • Classified as Brute Force Attack
  • MITRE ATT&CK: Credential Access · T1110.001
  • Severity raised from medium → high
  • AbuseIPDB lookup: 98% malicious confidence
  • Correlated with 12 related events from same IP
  • Recommended: Block IP + escalate to on-call

Auto-triaged in 0.8s. Playbook triggered — IP blocked, SOC notified via Slack.

Cobalt Strike Beacon · critical severity · CrowdStrike · 5s ago

C2 callback pattern on port 443

  • Classified as C2 Communication
  • MITRE ATT&CK: Command & Control · T1071.001
  • JA3 fingerprint matches known Cobalt Strike signature
  • Beacon interval: 60s with 25% jitter — classic CS profile
  • Correlated with 2 lateral movement alerts on same host
  • Recommended: Isolate endpoint + initiate IR

Auto-triaged in 1.1s. Host isolated, IR playbook triggered, Slack alert sent.

Impossible Travel · medium severity · Azure AD · 12s ago

Login from 2 countries in 10 minutes

  • Classified as Account Compromise
  • MITRE ATT&CK: Valid Accounts · T1078
  • Distance: NYC → Lagos — 6,870 km in 10 min
  • VPN analysis: No corporate VPN detected
  • User risk score: 78/100 — first impossible travel event
  • Recommended: Force re-auth + require MFA

Auto-triaged in 1.4s. Session revoked, MFA re-enrollment triggered.
  • Before: analysts read the payload, switch between systems, gather context, and then decide what the alert even means.
  • After: the alert arrives with a summary, proposed severity, supporting context, and a suggested path forward.

Privacy and deployment model

  • AI calls go from your deployment to your chosen provider.
  • Use Ollama for fully local inference.
  • Keep the decision logic in your own playbooks.
  • Audit the results in your own system.


One command. No credit card.

Apache 2.0 licensed. Self-host on your infrastructure. No feature gates, no per-action billing, no vendor lock-in. Your playbooks are yours.

$ curl -fsSL https://opensoar.app/install.sh | sh