AI-Powered Alert Triage
Let AI handle the noise. Classify, summarize, and prioritize hundreds of alerts per day — from any source, not just security tools.
Why AI for security alerts?
Security analysts spend most of their day on repetitive triage work: reading alerts, looking up context, deciding if something is real or noise. Most alerts are false positives. The real threats hide in the volume.
AI doesn't replace your analysts — it gives them superpowers. By handling the initial assessment of every alert, AI lets your team focus on the 10% that actually matter.
What OpenSOAR's AI triage does
- Alert summarization — turns raw JSON payloads into natural language: "Brute-force login attempt from IP 45.33.x.x targeting admin account, 47 failed attempts in 3 minutes"
- Severity assessment — suggests severity based on alert content, asset criticality, and enrichment data
- Determination — classifies as benign, suspicious, or malicious with a confidence score
- Reasoning — explains the assessment step by step, so analysts can verify the logic
- Deduplication — identifies semantically similar alerts that should be grouped, not just exact hash matches
- Correlation — connects related alerts across different sources into a single incident
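The fields above can be sketched as a structured result type. The dataclass below is a hypothetical illustration mirroring the assessment attributes used in the playbook example on this page, not the library's actual definition:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Assessment:
    summary: str        # natural-language explanation of the alert
    severity: str       # suggested severity level
    determination: Literal["benign", "suspicious", "malicious"]
    confidence: int     # 0-100 confidence score
    reasoning: str      # step-by-step analysis the analyst can verify

a = Assessment(
    summary="Brute-force login attempt from 45.33.x.x targeting admin account",
    severity="high",
    determination="suspicious",
    confidence=72,
    reasoning="47 failed logins in 3 minutes against a privileged account.",
)
```

Treating the assessment as plain structured data is what lets a playbook branch on it with ordinary `if` statements.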
How it works
AI triage is a playbook action — just another step in your automation. Call it from any playbook, configure thresholds, and decide what happens at each confidence level.
```python
from opensoar import playbook
from opensoar.ai import triage

@playbook(trigger="alert.created")
async def ai_assisted_triage(alert):
    # AI analyzes the alert and returns a structured assessment
    assessment = await triage(
        alert,
        provider="claude",  # or "gemini", "openai", "ollama"
    )

    # assessment.summary       → natural language explanation
    # assessment.severity      → suggested severity level
    # assessment.determination → benign / suspicious / malicious
    # assessment.confidence    → 0-100 confidence score
    # assessment.reasoning     → step-by-step analysis

    if assessment.confidence > 85 and assessment.determination == "benign":
        await resolve(alert, determination="false_positive",
                      note=assessment.summary)
    elif assessment.determination == "malicious":
        await escalate(alert, reason=assessment.reasoning)
    else:
        # Enrich alert with AI context for analyst review
        alert.ai_summary = assessment.summary
        alert.suggested_severity = assessment.severity
        await save_alert(alert)
```

The AI sees the full alert context: the raw payload, enrichment data from VirusTotal and AbuseIPDB, asset information, and historical patterns. It returns a structured assessment your playbook can act on.
Bring your own model
OpenSOAR is model-agnostic. Use whichever LLM fits your requirements — cloud providers for convenience or local models for data sovereignty.
```python
from opensoar.ai import triage, configure_provider

# Use any LLM provider — or run locally with Ollama
configure_provider("claude", api_key="sk-...")
configure_provider("gemini", api_key="AIza...")
configure_provider("ollama", base_url="http://localhost:11434",
                   model="llama3.3")

# Switch providers per-playbook or globally
assessment = await triage(alert, provider="claude")
assessment = await triage(alert, provider="ollama")  # fully local, no data leaves your network
```

| Provider | Best for | Data leaves network? |
|---|---|---|
| Claude (Anthropic) | Complex reasoning, nuanced analysis | Yes (Anthropic API) |
| Gemini (Google) | Speed, Google Cloud integration | Yes (Google API) |
| OpenAI | Broad model ecosystem, function calling | Yes (OpenAI API) |
| Ollama (local) | Air-gapped, compliance, no data egress | No — fully local |
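One way to act on the table's tradeoffs is a small egress policy that picks a provider per alert. The helper below is a hypothetical sketch under that assumption, not an OpenSOAR API:

```python
def pick_provider(contains_pii: bool, air_gapped: bool) -> str:
    """Choose an LLM provider name based on data-egress constraints."""
    if air_gapped or contains_pii:
        return "ollama"   # fully local inference, no data leaves the network
    return "claude"       # cloud provider when egress is acceptable
```

The returned name would then be passed as the `provider` argument to `triage`.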
Human-in-the-loop
AI triage doesn't mean autonomous triage. You control the thresholds:
- High confidence (85%+) — auto-resolve false positives, auto-escalate confirmed threats
- Medium confidence (50-84%) — queue for analyst review with AI summary and recommendation
- Low confidence (<50%) — flag for manual triage, AI provides context but no action
Every AI decision is logged with the full reasoning chain, so you can audit, adjust thresholds, and build trust over time.
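The three tiers above can be sketched as a plain routing function; this is an illustrative implementation of the thresholds, not part of OpenSOAR itself:

```python
def route(confidence: int, determination: str) -> str:
    """Map an AI assessment to a triage action using the tiered thresholds."""
    if confidence >= 85:
        if determination == "benign":
            return "auto_resolve"      # high-confidence false positive
        if determination == "malicious":
            return "auto_escalate"     # high-confidence confirmed threat
        return "analyst_review"        # confident but ambiguous
    if confidence >= 50:
        return "analyst_review"        # queue with AI summary attached
    return "manual_triage"             # AI provides context, takes no action
```

Keeping the thresholds in one function makes them easy to audit and tune as trust in the model grows.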
What AI triage looks like for your SOC
Example alerts:
- 500+ failed attempts from 45.33.32.156
- C2 callback pattern on port 443
- Login from 2 countries in 10 minutes
- Before: Analyst opens alert → manually copies IOCs → checks VirusTotal → checks AbuseIPDB → reads SIEM context → decides severity → documents decision → 15-20 minutes per alert
- After: Alert arrives → AI enriches + analyzes → analyst sees summary + recommendation → confirms or overrides → 30 seconds per alert
That's not 10% faster. That's 30x faster. Your team handles the same volume with a fraction of the effort.
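The 30x figure follows directly from the timings above, taking the low end of the manual estimate:

```python
before_s = 15 * 60              # 15 minutes of manual triage per alert
after_s = 30                    # 30 seconds with AI-assisted review
speedup = before_s / after_s
print(speedup)                  # → 30.0
```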
Privacy and security
We take data handling seriously:
- OpenSOAR never sends your data to our servers — AI calls go directly from your deployment to your chosen provider
- Use Ollama for fully local inference — no data leaves your network
- All AI interactions are logged in your database for audit
- You control exactly which alert fields are sent to the AI
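A field allowlist is one way to implement that last point. The helper and field names below are a hypothetical sketch, not OpenSOAR's API:

```python
# Illustrative allowlist: only these alert fields may be sent to the LLM
ALLOWED_FIELDS = {"source", "rule_name", "src_ip", "dst_port", "timestamp"}

def redact(alert: dict) -> dict:
    """Drop every field not on the allowlist before the alert reaches the AI."""
    return {k: v for k, v in alert.items() if k in ALLOWED_FIELDS}

raw = {"src_ip": "45.33.32.156", "username": "admin", "raw_payload": "..."}
print(redact(raw))   # → {'src_ip': '45.33.32.156'}
```

Filtering at the boundary keeps sensitive fields like usernames and raw payloads out of every provider call, local or cloud.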
AI-powered alert triage is built into OpenSOAR. Get started on GitHub →
One command. No credit card.
Apache 2.0 licensed. Self-host on your infrastructure. No feature gates, no per-action billing, no vendor lock-in. Your playbooks are yours.
```shell
curl -fsSL https://opensoar.app/install.sh | sh
```