AI-Powered Alert Triage
Use LLMs to summarize, classify, and route alerts faster, while keeping the playbook logic and analyst workflow under your control.
This page is the product view: what AI triage does well, how it fits into the OpenSOAR workflow model, and why it matters. For exact setup and operational details, see docs.opensoar.app.
Why AI belongs in triage
The first step in alert handling is usually interpretation. Analysts read payloads, inspect context, decide whether the signal looks real, and then document their reasoning. That is exactly where modern LLMs are useful.
AI does not replace incident response judgment. It compresses the cost of understanding what the alert means before a human decides what to do next.
What OpenSOAR's AI triage does
- Summarization turns raw payloads into readable analyst context.
- Severity suggestion proposes how urgent the alert likely is.
- Determination classifies alerts as benign, suspicious, or malicious.
- Reasoning provides an explanation that analysts can review.
- Correlation support helps connect similar or related alerts across sources.
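In practice these surface as structured output attached to the alert. A hypothetical response shape, shown only to make the capabilities concrete (field names are illustrative, not the exact schema; see docs.opensoar.app):
{
  "summary": "Repeated failed SSH logins from one external IP, followed by a success.",
  "severity": "high",
  "determination": "suspicious",
  "reasoning": "Pattern matches credential stuffing; the successful login breaks baseline behavior.",
  "related_alerts": ["alert-1289", "alert-1292"]
}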
Where this model fits best
High-volume alert queues
AI is strongest where analysts burn time on first-pass interpretation and prioritization.
Context compression
Summaries, grouped evidence, and suggested severity reduce time-to-understanding before investigation begins.
Human-in-the-loop routing
The best operating model is confidence-based automation, not blind autonomous response.
How it works inside a playbook
AI triage is not a separate product tier or a disconnected chatbot. It is part of the normal OpenSOAR workflow surface: call the built-in AI endpoints, then keep the decision logic in your own playbooks and operator processes.
curl -X POST http://localhost:8000/api/v1/ai/triage \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"alert_id":"'$ALERT_ID'"}'
curl -X POST http://localhost:8000/api/v1/ai/summarize \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"alert_id":"'$ALERT_ID'"}' Bring your own model
OpenSOAR is model-agnostic at the deployment level. Use cloud providers when convenience matters, or local inference when data residency and network control matter more.
# OpenSOAR picks the configured provider from your deployment settings:
# ANTHROPIC_API_KEY=...
# OPENAI_API_KEY=...
# OLLAMA_URL=http://localhost:11434
# LLM_MODEL=claude-sonnet-4-6
# Then call the built-in AI endpoints:
curl -X POST http://localhost:8000/api/v1/ai/triage \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"alert_id":"'$ALERT_ID'"}' | Provider | Best for | Data leaves network? |
|---|---|---|
| Claude | Reasoning-heavy triage | Yes |
| OpenAI | Broad model ecosystem | Yes |
| Ollama | Local-only and air-gapped environments | No |
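For a fully local deployment, the same settings shown above point at Ollama instead of a cloud provider. A sketch (the model tag is an example; use whatever model your Ollama instance serves):
# Local-only inference: alert data never leaves your network
OLLAMA_URL=http://localhost:11434
LLM_MODEL=llama3.1   # example tag, not a requirement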
Human-in-the-loop still matters
The useful pattern is confidence-based routing:
- High confidence can resolve noise or escalate clear threats faster.
- Medium confidence should queue to analysts with the AI's context attached.
- Low confidence should stay human-led.
That is the difference between AI triage and AI theater. The model helps where it is strong and hands off where judgment still matters.
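In playbook terms, that routing can be a simple threshold check. A sketch reusing $RESULT from the triage call above, and assuming the response carries a numeric confidence score between 0 and 1 (the field name and thresholds are illustrative; tune them against your own alert history):
# Branch on the model's confidence score (assumed field name)
CONFIDENCE=$(echo "$RESULT" | jq -r '.confidence')

if awk "BEGIN { exit !($CONFIDENCE >= 0.9) }"; then
  echo "high confidence: auto-resolve noise or fast-track clear threats"
elif awk "BEGIN { exit !($CONFIDENCE >= 0.6) }"; then
  echo "medium confidence: analyst queue with AI context attached"
else
  echo "low confidence: human-led, no automated action"
fi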
What the analyst experience looks like
Example alerts as they arrive in the queue:
- 500+ failed attempts from 45.33.32.156
- C2 callback pattern on port 443
- Login from 2 countries in 10 minutes
- Before: analysts read the payload, switch between systems, gather context, and then decide what the alert even means.
- After: the alert arrives with a summary, proposed severity, supporting context, and a suggested path forward.
Privacy and deployment model
- AI calls go from your deployment to your chosen provider.
- Use Ollama for fully local inference.
- Keep the decision logic in your own playbooks.
- Audit the results in your own system.
One command. No credit card.
Apache 2.0 licensed. Self-host on your infrastructure. No feature gates, no per-action billing, no vendor lock-in. Your playbooks are yours.
curl -fsSL https://opensoar.app/install.sh | sh