The problem: 1,200 alerts a day
When I inherited the Wazuh deployment, the dashboard showed over 1,200 alerts per day. The team had learned to ignore the SIEM. When something real came in, it looked identical to the 40 false positives around it. The system was technically running — just not actually detecting anything useful.
The goal: reduce noise so that the alerts that do fire are worth reading. Four weeks, no new hardware, no Wazuh upgrade. Just rule and decoder work.
How Wazuh rules actually work
Wazuh processes log events through a pipeline: decoders parse raw log lines into structured fields, then rules match against those fields and fire alerts. Rules can reference parent rules, group related events, and replace stock rule definitions via the overwrite attribute.
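To make the pipeline concrete, here's a minimal decoder-plus-rule pair. Everything in it is invented for illustration (the program name, the fields, the rule ID; custom rule IDs conventionally live in the 100000+ range):

```xml
<!-- /var/ossec/etc/decoders/local_decoder.xml -->
<!-- Hypothetical decoders: the parent claims syslog lines from "myapp",
     the child extracts structured fields from them. -->
<decoder name="myapp">
  <program_name>^myapp$</program_name>
</decoder>

<decoder name="myapp-login">
  <parent>myapp</parent>
  <regex offset="after_parent">login failed for user (\S+) from (\S+)</regex>
  <order>user, srcip</order>
</decoder>
```

```xml
<!-- /var/ossec/etc/rules/local_rules.xml -->
<!-- Hypothetical rule: fires on events the decoder above parsed. -->
<group name="local,myapp,">
  <rule id="100010" level="5">
    <decoded_as>myapp</decoded_as>
    <match>login failed</match>
    <description>myapp: failed login.</description>
  </rule>
</group>
```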
Most out-of-the-box noise comes from three places:
- Overly broad decoders that match too many log sources at once
- Rules with no frequency threshold that fire on every single occurrence
- Duplicate rule paths where the same event triggers 3–4 rules in sequence
Phase 1 — audit what's actually firing
Before changing anything, I needed to know which rules were generating the most volume. Wazuh's built-in dashboards in Kibana/OpenSearch show this, but pulling a raw count from the alerts log is faster.
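A minimal sketch, assuming the manager writes JSON alerts to the default /var/ossec/logs/alerts/alerts.json (one JSON object per line) and jq is installed:

```bash
# Top 5 noisiest rules: count alerts per rule ID and description.
jq -r '[.rule.id, .rule.description] | @tsv' /var/ossec/logs/alerts/alerts.json \
  | sort | uniq -c | sort -rn | head -5
```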
The top 5 rule IDs accounted for 60% of all alerts. Three were syslog authentication rules firing on routine sudo usage. One was a web server access log rule with no frequency gate. One was a Windows event rule matching every user logon.
Phase 2 — local rule overrides
Wazuh's default rules live in /var/ossec/ruleset/rules/. You never edit these — upgrades overwrite them. The right place is /var/ossec/etc/rules/local_rules.xml, where you can use overwrite="yes" to replace a default rule's behaviour.
Adding frequency="10" timeframe="120" to the SSH auth failure rule collapsed hundreds of daily alerts into a handful of genuine brute-force detections.
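Here's the shape of that override, a sketch rather than a drop-in. The IDs follow the stock sshd ruleset's pattern (a base failure rule, 5716, correlated by a frequency rule, 5720), but verify them against your deployment, and remember that overwrite replaces the entire rule, so the full definition has to be carried locally:

```xml
<!-- /var/ossec/etc/rules/local_rules.xml -->
<group name="local,syslog,sshd,">
  <!-- Fire once per source IP after 10 auth failures within 120 seconds,
       instead of alerting on every individual failure. -->
  <rule id="5720" level="10" frequency="10" timeframe="120" overwrite="yes">
    <if_matched_sid>5716</if_matched_sid>
    <same_source_ip />
    <description>sshd: multiple authentication failures (possible brute force).</description>
    <group>authentication_failures,</group>
  </rule>
</group>
```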
Phase 3 — decoder tightening
Two decoders were matching log sources they shouldn't. A generic syslog decoder was picking up application logs that already had their own specific decoder, so the same event could be parsed two different ways. The fix is an explicit program_name match, which scopes the generic decoder to the sources it was written for instead of letting it claim any line that fits its prematch.
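A sketch of the before and after, with invented names. (If the offending decoder ships with Wazuh rather than living in local_decoder.xml, exclude it via decoder_exclude in ossec.conf and carry the tightened copy locally.)

```xml
<!-- Before: keyed only on message shape, so it matched any syslog line
     with key=value pairs, including apps that have their own decoder. -->
<decoder name="generic-kv">
  <prematch>\w+=\w+</prematch>
</decoder>
```

```xml
<!-- After: scoped to the one daemon it was written for. -->
<decoder name="generic-kv">
  <program_name>^legacyd$</program_name>
  <prematch>\w+=\w+</prematch>
</decoder>
```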
Results and what changed
After four weeks of iterative tuning, daily alert volume dropped from 1,200 to around 360. More importantly, the signal-to-noise ratio flipped — the team started reading alerts again because firing an alert actually meant something.
- Frequency gates on authentication rules reduced sudo/logon noise by 85%
- Decoder scoping eliminated double-firing on nginx access logs
- Rule level adjustment pushed informational events below the dashboard threshold (see the sketch after this list)
- Group-based suppression collapsed related alerts (failed login → brute force) into single parent events
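The level adjustment uses the same overwrite mechanism as Phase 2. The rule ID and body below are illustrative; in practice you copy the stock rule's full definition and change only the level, because overwrite replaces the whole rule:

```xml
<!-- /var/ossec/etc/rules/local_rules.xml -->
<group name="local,web,">
  <!-- Illustrative: at level 3 this event drops below a dashboard
       filtered at level 4 and up; level 0 would suppress it entirely. -->
  <rule id="31101" level="3" overwrite="yes">
    <if_sid>31100</if_sid>
    <description>Web 4xx client error (demoted below dashboard threshold).</description>
  </rule>
</group>
```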