[Oisf-users] number of alerts versus performance
Yasha Zislin
coolyasha at hotmail.com
Thu Jun 30 14:41:49 UTC 2016
I have been trying to track down packet loss on one of my sensors, and I am puzzled.
It has 16 GB of RAM, one quad-core AMD CPU, and the NIC sees about 3 million packets per minute. Nothing special, in my mind. I am using PF_RING 6.5.0 and Suricata 3.1.
I get about 20% to 40% packet loss. I have another identical server which sees the same volume of traffic, and possibly some of the same traffic as well.
I have been tweaking NIC settings, IRQ affinity, PF_RING settings, and Suricata settings, trying to figure out why packet loss is so high.
I have just realized one big difference between these two sensors: the problematic one generates 2k to 4k alerts per minute, which sounds huge.
The second one generates about 80 alerts per minute. Both have the same rulesets.
The difference, of course, is the HOME_NET variable.
Can the fact that Suricata evaluates more rules due to the HOME_NET definition put this much performance strain on the server?
My understanding is that if a packet does not match HOME_NET, it is discarded before any rule processing. Correct?
Whereas if a packet passes the HOME_NET check, it has to go through all of the rules, hence the higher CPU utilization.
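For reference, HOME_NET is the address-group variable set in suricata.yaml that most rules match against via $HOME_NET. A minimal sketch of the two configurations I am contrasting (the CIDR ranges below are placeholders, not the actual values from either sensor):

```yaml
# suricata.yaml -- address-group variables (example values only)
vars:
  address-groups:
    # Narrow definition: most packets fail the $HOME_NET match early
    # and skip further rule inspection.
    HOME_NET: "[10.0.0.0/24]"

    # Broad definition (commented out): far more packets match
    # $HOME_NET and therefore proceed into the full rule set.
    # HOME_NET: "[10.0.0.0/8,172.16.0.0/12,192.168.0.0/16]"

    EXTERNAL_NET: "!$HOME_NET"
```

So the question is whether the broader HOME_NET on the problematic sensor, by itself, explains the jump from ~80 to 2k-4k alerts per minute and the accompanying packet loss.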
Thank you for the clarification.