[Oisf-users] Workers vs AutoFP with more than 32 cores
tritium.cat at gmail.com
Sun Sep 1 12:58:14 UTC 2013
On Sun, Sep 1, 2013 at 1:26 AM, Peter Manev <petermanev at gmail.com> wrote:
> Can you try listening only on one ? see if it would make a difference?
I already tried that while setting up this new capture configuration; I
need to use two ports to scale past 32 processors. I think there are one
or more rules and traffic flows specific to the client network that are
causing the occasional drops. I see you mentioned having 6k rules
enabled, and I imagine those are tuned to your client's network.
At this point I believe the high-level performance issues are solved for
me; I've found a threshold and a way to scale that works, and I can
continue towards better results by isolating specific traffic flows and
rules.

More info that might help others:
PF_RING DNA clusters are limited to 32 application slaves per interface,
so I need to use two clusters; hence the two ports with 22 slaves each.
The limit of 32 looks like it comes from the integer type used for the
slave mask in PF_RING, but I haven't looked at it much more than that.
DNA can work without RSS, using a single queue per port (RSS=1,1).
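To make the two-cluster layout concrete, here is a hypothetical sketch of the pfring section of suricata.yaml for this setup. The interface names (dna0/dna1) and cluster IDs are assumptions; the point is that each DNA port gets its own cluster with 22 threads, keeping each cluster under the 32-slave cap:

```
pfring:
  - interface: dna0        # first DNA port
    threads: 22            # 22 slaves, under the 32-per-cluster limit
    cluster-id: 99
    cluster-type: cluster_flow
  - interface: dna1        # second DNA port
    threads: 22            # another 22 slaves on a separate cluster
    cluster-id: 100
    cluster-type: cluster_flow
```

Together that gives 44 capture threads across the two ports, which is how you scale past the per-interface limit.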
When the driver is loaded I map the IRQ for each interface to processor
47. You could probably get similar results without using DNA mode but
using it has additional benefits.
Suricata is configured to use processors 46-47 for management and all other
cpu-affinity settings are configured for processors 0-45; I left two
processors spare for other things.
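For reference, a minimal sketch of the matching threading section in suricata.yaml, assuming the split described above (management on 46-47, everything else on 0-45); the exact cpu-set names follow the stock suricata.yaml layout:

```
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "46-47" ]   # management threads share the spare CPUs
    - worker-cpu-set:
        cpu: [ "0-45" ]    # capture/detect threads get the rest
        mode: "exclusive"  # one thread pinned per CPU
```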
The system load is low; packets are only dropped occasionally on a few
threads/CPUs that hit 100% utilization, and most of the CPUs have low
utilization even with 13500+ rules. As I said above, at this point I
think it is just a matter of isolating the worst-performing rules and
traffic flows and then tuning, tweaking, or shunting that traffic to
another path/cluster/box. In doing so I imagine I'll be able to get more
performance out of a single box while still leaving enough headroom.
HTH and thanks for the help.