<div dir="ltr">On Sun, Sep 1, 2013 at 1:26 AM, Peter Manev <span dir="ltr"><<a href="mailto:petermanev@gmail.com" target="_blank">petermanev@gmail.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">

> Can you try listening only on one? See if it would make a difference?

I already tried that while setting up this new capture configuration; I need to use two ports to scale past 32 processors. I think there are one or more rules and traffic flows specific to the client network that are causing the occasional drops. I see you mentioned having 6k rules enabled, and I imagine those are tuned to your client's network.

At this point I believe the high-level performance issues are solved for me: I've found a threshold and a way to scale that works for me, and I can continue toward better results by isolating specific traffic flows and their associated rules.

More info that might help others:

PF_RING DNA clusters are limited to 32 application slaves per interface, so I need to use two clusters; hence the two ports with 22 slaves each. The limit of 32 appears to come from an unsigned short being used for the mask in PF_RING; I haven't looked into it much more than that. DNA can work without RSS, using a single queue per port (RSS=1,1). When the driver is loaded I map the IRQ for each interface to processor 47. You could probably get similar results without using DNA mode, but using it has additional benefits.
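
For anyone trying to reproduce this, here's a rough sketch of the capture-side setup. The driver and interface names (ixgbe, dna0/dna1), the IRQ-matching pattern, and the exact pfdnacluster_master flags are from memory, so treat them as illustrative rather than a verbatim copy of my scripts:

    # Load the DNA-aware ixgbe driver with a single queue per port (RSS=1,1)
    modprobe ixgbe RSS=1,1

    # Pin each DNA interface's IRQ to processor 47. 0x800000000000 is the
    # affinity mask for CPU 47; some kernels want the mask comma-grouped
    # into 32-bit chunks (e.g. 8000,00000000).
    for irq in $(awk -F: '/dna[01]/ {print $1}' /proc/interrupts); do
        echo 800000000000 > /proc/irq/$irq/smp_affinity
    done

    # One DNA cluster per port, 22 application slaves each; the pfring
    # sections in suricata.yaml then point at dnacluster:1 and dnacluster:2
    pfdnacluster_master -i dna0 -c 1 -n 22 -d
    pfdnacluster_master -i dna1 -c 2 -n 22 -d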

Suricata is configured to use processors 46-47 for management, and all other cpu-affinity settings are pinned to processors 0-45; I left two processors spare for other things.
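
The relevant part of suricata.yaml looks roughly like this. The set names are the standard cpu-affinity ones, but the values here are a sketch rather than a dump of my actual config:

    threading:
      set-cpu-affinity: yes
      cpu-affinity:
        - management-cpu-set:
            cpu: [ 46, 47 ]
        - receive-cpu-set:
            cpu: [ "0-45" ]
        - decode-cpu-set:
            cpu: [ "0-45" ]
        - stream-cpu-set:
            cpu: [ "0-45" ]
        - detect-cpu-set:
            cpu: [ "0-45" ]
            mode: "exclusive"
        - output-cpu-set:
            cpu: [ "0-45" ]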

The system load is low; packets are only dropped occasionally, on a few threads/CPUs that hit 100% utilization, and most of the CPUs have low utilization even with 13,500+ rules. Like I said above, at this point I think it is just a matter of isolating the worst-performing rules and traffic flows and then tuning, tweaking, or shunting that traffic to another path/cluster/box. In doing so I expect to be able to get more performance out of a single box while still leaving enough headroom for unexpected behavior.
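
To spot which threads are dropping, I watch the per-thread capture counters in stats.log. A quick one-liner for that, assuming the default pipe-delimited stats.log layout and log path:

    # Print the thread name and count for any non-zero kernel drop counter;
    # tail keeps roughly the most recent interval (44 capture threads here)
    awk -F'|' '/capture.kernel_drops/ {
        gsub(/ /, "", $2); gsub(/ /, "", $3)
        if ($3 + 0 > 0) print $2, $3
    }' /var/log/suricata/stats.log | tail -n 44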

HTH and thanks for the help.

--TC