<div dir="ltr">That was the case before set_irq_balance was run; by running show_proc_interrupts.sh (grep and sed to make it more readable) you can see the interrupts incrementing across the diagonal as they should. (Note the larger numbers and disjointed diagonal line)<div>

In theory irqbalance should work fine; set_irq_affinity.sh is only needed if one wants to bind all packet processing for a flow from a given queue to a dedicated core, as mentioned somewhere in the available Suricata 10G strategies. That's another area I'm not sure I set up correctly, but I do not believe it is the cause of the performance issues.
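
To be concrete, the binding I have in mind is nothing more exotic than what set_irq_affinity.sh does itself, i.e. writing a core number into each queue IRQ's affinity file. A minimal sketch (interface name and core numbering are examples only, and it needs root):

    #!/bin/bash
    # Minimal sketch: pin each rx/tx queue IRQ of the interface to its own
    # core, one queue per core, starting at core 0. Example values only.
    IFACE=${1:-eth4}
    core=0
    for irq in $(grep "$IFACE-TxRx" /proc/interrupts | awk -F: '{print $1}'); do
        # smp_affinity_list takes a plain CPU number (needs a reasonably recent kernel)
        echo $core > /proc/irq/$irq/smp_affinity_list
        core=$((core + 1))
    done

The other half of that strategy is matching Suricata's cpu-affinity settings in suricata.yaml to the same cores so a flow stays on the core its queue interrupts on; that is the part I'm not sure I have right.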

My feeling at the moment is that the MTU setting, combined with filtering out the regular high-volume backup flows, will help the most here; I'll know for sure later on.

Tongue-in-cheek: we could use a vendor "bake off" and have them solve the problem during the "free" beta test. About your blog post...
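
For clarity, by the MTU setting and the filtering I mean something along these lines; the interface name, MTU value, and the backup host/port below are made-up examples:

    # Example only: raise the capture interface MTU so larger frames on the
    # tap are not truncated before the capture method sees them.
    ip link set dev eth4 mtu 9216

    # Candidate BPF expression to exclude the known high-volume backup
    # traffic; worth sanity-checking with tcpdump first.
    tcpdump -n -c 100 -i eth4 'not (host 10.0.0.50 and port 873)'

Assuming the capture method in use accepts a BPF filter, the same expression would keep those backup flows from ever reaching the detect threads.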

--TC


On Wed, Aug 21, 2013 at 11:25 AM, vpiserchia@gmail.com <vpiserchia@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear TC,<br>
<br>
Looking at the proc_interrupts.txt file I see something strange: the interrupts don't seem to be correctly balanced, and more than one core is bound to interrupts from multiple (in the worst case, up to 3) rx/tx queues.

regards
-vito


On 08/21/2013 08:11 PM, Tritium Cat wrote:
> Re: hardware queues. I know that is the case from testing, as I was using 12 queues per port to distribute the four ports among 48 cores. See [1] for the "16 hardware queues per port" reference. Before you dismiss that as unofficial documentation, those vendors typically cut/paste straight from the vendor docs; I'm trying to find the specific quote in the Intel documentation but cannot locate it at the moment. As I understood it, that is the benefit of multi-port cards such as the Silicom 6-port 10G card: more hardware queues.
>
> [1] http://www.avadirect.com/product_details_parts.asp?PRID=15550
>
> Re: Interrupt balancing. Yep, that's how I achieved manual IRQ balancing prior to your suggestion to use irqbalance: by modifying set_irq_affinity.sh accordingly. See the initial post and proc_interrupts.txt.
>
> I think the recent trend of recommending irqbalance over set_irq_affinity.sh may be due to this possibly overlooked aspect; I went with your advice anyhow.
>
> Re: FdirMode/FdirPballoc, see below and see [2].
>
> [2] http://www.ntop.org/products/pf_ring/hardware-packet-filtering/
>
>
> Thanks for all of that, but at this point I need time to pore over everything I've tried, combined with everything I've learned in this thread.
>
> Regards,
>
> --TC
>
>
> #---------------------------------------------------------#
>
> Intel(R) Ethernet Flow Director
> -------------------------------
> NOTE: Flow director parameters are only supported on kernel versions 2.6.30
> or later.
>
> Supports advanced filters that direct receive packets by their flows to
> different queues. Enables tight control on routing a flow in the platform.
> Matches flows and CPU cores for flow affinity. Supports multiple parameters
> for flexible flow classification and load balancing.
>
> Flow director is enabled only if the kernel is multiple TX queue capable.
>
> An included script (set_irq_affinity) automates setting the IRQ to CPU
> affinity.
>
> You can verify that the driver is using Flow Director by looking at the counter
> in ethtool: fdir_miss and fdir_match.
> Other ethtool Commands:
> To enable Flow Director
>       ethtool -K ethX ntuple on
> To add a filter
>       Use -U switch. e.g., ethtool -U ethX flow-type tcp4 src-ip
>       192.168.0.100 action 1
> To see the list of filters currently present:
>       ethtool -u ethX
>
> The following two parameters impact Flow Director.
>
> FdirPballoc
> -----------
> Valid Range: 1-3 (1=64k, 2=128k, 3=256k)
> Default Value: 1
>
> Flow Director allocated packet buffer size.
>
> #---------------------------------------------------------#
</div></div><div class="HOEnZb"><div class="h5">> _______________________________________________<br>
> Suricata IDS Users mailing list: <a href="mailto:oisf-users@openinfosecfoundation.org">oisf-users@openinfosecfoundation.org</a><br>
> Site: <a href="http://suricata-ids.org" target="_blank">http://suricata-ids.org</a> | Support: <a href="http://suricata-ids.org/support/" target="_blank">http://suricata-ids.org/support/</a><br>
> List: <a href="https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
> OISF: <a href="http://www.openinfosecfoundation.org/" target="_blank">http://www.openinfosecfoundation.org/</a><br>
><br>
<br>
</div></div></blockquote></div><br></div>