<div dir="ltr"><div>Re: hardware queues. I know that is the case from testing as I was using 12 queues per port to distribute the four ports among 48 cores. See [1] for reference "16 hardware queues per port". Before you dismiss that as unofficial documentation those vendors typically cut/paste right from the vendor docs. I'm trying to find the specific quote from the Intel documentation but I cannot find it at the moment. As I understood it, that is the benefit to multi-port cards such as the Silicom 6-port 10G card, more hardware queues.</div>
Re: Interrupt balancing. Yep, that's how I achieved manual IRQ balancing prior to your suggestion to use irqbalance: by modifying set_irq_affinity.sh accordingly. See the initial post and proc_interrupts.txt.
I think maybe the recent trend of recommending irqbalance over set_irq_affinity came about because of this possibly overlooked aspect; I went with your advice anyhow.
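For anyone following along, the manual approach amounts to what set_irq_affinity.sh does under the hood (a sketch; IRQ 98 and CPU 4 are placeholders, and the queue naming varies by driver):

  # find the IRQ numbers assigned to the port's queues
  grep 'eth2-TxRx' /proc/interrupts
  # pin one queue's IRQ to CPU 4 by writing a hex CPU mask (bit 4 => 0x10)
  echo 10 > /proc/irq/98/smp_affinity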
Re: FdirMode/FdirPballoc, see below and see [2].

[2] http://www.ntop.org/products/pf_ring/hardware-packet-filtering/
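In case it's useful, this is how I've been passing those parameters when reloading the driver (a sketch; it assumes an ixgbe build that accepts FdirPballoc as a module parameter, with one comma-separated value per port, and 3 = 256k per the table below):

  modprobe -r ixgbe
  modprobe ixgbe FdirPballoc=3,3,3,3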
<div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Thanks for trying all but at this point I need time to pour over everything I've tried combined with everything I've learned in this thread.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">Regards,</div><div class="gmail_extra"><br></div><div class="gmail_extra">--TC</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">
<pre style="color:rgb(0,0,0);word-wrap:break-word;white-space:pre-wrap">#---------------------------------------------------------#</pre></div><div class="gmail_extra"><pre style="color:rgb(0,0,0);word-wrap:break-word;white-space:pre-wrap">
Intel(R) Ethernet Flow Director
-------------------------------
NOTE: Flow director parameters are only supported on kernel versions 2.6.30
or later.
Supports advanced filters that direct receive packets by their flows to
different queues. Enables tight control on routing a flow in the platform.
Matches flows and CPU cores for flow affinity. Supports multiple parameters
for flexible flow classification and load balancing.
Flow director is enabled only if the kernel is multiple TX queue capable.
An included script (set_irq_affinity) automates setting the IRQ to CPU
affinity.
You can verify that the driver is using Flow Director by looking at the counter
in ethtool: fdir_miss and fdir_match.
Other ethtool Commands:
To enable Flow Director
ethtool -K ethX ntuple on
To add a filter
Use -U switch. e.g., ethtool -U ethX flow-type tcp4 src-ip
192.168.0.100 action 1
To see the list of filters currently present:
ethtool -u ethX
The following two parameters impact Flow Director.
FdirPballoc
-----------
Valid Range: 1-3 (1=64k, 2=128k, 3=256k)
Default Value: 1
Flow Director allocated packet buffer size.
#---------------------------------------------------------#
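For reference, the fdir_match/fdir_miss counters mentioned above can be read from the driver statistics (a sketch; eth2 is a placeholder interface):

  ethtool -S eth2 | grep fdir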
<br></pre></div><div class="gmail_extra"><br></div></div></div>