[Oisf-users] Tuning Suricata (2.0beta1) -- no rules and lots of packet loss

vpiserchia at gmail.com vpiserchia at gmail.com
Wed Aug 21 18:25:21 UTC 2013


Dear TC,

looking at the proc_interrupts.txt file I see something strange: interrupts don't seem to be correctly balanced, and some cores are bound to interrupts from multiple (in the worst case, up to 3) rx/tx queues
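That kind of imbalance can be spotted programmatically. A minimal sketch (not from the thread; the `/proc/interrupts` excerpt and queue names are hypothetical) that counts how many NIC queue IRQs have fired on each CPU:

```python
import re

# Hypothetical /proc/interrupts excerpt: header row lists CPUs, each
# following row is IRQ number, per-CPU counts, chip type, device name.
SAMPLE = """\
           CPU0       CPU1       CPU2
  50:    1000000          0     900000   PCI-MSI-edge   eth2-TxRx-0
  51:          0    2000000     800000   PCI-MSI-edge   eth2-TxRx-1
  52:          0          0    1500000   PCI-MSI-edge   eth2-TxRx-2
"""

def queues_per_cpu(text):
    """Map CPU name -> number of eth rx/tx queue IRQs seen on that CPU."""
    lines = text.splitlines()
    cpus = lines[0].split()
    load = {c: 0 for c in cpus}
    for line in lines[1:]:
        fields = line.split()
        # Only consider NIC queue interrupts (device name like eth2-TxRx-N).
        if not fields or not fields[-1].startswith("eth"):
            continue
        counts = fields[1:1 + len(cpus)]
        for cpu, n in zip(cpus, counts):
            if int(n) > 0:
                load[cpu] += 1
    return load

if __name__ == "__main__":
    for cpu, n in queues_per_cpu(SAMPLE).items():
        flag = "  <-- handling multiple queues" if n > 1 else ""
        print(f"{cpu}: {n} queue(s){flag}")
```

Run against the sample data above, CPU2 shows up as serving three queues, which is exactly the worst-case pattern described.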

regards
-vito



On 08/21/2013 08:11 PM, Tritium Cat wrote:
> Re: hardware queues.  I know that is the case from testing, as I was using 12 queues per port to distribute the four ports among 48 cores.  See [1] for the reference to "16 hardware queues per port".  Before you dismiss that as unofficial documentation, vendors like that typically cut/paste straight from the Intel docs.  I'm trying to find the specific quote in the Intel documentation but cannot locate it at the moment.  As I understood it, that is the benefit of multi-port cards such as the Silicom 6-port 10G card: more hardware queues.
> 
> [1] http://www.avadirect.com/product_details_parts.asp?PRID=15550
> 
> Re: Interrupt balancing.  Yep, that's how I achieved manual IRQ balancing before your suggestion to use irqbalance: by modifying set_irq_affinity.sh accordingly.  See my initial post and proc_interrupts.txt.
> 
> I think the recent trend of recommending irqbalance over set_irq_affinity.sh may be due to this possibly overlooked aspect; I went with your advice anyhow.
> 
> Re: FdirMode/FdirPbAlloc, see below and see [2].
> 
> [2] http://www.ntop.org/products/pf_ring/hardware-packet-filtering/
> 
> 
> Thanks for the help, but at this point I need time to pore over everything I've tried, combined with everything I've learned in this thread.
> 
> Regards,
> 
> --TC
> 
> 
> #---------------------------------------------------------#
> 
> Intel(R) Ethernet Flow Director
> -------------------------------
> NOTE: Flow director parameters are only supported on kernel versions 2.6.30 
> or later.
> 
> Supports advanced filters that direct receive packets by their flows to 
> different queues. Enables tight control on routing a flow in the platform. 
> Matches flows and CPU cores for flow affinity. Supports multiple parameters 
> for flexible flow classification and load balancing. 
> 
> Flow director is enabled only if the kernel is multiple TX queue capable.
> 
> An included script (set_irq_affinity) automates setting the IRQ to CPU 
> affinity.
> 
> You can verify that the driver is using Flow Director by looking at the counter
> in ethtool: fdir_miss and fdir_match.
> 
> Other ethtool Commands:
> To enable Flow Director
> 	ethtool -K ethX ntuple on
> To add a filter
> 	Use the -U switch, e.g., ethtool -U ethX flow-type tcp4 src-ip 192.168.0.100 action 1
> To see the list of filters currently present:
> 	ethtool -u ethX
> 
> The following two parameters impact Flow Director.
> 
> FdirPballoc
> -----------
> Valid Range: 1-3 (1=64k, 2=128k, 3=256k)
> Default Value: 1
> 
>   Flow Director allocated packet buffer size.
> 
> #---------------------------------------------------------#
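The fdir_match/fdir_miss counters mentioned in the README excerpt above can be turned into a quick health check. A sketch (the `ethtool -S` output below is hypothetical sample data, not from this thread):

```python
# Hypothetical excerpt of `ethtool -S ethX` output for an ixgbe NIC.
SAMPLE_STATS = """\
     rx_packets: 123456789
     fdir_match: 120000000
     fdir_miss: 3456789
"""

def fdir_ratio(stats_text):
    """Return (match, miss, match fraction) parsed from ethtool -S output."""
    vals = {}
    for line in stats_text.splitlines():
        key, _, value = line.strip().partition(": ")
        if key in ("fdir_match", "fdir_miss"):
            vals[key] = int(value)
    match, miss = vals["fdir_match"], vals["fdir_miss"]
    return match, miss, match / (match + miss)

if __name__ == "__main__":
    match, miss, ratio = fdir_ratio(SAMPLE_STATS)
    print(f"fdir_match={match} fdir_miss={miss} ({ratio:.1%} matched)")
```

A persistently low match fraction would suggest Flow Director is not steering most flows, which is worth checking before blaming Suricata for the packet loss.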
> 
> 
> 
> 
> 
> _______________________________________________
> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> OISF: http://www.openinfosecfoundation.org/
> 




More information about the Oisf-users mailing list