[Oisf-users] Tuning Suricata (2.0beta1) -- no rules and lots of packet loss

vpiserchia at gmail.com
Wed Aug 21 19:16:09 UTC 2013


On 08/21/2013 08:51 PM, Tritium Cat wrote:
> That was the case before set_irq_affinity was run; by running show_proc_interrupts.sh (grep and sed to make the output more readable) you can see the interrupts incrementing across the diagonal as they should.  (Note the larger numbers and the disjointed diagonal line.)
> 
> In theory irqbalance should work fine; set_irq_affinity is only needed if one wants to bind all packet processing of a flow from a queue to a dedicated core, as mentioned somewhere in the available Suricata 10G strategies.  That's another area I'm not sure I set up correctly, but I do not believe it is the cause of the performance issues.
>  

In all my experiments, especially when running Suricata in workers mode, a correct IRQ/thread affinity setting is the way to go to reduce packet drops;
please also note that when the same flow always lands on the same CPU, cache hits come into play, and this really makes the difference.
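
For reference, a minimal cpu-affinity sketch for workers mode could look something like the following (untested, and the core numbers are just placeholders -- adjust them to your own core/NUMA layout, keeping each worker on the same core that receives its queue's IRQ):

  threading:
    set-cpu-affinity: yes
    cpu-affinity:
      - management-cpu-set:
          cpu: [ 0 ]              # housekeeping threads on a dedicated core
      - worker-cpu-set:
          cpu: [ "1-11" ]         # pin workers to the cores handling the NIC IRQs
          mode: "exclusive"
          prio:
            default: "high"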

BTW: with 4 ports, maybe you can also reduce the RSS queue count (up to 12 per port) to match your available CPUs (just an idea).
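
For example (from memory, so please double-check against your driver's README): with the out-of-tree ixgbe driver that would be something like

    modprobe ixgbe RSS=12,12,12,12      # one value per port

or, if your in-tree driver supports channels,

    ethtool -L ethX combined 12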


> My feeling at the moment is that the MTU setting, combined with filtering of the regular high-volume backup flows, will help most here; I'll know for sure later on.

Are you still using this setting?

  ...
    defrag: yes
  ...
  checksum-validation: yes      # reject wrong csums

Maybe you can try switching them "off"
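
i.e. something along the lines of (same keys as above, just switched off):

  ...
    defrag: no
  ...
  checksum-validation: no       # skip csum validation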

just hope this helps ;)

regards



> 
> Tongue-in-cheek:  Could use a vendor "bake-off" to have them solve the problem during the "free" beta test.  About your blog post...
> 
> --TC
>  
> 
> 
> On Wed, Aug 21, 2013 at 11:25 AM, vpiserchia at gmail.com <vpiserchia at gmail.com> wrote:
> 
>     Dear TC,
> 
>     looking at the proc_interrupts.txt file I see something strange: interrupts don't seem to be correctly balanced, and more than one core is bound to interrupts from multiple (in the worst case: up to 3) rx/tx queues
> 
>     regards
>     -vito
> 
> 
> 
>     On 08/21/2013 08:11 PM, Tritium Cat wrote:
>     > Re: hardware queues.  I know that is the case from testing, as I was using 12 queues per port to distribute the four ports among 48 cores.  See [1] for the reference to "16 hardware queues per port".  Before you dismiss that as unofficial documentation, those vendors typically cut/paste right from the vendor docs.  I'm trying to find the specific quote from the Intel documentation but I cannot find it at the moment.  As I understood it, that is the benefit of multi-port cards such as the Silicom 6-port 10G card: more hardware queues.
>     >
>     > [1] http://www.avadirect.com/product_details_parts.asp?PRID=15550
>     >
>     > Re: Interrupt balancing.  Yep, that's how I achieved manual IRQ balancing prior to your suggestion to use irqbalance: by modifying set_irq_affinity.sh to adjust accordingly.  See the initial post and proc_interrupts.txt.
>     >
>     > I think the recent trend of recommending irqbalance over set_irq_affinity may be because of this possibly overlooked aspect; I went with your advice anyhow.
>     >
>     > Re: FdirMode/FdirPballoc, see below and [2].
>     >
>     > [2] http://www.ntop.org/products/pf_ring/hardware-packet-filtering/
>     >
>     >
>     > Thanks for trying it all, but at this point I need time to pore over everything I've tried, combined with everything I've learned in this thread.
>     >
>     > Regards,
>     >
>     > --TC
>     >
>     >
>     > #---------------------------------------------------------#
>     >
>     > Intel(R) Ethernet Flow Director
>     > -------------------------------
>     > NOTE: Flow director parameters are only supported on kernel versions 2.6.30
>     > or later.
>     >
>     > Supports advanced filters that direct receive packets by their flows to
>     > different queues. Enables tight control on routing a flow in the platform.
>     > Matches flows and CPU cores for flow affinity. Supports multiple parameters
>     > for flexible flow classification and load balancing.
>     >
>     > Flow director is enabled only if the kernel is multiple TX queue capable.
>     >
>     > An included script (set_irq_affinity) automates setting the IRQ to CPU
>     > affinity.
>     >
>     > You can verify that the driver is using Flow Director by looking at the counter
>     > in ethtool: fdir_miss and fdir_match.
>     >
>     > Other ethtool Commands:
>     > To enable Flow Director
>     >       ethtool -K ethX ntuple on
>     > To add a filter
>     >       Use -U switch. e.g., ethtool -U ethX flow-type tcp4 src-ip
>     >         192.168.0.100 action 1
>     > To see the list of filters currently present:
>     >       ethtool -u ethX
>     >
>     > The following two parameters impact Flow Director.
>     >
>     > FdirPballoc
>     > -----------
>     > Valid Range: 1-3 (1=64k, 2=128k, 3=256k)
>     > Default Value: 1
>     >
>     >   Flow Director allocated packet buffer size.
>     >
>     > #---------------------------------------------------------#
>     >
>     >
>     >
>     >
>     >
>     > _______________________________________________
>     > Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
>     > Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
>     > List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>     > OISF: http://www.openinfosecfoundation.org/
>     >
> 
> 



