[Oisf-users] Tuning Suricata (2.0beta1) -- no rules and lots of packet loss

Tritium Cat tritium.cat at gmail.com
Wed Aug 21 18:51:43 UTC 2013


That was the case before set_irq_affinity.sh was run; by running
show_proc_interrupts.sh (grep and sed to make the output more readable) you
can see the interrupts incrementing across the diagonal as they should.
(Note the larger numbers and the disjointed diagonal line.)
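
For reference, a minimal sketch of the kind of grep/sed filtering
show_proc_interrupts.sh does; the interface name and the ixgbe-style
"ethX-TxRx-N" IRQ naming are assumptions here, not taken from the actual
script:

  #!/bin/sh
  # Print only the rx/tx queue IRQ lines for one interface and squeeze
  # the whitespace so the per-CPU counters line up as a readable diagonal.
  IFACE=${1:-eth4}
  grep "${IFACE}-TxRx" /proc/interrupts | sed -e 's/[[:space:]]\+/ /g'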

In theory irqbalance should work fine, and set_irq_affinity.sh is only
needed if one wanted to bind all packet processing of a flow from a queue
to a dedicated core, as mentioned somewhere in the available Suricata 10G
strategies; that's another area I'm not sure I set up correctly, but I do
not believe it is the cause of the performance issues.
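
As a rough sketch of what that per-queue binding looks like (the interface
name, IRQ naming and core numbering are assumptions for illustration;
irqbalance must be stopped first or it may rewrite the masks):

  #!/bin/bash
  # Bind each rx/tx queue IRQ of one interface to its own core, one queue
  # per core, by writing a CPU mask to /proc/irq/<n>/smp_affinity (run as
  # root).  Cores above 31 need the comma-separated mask format, which
  # this sketch does not handle.
  IFACE=${1:-eth4}
  CORE=0
  for IRQ in $(grep "${IFACE}-TxRx" /proc/interrupts | awk -F: '{gsub(/ /,"",$1); print $1}'); do
      printf "%x" $((1 << CORE)) > /proc/irq/${IRQ}/smp_affinity
      CORE=$((CORE + 1))
  done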

My feeling at the moment is that the MTU setting, combined with filtering
of the regular high-volume backup flows, will help most here; I'll know
for sure later on.
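
To make the idea concrete (the interface name, MTU value, backup subnet
and rsync port below are placeholders, not the real values from this
setup):

  # Raise the capture interface MTU so larger frames are not truncated.
  ip link set dev eth4 mtu 9216

  # Keep the known high-volume backup flows out of the capture with a BPF
  # filter; the same expression can go in the af-packet section of
  # suricata.yaml as a bpf-filter option.
  suricata -c /etc/suricata/suricata.yaml --af-packet=eth4 \
      'not (net 10.10.50.0/24 and port 873)'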

Tongue-in-cheek: could use a vendor "bake off" to have them solve the
problem during the "free" beta test.  About your blog post...

--TC



On Wed, Aug 21, 2013 at 11:25 AM, vpiserchia at gmail.com
<vpiserchia at gmail.com> wrote:

> Dear TC,
>
> looking at the proc_interrupts.txt file I see something strange:
> interrupts don't seem to be correctly balanced, and more than one core is
> bound to interrupts from multiple (in the worst case: up to 3) rx/tx
> queues
>
> regards
> -vito
>
>
>
> On 08/21/2013 08:11 PM, Tritium Cat wrote:
> > Re: hardware queues.  I know that is the case from testing, as I was
> > using 12 queues per port to distribute the four ports among 48 cores.
> > See [1] for the reference to "16 hardware queues per port".  Before you
> > dismiss that as unofficial documentation, those vendors typically
> > cut/paste right from the vendor docs.  I'm trying to find the specific
> > quote in the Intel documentation but cannot find it at the moment.  As
> > I understood it, that is the benefit of multi-port cards such as the
> > Silicom 6-port 10G card: more hardware queues.
> >
> > [1] http://www.avadirect.com/product_details_parts.asp?PRID=15550
> >
> > Re: Interrupt balancing.  Yep, that's how I achieved manual IRQ
> > balancing prior to your suggestion to use irqbalance: by modifying
> > set_irq_affinity.sh to adjust accordingly.  See the initial post and
> > proc_interrupts.txt.
> >
> > I think maybe the recent trend of recommending irqbalance over
> > set_irq_affinity.sh was because of this possibly overlooked aspect; I
> > went with your advice anyhow.
> >
> > Re: FdirMode/FdirPballoc, see below and [2].
> >
> > [2] http://www.ntop.org/products/pf_ring/hardware-packet-filtering/
> >
> >
> > Thanks for trying everything, but at this point I need time to pore
> > over everything I've tried combined with everything I've learned in
> > this thread.
> >
> > Regards,
> >
> > --TC
> >
> >
> > #---------------------------------------------------------#
> >
> > Intel(R) Ethernet Flow Director
> > -------------------------------
> > NOTE: Flow director parameters are only supported on kernel versions
> > 2.6.30 or later.
> >
> > Supports advanced filters that direct receive packets by their flows
> > to different queues. Enables tight control on routing a flow in the
> > platform. Matches flows and CPU cores for flow affinity. Supports
> > multiple parameters for flexible flow classification and load
> > balancing.
> >
> > Flow director is enabled only if the kernel is multiple TX queue capable.
> >
> > An included script (set_irq_affinity) automates setting the IRQ to CPU
> > affinity.
> >
> > You can verify that the driver is using Flow Director by looking at
> > the counters in ethtool: fdir_miss and fdir_match.
> >
> > Other ethtool Commands:
> > To enable Flow Director
> >       ethtool -K ethX ntuple on
> > To add a filter
> >       Use -U switch. e.g., ethtool -U ethX flow-type tcp4 src-ip
> >         192.168.0.100 action 1
> > To see the list of filters currently present:
> >       ethtool -u ethX
> >
> > The following two parameters impact Flow Director.
> >
> > FdirPballoc
> > -----------
> > Valid Range: 1-3 (1=64k, 2=128k, 3=256k)
> > Default Value: 1
> >
> >   Flow Director allocated packet buffer size.
> >
> > #---------------------------------------------------------#
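
(For what it's worth, a quick way to look at those counters, with an
assumed interface name:)

  # Prints the Flow Director match/miss statistics the excerpt refers to.
  ethtool -S eth4 | grep -i fdir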
> >
> >
> >
> >
> >
> >
>
>

