[Oisf-devel] improve flow table in workers runmode

Vito Piserchia vpiserchia at gmail.com
Wed Apr 3 15:35:26 UTC 2013


Hi all,

I think it would be of interest in every situation where Suricata takes
advantage of software as well as hardware solutions in which some kind
of flow coalescing exists.
Suricata would also benefit from a "cache-friendly" packet dispatcher
whenever a multicore environment is used.
I would suggest a very interesting project that tries to address these
aspects: http://fireless.cs.cornell.edu/netslice

@chris just realized I sent the mail only to you... sorry for that

best regards
vito



On Wed, Apr 3, 2013 at 3:40 PM, Victor Julien <victor at inliniac.net> wrote:

> On 04/03/2013 10:59 AM, Chris Wakelin wrote:
> > On 03/04/13 09:19, Victor Julien wrote:
> >> > On 04/03/2013 02:31 AM, Song liu wrote:
> >>> >>
> >>> >> Right now, all workers share one big flow table, and there will
> >>> >> be contention for it.
> >>> >> Assuming the network interface provides flow affinity, each
> >>> >> worker will handle its own set of flows.
> >>> >> In that case, I think it makes more sense for each worker to
> >>> >> have its own flow table rather than one big table, to reduce
> >>> >> contention.
> >>> >>
> >> >
> >> > We've been discussing this before and I think it would make sense. It
> >> > does require quite a bit of refactoring though, especially since we'd
> >> > have to support the current setup as well for the non-workers
> runmodes.
> >> >
> > It sounds like a good idea when things like PF_RING are supposed to
> > handle the flow affinity onto virtual interfaces for us (PF_RING DNA +
> > libzero clusters do, and there's the PF_RING_DNA_SYMMETRIC_RSS flag for
> > PF_RING DNA without libzero and interfaces that support RSS).
>
> Actually, all workers implementations share the same assumption
> currently: flow based load balancing in pf_ring, af_packet, nfq, etc. So
> I think it makes sense to have a flow engine per worker in all these cases.
>
> Currently we have a single "FlowManager" thread, which acts as a garbage
> collector. It runs outside of the packet paths and deals with things
> like timeouts, etc. This will continue to lead to some contention. It's
> very non-greedy though: using trylocks, it backs off as soon as a
> lock is busy.
>
> We'll have to see if we'd want a FlowManager per flow engine, keep the
> current single one, or maybe something in between.
>
> --
> ---------------------------------------------
> Victor Julien
> http://www.inliniac.net/
> PGP: http://www.inliniac.net/victorjulien.asc
> ---------------------------------------------
>
> _______________________________________________
> Suricata IDS Devel mailing list: oisf-devel at openinfosecfoundation.org
> Site: http://suricata-ids.org | Participate:
> http://suricata-ids.org/participate/
> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-devel
> Redmine: https://redmine.openinfosecfoundation.org/
>
