[Oisf-devel] improve flow table in workers runmode

Victor Julien victor at inliniac.net
Wed Apr 3 13:40:27 UTC 2013


On 04/03/2013 10:59 AM, Chris Wakelin wrote:
> On 03/04/13 09:19, Victor Julien wrote:
>> On 04/03/2013 02:31 AM, Song liu wrote:
>>>
>>> Right now, all workers share one big flow table, and there is
>>> contention for it.
>>> Assuming the network interface provides flow affinity, each worker
>>> will handle its own distinct set of flows.
>>> In that case, I think it makes more sense for each worker to have
>>> its own flow table rather than one big shared table, to reduce
>>> contention.
>>>
>>
>> We've been discussing this before and I think it would make sense. It
>> does require quite a bit of refactoring though, especially since we'd
>> have to support the current setup as well for the non-workers runmodes.
>>
> It sounds like a good idea when things like PF_RING are supposed to
> handle the flow affinity onto virtual interfaces for us (PF_RING DNA +
> libzero clusters do, and there's the PF_RING_DNA_SYMMETRIC_RSS flag for
> PF_RING DNA without libzero and interfaces that support RSS).

Actually, all workers implementations currently share the same
assumption: flow-based load balancing in pf_ring, af_packet, nfq, etc.
So I think it makes sense to have a flow engine per worker in all these
cases.
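A minimal sketch of what that could look like, assuming flow-affine
load balancing: each worker owns a private hash table, so flow lookup
needs no locking at all. All names here (WorkerFlowTable etc.) are made
up for illustration; this is not current Suricata code.

#include <stdint.h>
#include <stddef.h>
#include <time.h>

typedef struct Flow_ {
    uint32_t hash;          /* hash of the flow 5-tuple */
    time_t lastts;          /* time of the last packet, for timeouts */
    struct Flow_ *next;     /* bucket chain */
    /* ... rest of the flow state ... */
} Flow;

typedef struct WorkerFlowTable_ {
    Flow **buckets;         /* owned by exactly one worker thread */
    uint32_t size;
} WorkerFlowTable;

/* Lock-free lookup: safe because the capture method guarantees all
 * packets of a flow reach the worker that owns this table. */
static Flow *WorkerFlowLookup(WorkerFlowTable *ft, uint32_t hash)
{
    for (Flow *f = ft->buckets[hash % ft->size]; f != NULL; f = f->next) {
        if (f->hash == hash)
            return f;
    }
    return NULL;
}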

Currently we have a single "FlowManager" thread, which acts as a garbage
collector. It runs outside of the packet paths and deals with things
like timeouts. This will continue to cause some contention, but it's
very non-greedy: using trylocks, it backs away as soon as a lock is
busy.
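As a sketch of that non-greedy pattern (illustrative names, assuming a
bucket-locked shared table; not the actual FlowManager code):

#include <pthread.h>
#include <stdint.h>

typedef struct FlowBucket_ {
    pthread_mutex_t lock;   /* shared with the packet path */
    struct Flow_ *head;
} FlowBucket;

void FlowTimeoutScan(FlowBucket *buckets, uint32_t size)
{
    for (uint32_t i = 0; i < size; i++) {
        /* Back away immediately if a worker holds the lock; the
         * bucket simply gets another chance on the next pass. */
        if (pthread_mutex_trylock(&buckets[i].lock) != 0)
            continue;
        /* ... walk buckets[i].head and evict timed-out flows ... */
        pthread_mutex_unlock(&buckets[i].lock);
    }
}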

We'll have to see whether we want a FlowManager per flow engine, to keep
the current single one, or maybe something in between.
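For the per-engine option, one possible shape (reusing the hypothetical
WorkerFlowTable from the sketch above, again purely illustrative): each
worker sweeps its own private table between packets, so timeout handling
needs no locks either.

#include <stdlib.h>

void WorkerFlowTimeoutPass(WorkerFlowTable *ft, time_t now, time_t timeout)
{
    for (uint32_t i = 0; i < ft->size; i++) {
        Flow **prev = &ft->buckets[i];
        while (*prev != NULL) {
            Flow *f = *prev;
            if (now - f->lastts > timeout) {
                *prev = f->next;    /* unlink; no locks, table is private */
                free(f);
            } else {
                prev = &f->next;
            }
        }
    }
}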

-- 
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------
