[Oisf-devel] improve flow table in workers runmode
Victor Julien
victor at inliniac.net
Fri Apr 19 09:02:38 UTC 2013
On 04/17/2013 04:35 PM, vpiserchia at gmail.com wrote:
>> On Wed, 2013-04-03 at 15:40 +0200, Victor Julien wrote:
>>> On 04/03/2013 10:59 AM, Chris Wakelin wrote:
>>>> On 03/04/13 09:19, Victor Julien wrote:
>>>>> On 04/03/2013 02:31 AM, Song liu wrote:
>>>>>>
>>>>>> Right now, all workers will share one big flow table, and there
>>>>>> will be contention for it.
>>>>>> Supposing that the network interface is flow-affine, each worker
>>>>>> will handle individual flows.
>>>>>> In this way, I think it makes more sense for each worker to have
>>>>>> its own flow table rather than one big table, to reduce contention.
>>>>>
>>>>> We've been discussing this before and I think it would make sense.
>>>>> It does require quite a bit of refactoring though, especially since
>>>>> we'd have to support the current setup as well for the non-workers
>>>>> runmodes.
>>>>>
> Another approach could be something like a partitioned hash table
> with a fine-grained locking strategy.
We could do this, however I'd like to see some numbers first. I suspect
contention isn't really problematic in the flow engine right now.
> Even stronger would be using lock-free data structures; does anyone
> have experience with this topic?
Some, and it's a really hard topic. Still interesting, of course.
> The last alternative idea could be to use the userspace RCU
> implementation [1]
Also interesting, although last time I checked it used mutexes internally
for some bookkeeping. Still, contention on those mutexes should be quite
a bit lower than in our case. Definitely worth looking into further.
--
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------