[Oisf-users] Workers vs AutoFP with more than 32 cores

Cooper F. Nelson cnelson at ucsd.edu
Thu Aug 29 22:04:54 UTC 2013


On 8/29/2013 12:57 PM, Tritium Cat wrote:
> For this setup it seems ~300,000 pps is about the limit before performance
> degrades.  (For those claiming 10g on a single box with less cores/memory
> and less rules... what are your packets per second ?)

Around 300k-400k pps peak.
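For a rough sanity check, pps converts to bandwidth once you assume an average packet size (the 800-byte figure below is purely illustrative; real traffic mixes vary widely):

```python
# Back-of-envelope: convert packets/sec to bits/sec.
# The average packet size is an assumption, not a measurement.
def pps_to_gbps(pps, avg_pkt_bytes):
    return pps * avg_pkt_bytes * 8 / 1e9

print(pps_to_gbps(300_000, 800))   # ~1.92 Gbps at 800-byte packets
```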

> The hardware configuration is a dual-port Intel x520-DA2 card with both
> ports receiving a balanced share of traffic.  Each port is setup to use a
> DNA Libzero cluster with 22 threads.  (DNA libzero has a limit of 32
> threads per cluster and is the reason I used two ports and two clusters.)
>  The server is very similar to the one discussed in the Clarkson.edu paper
> below and I used a number of their suggestions such as 65534
> max-pending-packets.

Max-pending-packets isn't relevant when using 'workers' mode, as each
thread processes only one packet at a time.
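For reference, the relevant suricata.yaml fragment looks something like this (a sketch; the value shown is the one from the Clarkson paper, not a recommendation):

```yaml
# suricata.yaml (illustrative fragment)
runmode: workers          # each capture thread runs the full pipeline

# Mainly meaningful in autofp mode, where decoded packets queue up
# between capture and detect threads; in workers mode each thread
# handles one packet at a time, so a large value buys little.
max-pending-packets: 65534
```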

> Workers runmode is the only configuration that works for me.  I tried to
> get AutoFP working while honoring cache coherency (afaiu..) but no matter
> the configuration it always failed to process anything more than 200-300
> Mbps and dropped 85%+ packets on each capture thread.

Yeah, it seems autofp only works for <1 Gbps of traffic.

> I guess that's the tradeoff with workers runmode, only a single detect
> module is available and it can starve out the other modules on that thread.
>  ( If I'm totally wrong on some of this please point it out ).  I think the
> drops are from one or more of the signatures consuming the time spent in
> the detect module but I cannot tell since they are all in the same thread.
>  Some workers see ~35-45k pps with no drops while others may drop packets
> while processing under 10k pps.

No, you are correct.  There is an inherent limitation in worker mode in
that each thread can only process so many pps.  Suricata is basically
running multiple, single-threaded clones of itself.
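In practice that means throughput only scales by pinning each worker to its own core. A sketch of the cpu-affinity section (the core range is an assumption to match the 22-thread clusters described above):

```yaml
# suricata.yaml (illustrative fragment)
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - worker-cpu-set:
        cpu: [ "2-23" ]      # one worker per core; range is illustrative
        mode: "exclusive"    # pin each worker thread to a single core
        prio:
          default: "high"
```

A worker that lands on an overloaded core drops packets regardless of what the other workers are doing, which matches the uneven per-thread drop rates described above.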

> The memory usage is low but I think there is a leak somewhere and is maybe
> related to the higher thread count.  After ~12 hours of runtime the server
> finally used all memory and crashed.  (i.e. if you're not using many many
> threads maybe you'll not trigger this condition)

I don't think it leaks memory.  I think there is a known issue where
some of the data structures slowly grow over time.  I just restart it.

- -- 
Cooper Nelson
Network Security Analyst
UCSD ACT Security Team
cnelson at ucsd.edu x41042

