[Oisf-users] Suricata on 8 cores, ~70K packets/sec

Chris Wakelin c.d.wakelin at reading.ac.uk
Tue Feb 15 19:21:42 UTC 2011


OK, I've just run for an hour with:

max-pending-packets: 5000
detect_thread_ratio: 0.25
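
(For context, those live in suricata.yaml roughly as follows, if I remember the layout right: max-pending-packets is a top-level option and detect_thread_ratio sits under the threading section:)

   # sketch only - check against your own suricata.yaml
   max-pending-packets: 5000

   threading:
     detect_thread_ratio: 0.25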

I took out the emerging-web*.rules as they're not really relevant to the
students (and there are a lot of rules in them).
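
(I just commented them out of the rule-files list in suricata.yaml; the file names below are purely illustrative, from memory, rather than the exact names in the ET rule set:)

   rule-files:
    # - emerging-web.rules                 # illustrative name
    # - emerging-web_specific_apps.rules   # illustrative name
      # (rest of the list left as-is)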

I didn't get any SC_WARN_FLOW_EMERGENCY warnings despite running at over
100k packets/sec (the students are at home now!).

> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:344) <Info> (StreamTcpInitConfig) -- stream "max_sessions": 262144
> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:356) <Info> (StreamTcpInitConfig) -- stream "prealloc_sessions": 32768
> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:366) <Info> (StreamTcpInitConfig) -- stream "memcap": 33554432
> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:373) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled
> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:381) <Info> (StreamTcpInitConfig) -- stream "async_oneside": disabled
> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:390) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 67108864
> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:410) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 1048576
> [8445] 15/2/2011 -- 17:29:39 - (stream-tcp.c:421) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
> [8446] 15/2/2011 -- 17:29:39 - (source-pfring.c:267) <Info> (ReceivePfringThreadInit) -- Using PF_RING v.4.5.0
> [8446] 15/2/2011 -- 17:29:39 - (source-pfring.c:275) <Info> (ReceivePfringThreadInit) -- pfring cluster type cluster_flow
> [8446] 15/2/2011 -- 17:29:39 - (source-pfring.c:290) <Info> (ReceivePfringThreadInit) -- pfring_set_cluster-id 99 set successfully
> [8445] 15/2/2011 -- 17:29:39 - (tm-threads.c:1487) <Info> (TmThreadWaitOnThreadInit) -- all 7 packet processing threads, 3 management threads initialized, engine started.
> [8445] 15/2/2011 -- 18:40:52 - (suricata.c:1258) <Info> (main) -- signal received
> [8446] 15/2/2011 -- 18:40:52 - (source-pfring.c:311) <Info> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 443917764, bytes 350043144915
> [8445] 15/2/2011 -- 18:40:52 - (suricata.c:1288) <Info> (main) -- time elapsed 4273s
> [8446] 15/2/2011 -- 18:40:52 - (source-pfring.c:315) <Info> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:443917764 Recv:443917764 Drop:0 (0.0%).
> [8448] 15/2/2011 -- 18:40:52 - (stream-tcp.c:3465) <Info> (StreamTcpExitPrintStats) -- (Stream1) Packets 323477963
> [8452] 15/2/2011 -- 18:40:52 - (alert-fastlog.c:324) <Info> (AlertFastLogExitPrintStats) -- (Outputs) Alerts 449
> [8452] 15/2/2011 -- 18:40:52 - (alert-unified2-alert.c:603) <Info> (Unified2AlertThreadDeinit) -- Alert unified2 module wrote 449 alerts
> [8452] 15/2/2011 -- 18:40:52 - (log-httplog.c:404) <Info> (LogHttpLogExitPrintStats) -- (Outputs) HTTP requests 9993
> [8452] 15/2/2011 -- 18:40:52 - (log-droplog.c:389) <Info> (LogDropLogExitPrintStats) -- (Outputs) Dropped Packets 0
> [8453] 15/2/2011 -- 18:40:52 - (flow.c:1141) <Info> (FlowManagerThread) -- 1211205 new flows, 755102 established flows were timed out, 623507 flows in closed state
> [8445] 15/2/2011 -- 18:40:54 - (stream-tcp-reassemble.c:352) <Info> (StreamTcpReassembleFree) -- Max memuse of the stream reassembly engine 67108861 (in use 0)
> [8445] 15/2/2011 -- 18:40:54 - (stream-tcp.c:466) <Info> (StreamTcpFreeConfig) -- Max memuse of stream engine 33554304 (in use 0)
> [8445] 15/2/2011 -- 18:40:54 - (detect.c:3335) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete

So it ran out of stream memory again; the reassembly engine peaked at
67108861 bytes of its 67108864-byte memcap, and the stream engine was
likewise right at its limit. I've just quadrupled both memcaps and
doubled the reassembly depth. I've also changed the TCP established flow
timeout to 600 seconds (120 in emergency mode) rather than 3600 (300).
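
(For anyone following along, that amounts to roughly the following in suricata.yaml; the values are just the ones from the log above scaled up, and the key names are from the config here, so double-check them against your own file:)

   stream:
     memcap: 134217728             # 4 x 33554432
     reassembly:
       memcap: 268435456           # 4 x 67108864
       depth: 2097152              # 2 x 1048576

   flow-timeouts:
     tcp:
       established: 600            # was 3600
       emergency_established: 120  # was 300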

I did try set_cpu_affinity a while back (before Eric's patches), and it
didn't seem to make any difference.
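
(For the record, the old-style knob is just a yes/no toggle in the threading section, along these lines; Eric's git-master changes add finer-grained per-thread CPU affinity on top of that:)

   threading:
     set_cpu_affinity: yes   # the toggle I experimented with; no obvious difference here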

I'll let you know how it gets on!

Best Wishes,
Chris

On 15/02/11 17:30, Victor Julien wrote:
> Hey Chris, thanks for your report. Comments inline.


> You might just try further increasing your flow settings. Also you could
> consider lowering the flow-timeouts.

>>> [7008] 15/2/2011 -- 08:29:36 - (stream-tcp-reassemble.c:352) <Info> (StreamTcpReassembleFree) -- Max memuse of the stream reassembly engine 67108863 (in use 0)
> 
> The stream reassembly code reached its memcap here; please try setting
> it much higher.
> 
>>> [7008] 15/2/2011 -- 08:29:36 - (stream-tcp.c:466) <Info> (StreamTcpFreeConfig) -- Max memuse of stream engine 33554304 (in use 0)
> 
> Same here.

>> I'm not sure whether setting CPU affinity would help; the comment "On
>> Intel Core2 and Nehalem CPU's enabling this will degrade performance"
>> put me off, though in fact our CPUs are slightly older:

> I guess you can just try it. Or like Eric suggested, update to the
> latest git master and use his new way to control the threading in more
> detail...
> 
> Cheers,
> Victor
> 


-- 
--+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+-
Christopher Wakelin,                           c.d.wakelin at reading.ac.uk
IT Services Centre, The University of Reading,  Tel: +44 (0)118 378 8439
Whiteknights, Reading, RG6 6AF, UK              Fax: +44 (0)118 975 3094
