[Oisf-users] Suricata threading

Shirkdog shirkdog at gmail.com
Thu Aug 14 14:54:12 UTC 2014


Sorry, Eric Leblond's guide. :)

---
Michael Shirk


On Thu, Aug 14, 2014 at 10:37 AM, Peter Manev <petermanev at gmail.com> wrote:
> On Thu, Aug 14, 2014 at 4:32 PM, Shirkdog <shirkdog at gmail.com> wrote:
>> I would also add Peter's guide for configuring Suricata for high performance:
>>
>> https://home.regit.org/2012/07/suricata-to-10gbps-and-beyond/
>
> Eric's :)
>
>>
>> ---
>> Michael Shirk
>>
>>
>> On Thu, Aug 14, 2014 at 7:26 AM, Peter Manev <petermanev at gmail.com> wrote:
>>> On Thu, Aug 14, 2014 at 12:26 PM, Russell Fulton
>>> <r.fulton at auckland.ac.nz> wrote:
>>>> Thanks Duarte and Coop!
>>>>
>>>> On 14/08/2014, at 7:11 pm, Duarte Silva <duarte.silva at serializing.me> wrote:
>>>>
>>>> Hi,
>>>>
>>>> In your configuration you should enable affinity :P
>>>>
>>>> #
>>>> # On Intel Core2 and Nehalem CPU's enabling this will degrade performance.
>>>> #
>>>> set-cpu-affinity: no
>>>>
>>>>
>>>> Change this to yes, otherwise any settings below will be ignored.
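
A minimal sketch of the relevant threading toggle in suricata.yaml (the option names match the config quoted later in this thread; the CPU values are illustrative only):

```yaml
threading:
  set-cpu-affinity: yes          # must be "yes", or the cpu-affinity list below is ignored
  cpu-affinity:
    - detect-cpu-set:
        cpu: [ "13-15" ]         # pin detect threads to CPUs 13-15
        mode: "exclusive"
        threads: 3
```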
>>>>
>>>>
>>>> I fixed that, but the behaviour has not changed much; it is still
>>>> hogging one CPU.
>>>>
>>>> Looking at the startup logs I see:
>>>>
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Core
>>>> dump size set to unlimited.
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> dropped the caps for main thread
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - fast
>>>> output device (regular) initialized: fast.log
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> Unified2-alert initialized: filename unified2.alert, limit 32 MB
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> Adding interface eth3 from config file
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "management-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "receive-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "decode-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "stream-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "detect-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Using
>>>> default prio 'medium'
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "verdict-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Using
>>>> default prio 'high'
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "reject-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Using
>>>> default prio 'low'
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Found
>>>> affinity definition for "output-cpu-set"
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Using
>>>> default prio 'medium'
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Using
>>>> flow cluster mode for PF_RING (iface eth3)
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> - Going
>>>> to use 1 thread(s)
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> Setting affinity on CPU 13
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> Setting prio -2 for "RxPFReth31" Module to cpu/core 13, thread id 9432
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Error> -
>>>> [ERRCODE: SC_ERR_THREAD_NICE_PRIO(47)] - Error setting nice value for thread
>>>> RxPFReth31: Operation not permitted
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> (RxPFReth31) Using PF_RING v.5.6.1, interface eth3, cluster-id 99,
>>>> single-pfring-thread
>>>> Aug 14 22:15:08 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> RunModeIdsPfringWorkers initialised
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> Setting prio 0 for "FlowManagerThread" thread , thread id 9433
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream "max-sessions": 262144
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream "prealloc-sessions": 32768
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream "memcap": 33554432
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream "midstream" session pickups: disabled
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream "async-oneside": disabled
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream "checksum-validation": enabled
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream."inline": disabled
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream.reassembly "memcap": 67108864
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream.reassembly "depth": 1048576
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream.reassembly "toserver-chunk-size": 2560
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:08 - <Info> -
>>>> stream.reassembly "toclient-chunk-size": 2560
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:09 - <Info> -
>>>> Setting prio 0 for "SCPerfWakeupThread" thread , thread id 9434
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:09 - <Info> -
>>>> Setting prio 0 for "SCPerfMgmtThread" thread , thread id 9435
>>>> Aug 14 22:15:09 secmonprd02 suricata: 14/8/2014 -- 22:15:09 - <Info> - all 1
>>>> packet processing threads, 3 management threads initialized, engine started.
>>>>
>>>> I get affinity set for just cpu 13.
>>>>
>>>> I am guessing the nice call fails because I have dropped privs.
>>>>
>>>> here is the current config:
>>>>
>>>> # Tune cpu affinity of suricata threads. Each family of threads can be bound
>>>>   # on specific CPUs.
>>>>   cpu-affinity:
>>>>     - management-cpu-set:
>>>>         cpu: [ 10 ]  # include only these cpus in affinity settings
>>>>     - receive-cpu-set:
>>>>         cpu: [ 10 ]  # include only these cpus in affinity settings
>>>>     - decode-cpu-set:
>>>>         cpu: [ 10, 11 ]
>>>>         mode: "balanced"
>>>>     - stream-cpu-set:
>>>>         cpu: [ "10-11" ]
>>>>     - detect-cpu-set:
>>>>         cpu: [ "13-15" ]
>>>>         mode: "exclusive" # run detect threads in these cpus
>>>>         # Use explicitly 3 threads and don't compute the number from the
>>>>         # detect-thread-ratio variable:
>>>>         threads: 3
>>>>         prio:
>>>>           low: [ 10 ]
>>>>           medium: [ "11-12" ]
>>>>           high: [ 13 ]
>>>>           default: "medium"
>>>>     - verdict-cpu-set:
>>>>         cpu: [ 10 ]
>>>>         prio:
>>>>           default: "high"
>>>>     - reject-cpu-set:
>>>>         cpu: [ 10 ]
>>>>         prio:
>>>>           default: "low"
>>>>     - output-cpu-set:
>>>>         cpu: [ "all" ]
>>>>         prio:
>>>>            default: "medium"
>>>>
>>>>
>>>>
>>>>
>>>> I also uncommented the “threads: 3” under detect-cpu-set
>>>>
>>>>
>>>> It is cpu13 that is running at 100%
>>>>
>>>>
>>>
>>>
>>> How do you start Suricata?
>>> What does your pf-ring section in suricata.yaml look like?
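
For reference, a typical pf-ring capture section in suricata.yaml looks something like the fragment below (values are illustrative, not the poster's actual config). The `threads` setting controls how many capture threads are started, which would explain the "Going to use 1 thread(s)" line in the log above if it is set to 1 or left unset:

```yaml
pfring:
  - interface: eth3
    threads: 3                  # number of capture/worker threads to start
    cluster-id: 99              # matches the cluster-id shown in the startup log
    cluster-type: cluster_flow  # flow-based load balancing across the threads
```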
>>>
>>> thanks
>>>
>>> --
>>> Regards,
>>> Peter Manev
>>> _______________________________________________
>>> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
>>> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
>>> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>> OISF: http://www.openinfosecfoundation.org/
>
>
>
> --
> Regards,
> Peter Manev
