[Oisf-users] [Discussion] Suricata Performance Tuning (kernel_drops very high)
Jay M.
jskier at gmail.com
Tue Jan 13 13:44:19 UTC 2015
I'd also suggest testing a good BPF filter to cull noisy and
irrelevant traffic at that kind of volume (quick sketch below).
Curious, which distro / kernel are you using?
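Something along these lines under af-packet in suricata.yaml, for
instance (the host/port here are only placeholders, filter on whatever
is actually safe to ignore in your environment):

  af-packet:
    - interface: p4p1
      # drop known-noisy traffic before Suricata inspects it,
      # e.g. backup traffic between two hosts you don't care about
      bpf-filter: "not (host 10.0.0.5 and port 873)"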
--
Jay
jskier at gmail.com
On Tue, Jan 13, 2015 at 6:32 AM, Victor Julien <lists at inliniac.net> wrote:
> On 01/13/2015 01:02 PM, Victor Julien wrote:
>> On 01/12/2015 05:22 PM, Barkley, Joey wrote:
>>> All,
>>>
>>> I am running Suricata and have done my best to configure it properly, but I’m failing. We are getting lots of traffic logged, but I am seeing loads of kernel_drops. Can someone please tell me how I might tweak performance to reduce the loss? I’m very new to Suricata and fairly new to IDS setup in general. Here is our current setup:
>>>
>>> 32 Core System
>>> 256GB RAM
>>> 1Gbps Management Interface
>>> 2x10Gbps Monitoring Interface (but currently only 1 is in use)
>>>
>>> Right now we are using around 82GB of RAM and 38% CPU. Status entries are pasted at the end of the message.
>>>
>>> Here is some of my suricata.yaml config. If I should provide additional sections just let me know.
>>> # Output file configuration
>>> outputs:
>>>   - eve-log:
>>>       enabled: yes
>>>       filetype: regular
>>>       filename: edge-int-lv.evejson
>>>       types:
>>>         - alert:
>>>             payload: yes
>>>             packet: yes
>>>             http: yes
>>>         - http:
>>>             extended: yes
>>>         - dns
>>>         - tls:
>>>             extended: yes
>>>         - files:
>>>             force-magic: yes
>>>             force-md5: yes
>>>         - ssh
>>>         - flow
>>>         - netflow
>
> You may want to disable a couple of the logs (esp dns) to see if that
> helps performance.
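For instance, trimming the heavier entries out of the eve-log 'types'
list above while you measure the drop rate (which ones you can live
without is your call; dns is shown here purely as an example):

  types:
    - alert:
        payload: yes
        packet: yes
        http: yes
    # - dns        # temporarily disabled to see the effect on kernel_drops
    - tls:
        extended: yes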
>
>>>   - stats:
>>>       enabled: yes
>>>       filename: stats-edge-int-lv.log
>>>       interval: 8
>>>   - fast:  # a line based alerts log similar to Snort's fast.log
>>>       enabled: yes
>>>       filename: fast-edge-int-lv.log
>>>       append: yes
>>>       filetype: regular  # 'regular', 'unix_stream' or 'unix_dgram'
>>>
>>> threading:
>>>   set-cpu-affinity: yes
>>>   cpu-affinity:
>>>     - management-cpu-set:
>>>         cpu: [ "all" ]  # include only these cpus in affinity settings
>>>         mode: "balanced"
>>>         prio:
>>>           default: "low"
>>>     - receive-cpu-set:
>>>         cpu: [ "all" ]  # include only these cpus in affinity settings
>>>     - detect-cpu-set:
>>>         cpu: [ "all" ]
>>>         mode: "exclusive"  # run detect threads in these cpus
>>>         prio:
>>>           default: "high"
>>>   detect-thread-ratio: 1.5
>>>
>>> max-pending-packets: 2048
>
> This is low. Try upping to 60000 or so.
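That is a single top-level setting in suricata.yaml, i.e. something
like:

  # number of packets allowed to be processed simultaneously by the engine
  max-pending-packets: 60000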
>
>>>
>>> runmode: autofp
>
> Almost everyone reports better perf with 'workers'.
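I.e.:

  # in workers mode each capture thread runs the full pipeline
  # (decode, stream, detect, outputs) on its own packets
  runmode: workers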
>
>
>>>
>>> host-mode: sniffer-only
>>>
>>> af-packet:
>>>   - interface: p4p1
>>>     threads: 16
>>>     cluster-id: 99
>>>     cluster-type: cluster_cpu
>
> cluster_cpu will require properly set up drivers and such. I recommend
> cluster_flow unless you're certain you've set everything up correctly.
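That would be a one-word change in the af-packet block above, e.g.:

  af-packet:
    - interface: p4p1
      threads: 16
      cluster-id: 99
      # cluster_flow lets the kernel hash flows across the capture
      # threads and needs no special driver/NIC setup
      cluster-type: cluster_flow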
>
> [..snip..]
>
>>> -------------------------------------------------------------------
>>> Counter | TM Name | Value
>>> -------------------------------------------------------------------
>>> capture.kernel_packets | RxPcapp4p11 | 3408330077
>>> capture.kernel_drops | RxPcapp4p11 | 3532275578
>>> capture.kernel_ifdrops | RxPcapp4p11 | 0
>>> dns.memuse | RxPcapp4p11 | 3681302
>>> dns.memcap_state | RxPcapp4p11 | 23601
>>> dns.memcap_global | RxPcapp4p11 | 0
>>> decoder.pkts | RxPcapp4p11 | 25645856945
>>> decoder.bytes | RxPcapp4p11 | 17615424414799
>>> decoder.invalid | RxPcapp4p11 | 3
>>> decoder.ipv4 | RxPcapp4p11 | 25645892638
>>> decoder.ipv6 | RxPcapp4p11 | 38560
>>> decoder.ethernet | RxPcapp4p11 | 25645856945
>>> decoder.raw | RxPcapp4p11 | 0
>>> decoder.sll | RxPcapp4p11 | 0
>>> decoder.tcp | RxPcapp4p11 | 24557853433
>>> decoder.udp | RxPcapp4p11 | 1039077879
>>> decoder.sctp | RxPcapp4p11 | 0
>>> decoder.icmpv4 | RxPcapp4p11 | 37915322
>>> decoder.icmpv6 | RxPcapp4p11 | 841
>>> decoder.ppp | RxPcapp4p11 | 0
>>> decoder.pppoe | RxPcapp4p11 | 0
>>> decoder.gre | RxPcapp4p11 | 0
>>> decoder.vlan | RxPcapp4p11 | 0
>>> decoder.vlan_qinq | RxPcapp4p11 | 0
>>> decoder.teredo | RxPcapp4p11 | 37722
>>> decoder.ipv4_in_ipv6 | RxPcapp4p11 | 0
>>> decoder.ipv6_in_ipv6 | RxPcapp4p11 | 0
>>> decoder.mpls | RxPcapp4p11 | 0
>>> decoder.avg_pkt_size | RxPcapp4p11 | 686
>>> decoder.max_pkt_size | RxPcapp4p11 | 1514
>>> defrag.ipv4.fragments | RxPcapp4p11 | 10923631
>>> defrag.ipv4.reassembled | RxPcapp4p11 | 244568
>>> defrag.ipv4.timeouts | RxPcapp4p11 | 0
>>> defrag.ipv6.fragments | RxPcapp4p11 | 0
>>> defrag.ipv6.reassembled | RxPcapp4p11 | 0
>>> defrag.ipv6.timeouts | RxPcapp4p11 | 0
>>> defrag.max_frag_hits | RxPcapp4p11 | 0
>>> tcp.sessions | Detect | 73940345
>>> tcp.ssn_memcap_drop | Detect | 0
>>> tcp.pseudo | Detect | 4049413
>>> tcp.pseudo_failed | Detect | 0
>>> tcp.invalid_checksum | Detect | 0
>>> tcp.no_flow | Detect | 0
>>> tcp.reused_ssn | Detect | 535819
>>> tcp.memuse | Detect | 25347440
>>> tcp.syn | Detect | 83940125
>>> tcp.synack | Detect | 36430536
>
> Are you seeing the full traffic? SYN/ACK is less than half of SYN. Could
> be SYN floods as well, but otherwise it may indicate capture issues.
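(Rough numbers from the counters above: tcp.synack / tcp.syn =
36,430,536 / 83,940,125 ≈ 0.43, so fewer than half of the observed SYNs
get a SYN/ACK, and capture.kernel_drops / (kernel_packets +
kernel_drops) = 3,532,275,578 / 6,940,605,655 ≈ 51%, i.e. roughly half
of all packets are dropped in the kernel before Suricata can process
them.)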
>
> --
> ---------------------------------------------
> Victor Julien
> http://www.inliniac.net/
> PGP: http://www.inliniac.net/victorjulien.asc
> ---------------------------------------------
>
> _______________________________________________
> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> Training now available: http://suricata-ids.org/training/