[Oisf-users] High Suricata capture.kernel_drops

Cloherty, Sean E scloherty at mitre.org
Wed Jul 11 14:53:36 UTC 2018


Hello Fatema -



SEPTun is a great resource for sure. From it, I would focus first on CPU affinity: for the worker threads, include only the CPUs on the same NUMA node as the NIC.  (See SEPTun page 14.)
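To see which NUMA node the NIC sits on, and which CPUs belong to each node, something like this works on Linux (the em1 interface name comes from the original mail; substitute your own):

```shell
# NUMA node the NIC is attached to (-1 means the platform reports no NUMA locality)
cat /sys/class/net/em1/device/numa_node 2>/dev/null || echo "em1 not present"

# Which CPUs belong to each NUMA node
lscpu | grep -i 'numa node'
```

The CPU numbers printed for the NIC's node are the ones to list in the worker-cpu-set.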



Some other quick hits –



  *   Set threads to auto and specify which CPUs (by number or range) instead of “all” for the workers to use.  Also – I think you can put CPUs from the other NUMA node in the management-cpu-set, saving the NIC-local CPUs for the workers.
  *   Install the NIC driver from Intel
  *   In the af-packet section – enable tpacket-v3
  *   Change mpm-algo to ac-ks
  *   Stop automated IRQ balancing - Add “killall irqbalance” to your startup script
  *   Check stats.log for memcap overruns and increase the RAM set aside for anything hitting its limit, especially stream.memcap and stream.reassembly.memcap
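Taken together, the changes above might look roughly like this in suricata.yaml. This is only a sketch: the CPU ranges and memcap values are example numbers, not recommendations for your specific box.

```yaml
# Sketch of the suggested changes (example values; tune for your hardware)
af-packet:
  - interface: em1
    threads: auto            # let the affinity settings below decide the count
    cluster-id: 99
    cluster-type: cluster_cpu
    defrag: yes
    use-mmap: yes
    tpacket-v3: yes          # enable TPACKETv3 capture

mpm-algo: ac-ks

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "1-3" ]       # example: CPUs on the other NUMA node
    - worker-cpu-set:
        cpu: [ "4-19" ]      # example: CPUs on the NIC's NUMA node
        mode: "exclusive"

stream:
  memcap: 4gb                # raise if stats.log shows memcap hits
  reassembly:
    memcap: 8gb
```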



As to RAM – I’ve attached a calculator which you can use to see how much memory will be consumed as you vary the values of the relevant settings.
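To spot memcap pressure quickly, you can pull the relevant counters out of stats.log. The path below assumes the default /var/log/suricata location; adjust it for your install.

```shell
# Show the most recent memcap and kernel-drop counters from stats.log
grep -E 'memcap|kernel_drops' /var/log/suricata/stats.log 2>/dev/null | tail -n 20
```

If any *.memcap counter is climbing alongside capture.kernel_drops, that memcap is the first setting to raise.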





Sean


From: Oisf-users [mailto:oisf-users-bounces at lists.openinfosecfoundation.org] On Behalf Of fatema bannatwala
Sent: Tuesday, July 10, 2018 15:01
To: oisf-users at lists.openinfosecfoundation.org
Subject: [Oisf-users] High Suricata capture.kernel_drops

Hi,

I am pretty new to Suricata and started to play around with it.
I have Suricata 4.0.4 running on a CentOS 7 box with 20 cores (40 logical CPUs), an Intel X710 NIC, and 64GB RAM.

I am using AF_PACKET with the following settings (other relevant settings shown below as well):

# Linux high speed capture support
af-packet:
  - interface: em1
    threads: 24
    cluster-id: 99
    cluster-type: cluster_cpu
    defrag: yes
    use-mmap: yes
    ring-size: 30000

......

max-pending-packets: 10000
runmode: workers
mpm-algo: auto
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "all" ]  # include only these cpus in affinity settings
        mode: "balanced"
        prio:
          default: "low"
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these cpus in affinity settings
    - worker-cpu-set:
        cpu: [ "all" ]
        mode: "exclusive"
        prio:
          low: [ 0 ]
          medium: [ "1-2" ]
          high: [ 3 ]
          default: "medium"

detect-thread-ratio: 1.0


I am monitoring a ~5GBps link and seeing high kernel drops in stats.log:
capture.kernel_packets                     | Total                     | 301360376
capture.kernel_drops                       | Total                     | 67468903

Any idea how I can reduce the kernel packet drop rate? Or how I can check whether the af_packet threads are working correctly?

I have also disabled checksum and other offloads on the ethernet interface:
# ethtool -K em1 rx off tx off tso off sg off gso off gro off

Any help appreciated.

Thanks,
Fatema.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: SuricataMemCalcNoPreAlloc.ods
Type: application/oleobject
Size: 5855 bytes
Desc: SuricataMemCalcNoPreAlloc.ods
URL: <http://lists.openinfosecfoundation.org/pipermail/oisf-users/attachments/20180711/5be48dcd/attachment-0001.bin>

