[Oisf-users] Suricata 2Gbit/s traffic drops on AWS

徐慧 xuh881026 at gmail.com
Fri Aug 23 02:07:25 UTC 2019


hi, team:
     Since AWS traffic mirroring uses a VXLAN tunnel, I have to use the
5.0dev version. I deployed Suricata on AWS, but recently noticed that
'capture.kernel_drops' appears in stats.log when traffic reaches 2 Gbit/s.
Even rsyncing a single large file is enough to make 'capture.kernel_drops'
show up in stats.log. I am loading the default ET rules.
     I hope someone can help me; any advice is welcome!
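For the mirrored traffic to be inspected at all, the VXLAN decoder has to be turned on. A sketch of the relevant suricata.yaml fragment, based on the 5.0 default config (option names are an assumption from that default file; $VXLAN_PORTS defaults to 4789, the port AWS traffic mirroring uses):

```yaml
# Enable VXLAN decoding so Suricata inspects the traffic inside the
# mirror tunnel (assumption: 5.0 default option names).
decoder:
  vxlan:
    enabled: yes
    ports: $VXLAN_PORTS   # defaults to 4789
```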

    # Client rsync files
    $ rsync -trovpgP xxx.tgz /usr/local/data/xxx.tgz
    sending incremental file list
    xxx.tgz
    3,361,243,136  51%  114.14MB/s    0:00:27

    # Suricata Server:
    $ suricata --af-packet -c /etc/suricata/suricata.yaml
    [24073] 23/8/2019 -- 01:51:19 - (tm-threads.c:2145) <Notice> (TmThreadWaitOnThreadInit) -- all 14 packet processing threads, 4 management threads initialized, engine started.
    [24073] 23/8/2019 -- 01:53:58 - (suricata.c:2851) <Notice> (SuricataMainLoop) -- Signal Received.  Stopping engine.
    [24073] 23/8/2019 -- 01:54:01 - (util-device.c:317) <Notice> (LiveDeviceListClean) -- Stats for 'ens5':  pkts: 11270384, drop: 2046365 (18.16%), invalid chksum: 0
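    For reference, the drop rate Suricata prints can be recomputed from the pkts/drop counters in that exit line; a small awk sketch (the log line is hard-coded from the run above):

```shell
# Parse the "Stats for ..." exit line and recompute the kernel drop rate.
LOGLINE="Stats for 'ens5':  pkts: 11270384, drop: 2046365 (18.16%), invalid chksum: 0"
echo "$LOGLINE" | awk '{
    for (i = 1; i <= NF; i++) {
        if ($i == "pkts:") pkts = $(i + 1)   # value may carry a trailing comma
        if ($i == "drop:") drop = $(i + 1)
    }
    gsub(/,/, "", pkts); gsub(/,/, "", drop)
    printf "drop rate: %.2f%%\n", 100 * drop / pkts
}'
# prints: drop rate: 18.16%
```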

    Following the official documentation, I made some optimizations:

https://suricata.readthedocs.io/en/latest/performance/packet-capture.html#rss
    But I cannot set the number of RSS queues to 1:
    $ ethtool -L ens5 combined 1
    Cannot set device channel parameters: Operation not supported
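    Since the channel count cannot be changed here, one thing the same performance guide also recommends is disabling the NIC offloads, which merge or segment packets before capture sees them. A hedged sketch of the commands (untested on the ena driver, which may reject some of them):

```shell
# Disable offloads that distort captured frames (assumption: the ena
# driver accepts these toggles; it may refuse some of them).
ethtool -K ens5 gro off lro off tso off gso off sg off
# Verify which settings actually took effect:
ethtool -k ens5 | grep -E 'offload|scatter-gather'
```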

    Amazon EC2 C5 instance:
    RAM: 32 GB
    CPU: 16 cores (Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz)
    NIC:
        ethtool -l ens5
        Channel parameters for ens5:
        Pre-set maximums:
        RX: 8
        TX: 8
        Other: 0
        Combined: 0
        Current hardware settings:
        RX: 8
        TX: 8
        Other: 0
        Combined: 0

        ethtool -i ens5
        driver: ena
        version: 2.0.3K
        firmware-version:
        expansion-rom-version:
        bus-info: 0000:00:05.0
        supports-statistics: yes
        supports-test: no
        supports-eeprom-access: no
        supports-register-dump: no
        supports-priv-flags: no

    Suricata Version: 5.0.0-dev (3a912446a 2019-07-22)
    Suricata Config:
        af-packet:
        - interface: ens5
            threads: 14
            cluster-id: 99
            cluster-type: cluster_flow
            defrag: yes    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
            use-mmap: yes
            mmap-locked: yes
            tpacket-v3: yes
            ring-size: 400000
            block-size: 393216
            #block-timeout: 10
            #use-emergency-flush: yes
            # buffer-size: 32768
            # disable-promisc: no
            #checksum-checks: kernel
            #bpf-filter: port 80 or udp
            #copy-mode: ips
            #copy-iface: eth1

        - interface: default
            threads: auto
            use-mmap: yes
            tpacket-v3: yes

        max-pending-packets: 1024
        runmode: workers
        default-packet-size: 1522

        defrag:
            memcap: 4gb
            hash-size: 65536
            trackers: 65535 # number of defragmented flows to follow
            max-frags: 65535 # number of fragments to keep (higher than trackers)
            prealloc: yes
            timeout: 60

        flow:
            memcap: 4gb
            hash-size: 1048576
            prealloc: 1048576
            emergency-recovery: 30

        stream:
            memcap: 4gb
            checksum-validation: no
            inline: no
            bypass: yes
            reassembly:
                memcap: 8gb
                depth: 1mb
                toserver-chunk-size: 2560
                toclient-chunk-size: 2560
                randomize-chunk-size: yes


        detect:
            profile: custom
            custom-values:
                toclient-groups: 200
                toserver-groups: 200
            sgh-mpm-context: auto
            inspection-recursion-limit: 3000

        mpm-algo: hs
        spm-algo: hs

        threading:
            set-cpu-affinity: yes
            cpu-affinity:
                - management-cpu-set:
                    cpu: [ "0-1" ]
                    mode: "balanced"
                    prio:
                        default: "medium"
                - worker-cpu-set:
                    cpu: [ "2-15" ]
                    mode: "exclusive"
                    prio:
                        default: "high"
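
    Since the RSS queue count cannot be lowered to 1, the RSS section of the performance guide describes an alternative: run one worker per RSS queue with cluster_qm instead of rehashing every packet with cluster_flow. A hedged sketch, not a tested configuration (it assumes the 8 RX queues reported by ethtool -l, and cluster_qm works best when the NIC hashes flows symmetrically):

```yaml
# Sketch: bind AF_PACKET fanout to the NIC's RSS queues (assumption:
# 8 queues, per the ethtool -l output above).
af-packet:
  - interface: ens5
    threads: 8               # one worker per RSS queue
    cluster-id: 99
    cluster-type: cluster_qm # fanout by RSS queue instead of flow hash
    defrag: yes
    use-mmap: yes
    tpacket-v3: yes
```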