[Oisf-users] suricata + netmap rss queue/packets per second

vincent.ma at gmx.fr
Tue Oct 16 16:32:28 UTC 2018


Hello,

Could you please help me with the following setup:

---|ATTACKER|--10gb--|SURICATA|--1gb--|SERVER|---

ATTACKER (debian 9.5) : 4 cores / 8GB / Intel 10Gb 710XL
SURICATA (debian 9.5) : 32 cores / 8GB / Intel 10Gb 710XL
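
For reference, both 10Gb cards should be on the i40e driver (X710/XL710 family); checking it on the SURICATA box is just:

    ethtool -i eth0     # prints driver (expected: i40e), firmware version and bus-info
    ethtool -i eth2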

netmap:

 - interface: eth0
   threads: auto
   disable-promisc: no
   checksum-checks: auto
   copy-mode: ips
   copy-iface: eth2

 - interface: eth2
   threads: auto
   disable-promisc: no
   checksum-checks: auto
   copy-mode: ips
   copy-iface: eth0

I am running Suricata with netmap in IPS mode (1 RSS queue) and a single rule: drop tcp any any -> any any
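
For completeness, the RSS queue count on the netmap interfaces can be checked like this (a sketch, assuming the "combined" channels reported by ethtool are the relevant count for the i40e driver):

    ethtool -l eth0     # "Combined: 1" under current hardware settings = 1 RSS queue
    ethtool -l eth2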

- With a simulated DoS attack ("hping3 -S -p 80 --flood 192.168.12.80") I reach 1125005 pkts/s and the server still answers pings in under 1 ms.

- With "hping3 -S -p 80 --flood --rand-source 192.168.12.80" ping does not work, if I decrease the number of packets per second "hping3 -S -p 80 -i u8 --rand-source 192.168.12.80", 
I can get : 69692 pkts/s / 1 cpu : 100% / ping works ~14ms
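
For what it is worth, the per-thread load can be watched like this (a minimal sketch, assuming pidof is available; the saturated thread is presumably the single netmap worker W#01-eth0 that shows up in the log below):

    top -H -p $(pidof suricata)    # -H lists individual threads with their CPU usage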

I tried changing the following values:

managers: 2 # default to one flow manager
recyclers: 2 # default to one flow recycler thread

  cpu-affinity:
    - management-cpu-set:
        cpu: [ all ]  # include only these cpus in affinity settings
    - receive-cpu-set:
        cpu: [ all ]  # include only these cpus in affinity settings
    - worker-cpu-set:
        cpu: [ "all" ]
        mode: "balanced"
        # Explicitly use 3 threads instead of computing the number from the
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          low: [ 0 ]
          medium: [ "1-2" ]
          high: [ 3 ]
          default: "low"
    #- verdict-cpu-set:
    #    cpu: [ 0 ]
    #    prio:
    #      default: "high"
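
In case the "all" settings above are part of the problem, a variant with explicitly pinned cores would look roughly like this (only a sketch; the core numbers are arbitrary examples for this 32-core box, not something I have validated):

  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]            # keep flow manager/recycler and friends on core 0
    - worker-cpu-set:
        cpu: [ "1-7" ]        # example: dedicate cores 1-7 to packet workers
        mode: "exclusive"     # pin one worker per core instead of "balanced"
        prio:
          default: "high"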

root@suricata:~# suricata --netmap -vvv --runmode=workers
16/10/2018 -- 17:58:29 - <Info> - Configuration node 'cluster-id' redefined.
16/10/2018 -- 17:58:29 - <Info> - Configuration node 'cluster-type' redefined.
16/10/2018 -- 17:58:29 - <Info> - Configuration node 'defrag' redefined.
16/10/2018 -- 17:58:29 - <Notice> - This is Suricata version 4.0.5 RELEASE
16/10/2018 -- 17:58:29 - <Info> - CPUs/cores online: 32
16/10/2018 -- 17:58:29 - <Config> - Adding interface eth0 from config file
16/10/2018 -- 17:58:29 - <Config> - Adding interface eth2 from config file
16/10/2018 -- 17:58:29 - <Info> - Netmap: Setting IPS mode
16/10/2018 -- 17:58:29 - <Config> - 'default' server has 'request-body-minimal-inspect-size' set to 31805 and 'request-body-inspect-window' set to 4242 after randomization.
16/10/2018 -- 17:58:29 - <Config> - 'default' server has 'response-body-minimal-inspect-size' set to 41829 and 'response-body-inspect-window' set to 16014 after randomization.
16/10/2018 -- 17:58:29 - <Config> - DNS request flood protection level: 500
16/10/2018 -- 17:58:29 - <Config> - DNS per flow memcap (state-memcap): 524288
16/10/2018 -- 17:58:29 - <Config> - DNS global memcap: 16777216
16/10/2018 -- 17:58:29 - <Config> - Protocol detection and parser disabled for modbus protocol.
16/10/2018 -- 17:58:29 - <Config> - Protocol detection and parser disabled for enip protocol.
16/10/2018 -- 17:58:29 - <Config> - Protocol detection and parser disabled for DNP3.
16/10/2018 -- 17:58:29 - <Info> - Found an MTU of 1500 for 'eth0'
16/10/2018 -- 17:58:29 - <Info> - Found an MTU of 1500 for 'eth0'
16/10/2018 -- 17:58:29 - <Info> - Found an MTU of 1500 for 'eth2'
16/10/2018 -- 17:58:29 - <Info> - Found an MTU of 1500 for 'eth2'
16/10/2018 -- 17:58:29 - <Config> - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
16/10/2018 -- 17:58:29 - <Config> - preallocated 1000 hosts of size 136
16/10/2018 -- 17:58:29 - <Config> - host memory usage: 398144 bytes, maximum: 33554432
16/10/2018 -- 17:58:29 - <Config> - Core dump size set to unlimited.
16/10/2018 -- 17:58:29 - <Config> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
16/10/2018 -- 17:58:29 - <Config> - preallocated 65535 defrag trackers of size 168
16/10/2018 -- 17:58:29 - <Config> - defrag memory usage: 14679896 bytes, maximum: 536870912
16/10/2018 -- 17:58:29 - <Config> - stream "prealloc-sessions": 2048 (per thread)
16/10/2018 -- 17:58:29 - <Config> - stream "memcap": 67108864
16/10/2018 -- 17:58:29 - <Config> - stream "midstream" session pickups: disabled
16/10/2018 -- 17:58:29 - <Config> - stream "async-oneside": disabled
16/10/2018 -- 17:58:29 - <Config> - stream "checksum-validation": enabled
16/10/2018 -- 17:58:29 - <Config> - stream."inline": enabled
16/10/2018 -- 17:58:29 - <Config> - stream "bypass": disabled
16/10/2018 -- 17:58:29 - <Config> - stream "max-synack-queued": 5
16/10/2018 -- 17:58:29 - <Config> - stream.reassembly "memcap": 268435456
16/10/2018 -- 17:58:29 - <Config> - stream.reassembly "depth": 1048576
16/10/2018 -- 17:58:29 - <Config> - stream.reassembly "toserver-chunk-size": 2559
16/10/2018 -- 17:58:29 - <Config> - stream.reassembly "toclient-chunk-size": 2533
16/10/2018 -- 17:58:29 - <Config> - stream.reassembly.raw: enabled
16/10/2018 -- 17:58:29 - <Config> - stream.reassembly "segment-prealloc": 2048
16/10/2018 -- 17:58:29 - <Config> - Delayed detect disabled
16/10/2018 -- 17:58:29 - <Info> - Running in live mode, activating unix socket
16/10/2018 -- 17:58:29 - <Config> - pattern matchers: MPM: ac, SPM: bm
16/10/2018 -- 17:58:29 - <Config> - grouping: tcp-whitelist (default) 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
16/10/2018 -- 17:58:29 - <Config> - grouping: udp-whitelist (default) 53, 135, 5060
16/10/2018 -- 17:58:29 - <Config> - prefilter engines: MPM
16/10/2018 -- 17:58:29 - <Config> - IP reputation disabled
16/10/2018 -- 17:58:29 - <Config> - Loading rule file: /usr/local/etc/suricata/rules/test.rules
16/10/2018 -- 17:58:29 - <Info> - 1 rule files processed. 1 rules successfully loaded, 0 rules failed
16/10/2018 -- 17:58:29 - <Info> - Threshold config parsed: 0 rule(s) found
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for tcp-packet
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for tcp-stream
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for udp-packet
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for other-ip
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_uri
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_request_line
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_client_body
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_response_line
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_header
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_header
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_header_names
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_header_names
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_accept
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_accept_enc
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_accept_lang
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_referer
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_connection
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_content_len
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_content_len
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_content_type
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_content_type
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_protocol
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_protocol
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_start
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_start
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_raw_header
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_raw_header
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_method
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_cookie
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_cookie
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_raw_uri
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_user_agent
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_host
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_raw_host
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_stat_msg
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for http_stat_code
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for dns_query
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for tls_sni
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for tls_cert_issuer
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for tls_cert_subject
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for tls_cert_serial
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for dce_stub_data
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for dce_stub_data
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for ssh_protocol
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for ssh_protocol
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for ssh_software
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for ssh_software
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for file_data
16/10/2018 -- 17:58:29 - <Perf> - using shared mpm ctx' for file_data
16/10/2018 -- 17:58:29 - <Info> - 1 signatures processed. 1 are IP-only rules, 0 are inspecting packet payload, 0 inspect application layer, 0 are decoder event only
16/10/2018 -- 17:58:29 - <Config> - building signature grouping structure, stage 1: preprocessing rules... complete
16/10/2018 -- 17:58:29 - <Perf> - TCP toserver: 0 port groups, 0 unique SGH's, 0 copies
16/10/2018 -- 17:58:29 - <Perf> - TCP toclient: 0 port groups, 0 unique SGH's, 0 copies
16/10/2018 -- 17:58:29 - <Perf> - UDP toserver: 0 port groups, 0 unique SGH's, 0 copies
16/10/2018 -- 17:58:29 - <Perf> - UDP toclient: 0 port groups, 0 unique SGH's, 0 copies
16/10/2018 -- 17:58:29 - <Perf> - OTHER toserver: 0 proto groups, 0 unique SGH's, 0 copies
16/10/2018 -- 17:58:29 - <Perf> - OTHER toclient: 0 proto groups, 0 unique SGH's, 0 copies
16/10/2018 -- 17:58:29 - <Perf> - Unique rule groups: 0
16/10/2018 -- 17:58:29 - <Perf> - Builtin MPM "toserver TCP packet": 0
16/10/2018 -- 17:58:29 - <Perf> - Builtin MPM "toclient TCP packet": 0
16/10/2018 -- 17:58:29 - <Perf> - Builtin MPM "toserver TCP stream": 0
16/10/2018 -- 17:58:29 - <Perf> - Builtin MPM "toclient TCP stream": 0
16/10/2018 -- 17:58:29 - <Perf> - Builtin MPM "toserver UDP packet": 0
16/10/2018 -- 17:58:29 - <Perf> - Builtin MPM "toclient UDP packet": 0
16/10/2018 -- 17:58:29 - <Perf> - Builtin MPM "other IP packet": 0
16/10/2018 -- 17:58:29 - <Info> - fast output device (regular) initialized: fast.log
16/10/2018 -- 17:58:29 - <Info> - eve-log output device (regular) initialized: eve.json
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'alert'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'http'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'dns'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'tls'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'files'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'smtp'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'ssh'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'stats'
16/10/2018 -- 17:58:29 - <Config> - enabling 'eve-log' module 'flow'
16/10/2018 -- 17:58:29 - <Info> - stats output device (regular) initialized: stats.log
16/10/2018 -- 17:58:29 - <Config> - Found affinity definition for "management-cpu-set"
16/10/2018 -- 17:58:29 - <Config> - Found affinity definition for "receive-cpu-set"
16/10/2018 -- 17:58:29 - <Config> - Found affinity definition for "worker-cpu-set"
16/10/2018 -- 17:58:29 - <Config> - Using default prio 'low' for set 'worker-cpu-set'
16/10/2018 -- 17:58:29 - <Info> - Found 1 RX RSS queues for 'eth0'
16/10/2018 -- 17:58:29 - <Info> - Found 1 RX RSS queues for 'eth2'
16/10/2018 -- 17:58:29 - <Perf> - Using 1 threads for interface eth0
16/10/2018 -- 17:58:29 - <Info> - Going to use 1 thread(s)
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 2 for thread "W#01-eth0", thread id 5266
16/10/2018 -- 17:58:29 - <Perf> - Enabling zero copy mode for eth0->eth2
16/10/2018 -- 17:58:29 - <Info> - Found 1 RX RSS queues for 'eth2'
16/10/2018 -- 17:58:29 - <Info> - Found 1 RX RSS queues for 'eth0'
16/10/2018 -- 17:58:29 - <Perf> - Using 1 threads for interface eth2
16/10/2018 -- 17:58:29 - <Info> - Going to use 1 thread(s)
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 2 for thread "W#01-eth2", thread id 5267
16/10/2018 -- 17:58:29 - <Perf> - Enabling zero copy mode for eth2->eth0
16/10/2018 -- 17:58:29 - <Config> - using 2 flow manager threads
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 0 for thread "FM#01", thread id 5268
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 0 for thread "FM#02", thread id 5269
16/10/2018 -- 17:58:29 - <Config> - using 2 flow recycler threads
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 0 for thread "FR#01", thread id 5270
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 0 for thread "FR#02", thread id 5271
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 0 for thread "CW", thread id 5272
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 0 for thread "CS", thread id 5273
16/10/2018 -- 17:58:29 - <Info> - Running in live mode, activating unix socket
16/10/2018 -- 17:58:29 - <Info> - Using unix socket file '/usr/local/var/run/suricata/suricata-command.socket'
16/10/2018 -- 17:58:29 - <Perf> - Setting prio 0 for thread "US", thread id 5274
16/10/2018 -- 17:58:29 - <Notice> - all 2 packet processing threads, 6 management threads initialized, engine started.

I also tried increasing the number of RSS queues to 2: Suricata starts, but no traffic is seen at all.
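
Increasing the queues amounts to something like this (the exact commands are my reconstruction, so treat them as an assumption; the i40e driver exposes the channel count through ethtool):

    ethtool -L eth0 combined 2     # two RSS queues on each netmap interface
    ethtool -L eth2 combined 2

With "threads: auto" left in the netmap section above, Suricata should then start one worker per queue.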

Thank you in advance for your help.

