[Oisf-users] FlowManagerThread idle CPU usage

elof2 at sentor.se
Mon Mar 21 11:05:51 UTC 2016


Should the FlowManagerThread in Suricata really use CPU resources when 
there are zero captured packets?

My sensor is completely silent (zero packets on ix1), but 'top -PSHz' shows 
a constant CPU utilization of 2.69% for the FlowManagerThread.


The box itself is idling. No traffic in or out. No SPAN is sent to it.
So there's just a single suricata process running, doing nothing.
Yet it constantly consumes ~2.7% of one CPU.
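
My naive mental model is that the flow manager doesn't react to packets at 
all, but simply wakes up on a short timer and walks the flow hash looking 
for timed-out flows, whether or not any flows exist. A rough sketch of that 
pattern in C (just my illustration of the idea, not Suricata's actual code; 
the interval is a guess, and the hash size matches my flow.hash-size setting):

/* Hypothetical illustration of a timer-driven flow manager thread. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define HASH_SIZE 1048576          /* same as flow.hash-size below */

static atomic_int running = 1;
static void *flow_hash[HASH_SIZE]; /* all NULL here: no flows tracked */

static void *flow_manager(void *arg)
{
    (void)arg;
    while (atomic_load(&running)) {
        /* walk the whole hash looking for timed-out flows */
        size_t active = 0;
        for (size_t i = 0; i < HASH_SIZE; i++) {
            if (flow_hash[i] != NULL)
                active++;          /* a real manager would expire flows here */
        }
        (void)active;
        usleep(10000);             /* sleep 10 ms and do it all again */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, flow_manager, NULL);
    sleep(10);                     /* watch the thread in top meanwhile */
    atomic_store(&running, 0);
    pthread_join(tid, NULL);
    return 0;
}

If that is roughly what happens, waking every few milliseconds and walking a 
million mostly-empty buckets would easily account for a constant couple of 
percent of one core, even with zero packets. Is that expected behaviour, or 
should the manager be idle when the flow table is empty?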

#top -PSHz
last pid: 16784;  load averages:  0.28,  0.13,  0.09    up 5+18:27:50  11:42:32
191 processes: 9 running, 126 sleeping, 56 waiting
CPU 0:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 1:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 2:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 4:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 5:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 6:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 7:  2.6% user,  0.0% nice,  0.0% system,  0.0% interrupt, 97.4% idle
Mem: 140M Active, 565M Inact, 587M Wired, 509M Buf, 14G Free
Swap: 3598M Total, 3598M Free

   PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
  1185 root        21    0   778M   677M uwait   7 256:53   2.69% suricata{FlowManagerThre}
  1185 root        20    0   778M   677M nanslp  3   5:12   0.00% suricata{suricata}
    12 root       -60    -     0K   896K WAIT    4   3:03   0.00% intr{swi4: clock}
     0 root       -16    0     0K   832K swapin  0   0:46   0.00% kernel{swapper}
    18 root        16    -     0K    16K syncer  4   0:24   0.00% syncer
    14 root       -16    -     0K    16K -       5   0:21   0.00% rand_harvestq
  1146 root        20    0 21868K 13776K select  4   0:08   0.00% ntpd
    12 root       -88    -     0K   896K WAIT    1   0:05   0.00% intr{irq282: xhci0}
  1185 root        20    0   778M   677M uwait   3   0:04   0.00% suricata{StatsWakeupThre}
    15 root       -68    -     0K   192K -       3   0:04   0.00% usb{usbus0}
     5 root       -16    -     0K    16K psleep  2   0:04   0.00% pagedaemon
  1185 root        20    0   778M   677M bpf     5   0:03   0.00% suricata{RxPcapix11}
    12 root       -88    -     0K   896K WAIT    6   0:03   0.00% intr{irq22: ehci1}
    15 root       -68    -     0K   192K -       5   0:03   0.00% usb{usbus2}
  1185 root        20    0   778M   677M uwait   4   0:03   0.00% suricata{FlowRecyclerThr}
  1185 root        20    0   778M   677M uwait   7   0:03   0.00% suricata{StatsMgmtThread}
    12 root       -92    -     0K   896K WAIT    0   0:03   0.00% intr{irq283: igb0:que}
    16 root       -16    -     0K    16K tzpoll  1   0:03   0.00% acpi_thermal
    15 root       -68    -     0K   192K -       3   0:03   0.00% usb{usbus1}
    12 root       -88    -     0K   896K WAIT    5   0:03   0.00% intr{irq16: ehci0}
    15 root       -68    -     0K   192K -       1   0:03   0.00% usb{usbus1}
    15 root       -68    -     0K   192K -       7   0:03   0.00% usb{usbus2}
    15 root       -68    -     0K   192K -       5   0:02   0.00% usb{usbus0}
     8 root        20    -     0K    32K sdflus  5   0:01   0.00% bufdaemon{/ worker}
    12 root       -92    -     0K   896K WAIT    2   0:01   0.00% intr{irq287: igb0:lin}
  1185 root        20    0   778M   677M uwait   0   0:01   0.00% suricata{Detect1}
  1185 root        20    0   778M   677M uwait   1   0:01   0.00% suricata{Detect3}
  1185 root        20    0   778M   677M uwait   5   0:01   0.00% suricata{Detect2}
  1185 root        20    0   778M   677M uwait   2   0:01   0.00% suricata{Detect4}
    37 root        -8    -     0K    32K arc_re  2   0:01   0.00% zfskern{arc_reclaim_thre}
  1185 root        20    0   778M   677M uwait   5   0:01   0.00% suricata{Detect5}
  1185 root        20    0   778M   677M uwait   6   0:01   0.00% suricata{Detect6}
  1185 root        20    0   778M   677M uwait   1   0:01   0.00% suricata{Detect7}
  1185 root        20    0   778M   677M uwait   0   0:01   0.00% suricata{Detect8}
  1185 root        20    0   778M   677M uwait   1   0:01   0.00% suricata{Detect11}
  1185 root        20    0   778M   677M uwait   4   0:01   0.00% suricata{Detect9}
  1185 root        20    0   778M   677M uwait   2   0:01   0.00% suricata{Detect10}
  1185 root        20    0   778M   677M uwait   4   0:01   0.00% suricata{Detect12}
...snip, everything is 0.00%...
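
(Doing the arithmetic on the TIME column: 256:53 is about 15,413 seconds of 
CPU, and the box has been up 5+18:27:50, about 498,470 seconds, so even spread 
over the entire uptime that is roughly 3.1% of one core. In other words this 
looks like a steady drain, not a momentary spike that top happened to catch.)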


/Elof




---

More details:

This is a test-sensor.
I'm running suricata 3.0 RELEASE on FreeBSD 10.1 amd64.
I'm running in NETMAP mode.
suricata.yaml:

%YAML 1.1
---
host-mode: sniffer-only
default-packet-size: 1518
default-log-dir: /var/log/suricata/
unix-command:
   enabled: no
stats:
   enabled: yes
   interval: 8
outputs:
   - fast:
       enabled: yes
       filename: fast.log
       append: yes
   - eve-log:
       enabled: no
       filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
       filename: eve.json
       types:
         - alert:
             xff:
               enabled: no
               mode: extra-data
               deployment: reverse
               header: X-Forwarded-For
         - http:
             extended: yes     # enable this for extended logging information
         - dns
         - tls:
             extended: yes     # enable this for extended logging information
         - files:
             force-magic: no   # force logging magic on all logged files
             force-md5: no     # force logging of md5 checksums
         - smtp:
         - ssh
         - stats:
             totals: yes       # stats for all threads merged together
             threads: no       # per thread stats
             deltas: no        # include delta values
   - unified2-alert:
       enabled: no
       filename: unified2.alert
       xff:
         enabled: no
         mode: extra-data
         deployment: reverse
         header: X-Forwarded-For
   - http-log:
       enabled: no
       filename: http.log
       append: yes
   - tls-log:
       enabled: no  # Log TLS connections.
       filename: tls.log # File to store TLS logs.
       append: yes
   - tls-store:
       enabled: no
   - dns-log:
       enabled: no
       filename: dns.log
       append: yes
   - pcap-log:
       enabled:  no
       filename: log.pcap
       limit: 1000mb
       max-files: 2000
       mode: normal # normal, multi or sguil.
       use-stream-depth: no # If set to "yes", packets seen after reaching stream inspection depth are ignored. "no" logs all packets
       honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stop being logged.
   - alert-debug:
       enabled: no
       filename: alert-debug.log
       append: yes
   - alert-prelude:
       enabled: no
       profile: suricata
       log-packet-content: no
       log-packet-header: yes
   - stats:
       enabled: yes
       filename: stats.log
       interval: 30
       totals: yes       # stats for all threads merged together
       threads: no       # per thread stats
   - syslog:
       enabled: no
       facility: local5
   - drop:
       enabled: no
       filename: drop.log
       append: yes
   - file-store:
       enabled: no       # set to yes to enable
       log-dir: files    # directory to store the files
       force-magic: no   # force logging magic on all stored files
       force-md5: no     # force logging of md5 checksums
   - file-log:
       enabled: no
       filename: files-json.log
       append: yes
       force-magic: no   # force logging magic on all logged files
       force-md5: no     # force logging of md5 checksums
   - tcp-data:
       enabled: no
       type: file
       filename: tcp-data.log
   - http-body-data:
       enabled: no
       type: file
       filename: http-data.log
   - lua:
       enabled: no
       scripts:
magic-file: /usr/share/misc/magic
nfq:
nflog:
   - group: 2
     buffer-size: 18432
   - group: default
     qthreshold: 1
     qtimeout: 100
     max-size: 20000
af-packet:
   - interface: eth0
     threads: auto
     cluster-id: 99
     cluster-type: cluster_flow
     defrag: yes
     use-mmap: yes
   - interface: eth1
     threads: auto
     cluster-id: 98
     cluster-type: cluster_flow
     defrag: yes
   - interface: default
netmap:
  - interface: ix1
    threads: auto
    checksum-checks: no
  - interface: default
legacy:
   uricontent: enabled
detect-engine:
   - profile: high
   - sgh-mpm-context: auto
   - inspection-recursion-limit: 3000
threading:
   set-cpu-affinity: no
   cpu-affinity:
     - management-cpu-set:
         cpu: [ 0 ]  # include only these cpus in affinity settings
     - receive-cpu-set:
         cpu: [ 0 ]  # include only these cpus in affinity settings
     - decode-cpu-set:
         cpu: [ 0, 1 ]
         mode: "balanced"
     - stream-cpu-set:
         cpu: [ "0-1" ]
     - detect-cpu-set:
         cpu: [ "all" ]
         mode: "exclusive" # run detect threads in these cpus
         prio:
           low: [ 0 ]
           medium: [ "1-2" ]
           high: [ 3 ]
           default: "high"
     - verdict-cpu-set:
         cpu: [ 0 ]
         prio:
           default: "high"
     - reject-cpu-set:
         cpu: [ 0 ]
         prio:
           default: "low"
     - output-cpu-set:
         cpu: [ "all" ]
         prio:
            default: "medium"
   detect-thread-ratio: 1.5
cuda:
   mpm:
     data-buffer-size-min-limit: 0
     data-buffer-size-max-limit: 1500
     cudabuffer-buffer-size: 500mb
     gpu-transfer-size: 50mb
     batching-timeout: 2000
     device-id: 0
     cuda-streams: 2
mpm-algo: ac
pattern-matcher:
   - b2g:
       search-algo: B2gSearchBNDMq
       hash-size: low
       bf-size: medium
   - b3g:
       search-algo: B3gSearchBNDMq
       hash-size: low
       bf-size: medium
   - wumanber:
       hash-size: low
       bf-size: medium
defrag:
   memcap: 512mb
   hash-size: 65536
   trackers: 65535 # number of defragmented flows to follow
   max-frags: 65535 # number of fragments to keep (higher than trackers)
   prealloc: yes
   timeout: 60
flow:
   memcap: 640mb
   hash-size: 1048576
   prealloc: 1048576
   emergency-recovery: 30
vlan:
   use-for-tracking: true
flow-timeouts:
   default:
     new: 30
     established: 300
     closed: 0
     emergency-new: 10
     emergency-established: 100
     emergency-closed: 0
   tcp:
     new: 60
     established: 3600
     closed: 120
     emergency-new: 10
     emergency-established: 300
     emergency-closed: 20
   udp:
     new: 30
     established: 300
     emergency-new: 10
     emergency-established: 100
   icmp:
     new: 30
     established: 300
     emergency-new: 10
     emergency-established: 100
stream:
   memcap: 1gb
   checksum-validation: no
   prealloc-sessions: 20000
   inline: no
   reassembly:
     memcap: 2gb
     depth: 1mb                  # reassemble 1mb into a stream
     toserver-chunk-size: 2560
     toclient-chunk-size: 2560
     randomize-chunk-size: yes
host:
   hash-size: 4096
   prealloc: 1000
   memcap: 16777216
logging:
   default-log-level: notice
   default-output-filter:
   outputs:
   - console:
       enabled: yes
   - file:
       enabled: yes
       filename: /var/log/suricata/suricata.log
   - syslog:
       enabled: no
       facility: local5
       format: "[%i] <%d> -- "
mpipe:
   load-balance: dynamic
   iqueue-packets: 2048
   inputs:
   - interface: xgbe2
   - interface: xgbe3
   - interface: xgbe4
   stack:
     size128: 0
     size256: 9
     size512: 0
     size1024: 0
     size1664: 7
     size4096: 0
     size10386: 0
     size16384: 0
pfring:
   - interface: eth0
     threads: 1
     cluster-id: 99
     cluster-type: cluster_flow
   - interface: default
pcap:
   - interface: eth0
   - interface: default
pcap-file:
   checksum-checks: auto
ipfw:
default-rule-path: /usr/local/etc/suricata/rules
rule-files:
  - sentor.rules
  - decoder-events.rules # available in suricata sources under rules dir
  - stream-events.rules  # available in suricata sources under rules dir
  - http-events.rules    # available in suricata sources under rules dir
  - smtp-events.rules    # available in suricata sources under rules dir
  - dns-events.rules     # available in suricata sources under rules dir
  - tls-events.rules     # available in suricata sources under rules dir
  - app-layer-events.rules  # available in suricata sources under rules dir
classification-file: /usr/local/etc/suricata/classification.config
reference-config-file: /usr/local/etc/suricata/reference.config
vars:
   address-groups:
     HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,176.124.224.0/23]"
     EXTERNAL_NET: "any"
     HTTP_SERVERS: "$HOME_NET"
     SMTP_SERVERS: "$HOME_NET"
     SQL_SERVERS: "$HOME_NET"
     DNS_SERVERS: "$HOME_NET"
     TELNET_SERVERS: "$HOME_NET"
     AIM_SERVERS: "$EXTERNAL_NET"
     DNP3_SERVER: "$HOME_NET"
     DNP3_CLIENT: "$HOME_NET"
     MODBUS_CLIENT: "$HOME_NET"
     MODBUS_SERVER: "$HOME_NET"
     ENIP_CLIENT: "$HOME_NET"
     ENIP_SERVER: "$HOME_NET"
     SIP_SERVERS: "$HOME_NET"
   port-groups:
     HTTP_PORTS: "[36,80,81,82,83,84,85,86,87,88,89,90,311,383,555,591,593,631,801,808,818,901,972,1158,1220,1414,1533,1741,1830,1942,2231,2301,2381,2578,2809,2980,3029,3037,3057,3128,3443,3702,4000,4343,4848,5000,5117,5250,5600,6080,6173,6988,7000,7001,7071,7144,7145,7510,7770,7777,7778,7779,8000,8008,8014,8028,8080,8081,8082,8085,8088,8090,8118,8123,8180,8181,8222,8243,8280,8300,8333,8344,8500,8509,8800,8888,8899,8983,9000,9060,9080,9090,9091,9111,9290,9443,9999,10000,10080,11371,12601,13014,15489,29991,33300,34412,34443,34444,41080,44449,50000,50002,51423,53331,55252,55555,56712]"
     FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]"
     SIP_PORTS: "[5060,5061,5600]"
     SHELLCODE_PORTS: "any"
     ORACLE_PORTS: 1521
     SSH_PORTS: 22
     FTP_PORTS: "[21,2100,3535]"
     DNP3_PORTS: 20000
     MODBUS_PORTS: 502
host-os-policy:
   windows: [0.0.0.0/0]
asn1-max-frames: 256
engine-analysis:
   rules-fast-pattern: yes
   rules: yes
pcre:
   match-limit: 3500
   match-limit-recursion: 1500
app-layer:
   protocols:
     tls:
       enabled: yes
       detection-ports:
         dp: 443
     dcerpc:
       enabled: yes
     ftp:
       enabled: yes
     ssh:
       enabled: yes
     smtp:
       enabled: yes
       mime:
         decode-mime: yes
         decode-base64: yes
         decode-quoted-printable: yes
         header-value-depth: 2000
         extract-urls: yes
         body-md5: no
       inspected-tracker:
         content-limit: 1000
         content-inspect-min-size: 1000
         content-inspect-window: 1000
     imap:
       enabled: detection-only
     msn:
       enabled: detection-only
     smb:
       enabled: yes
       detection-ports:
         dp: 139
     modbus:
       enabled: no
       detection-ports:
         dp: 502
     dns:
       global-memcap: 32mb
       state-memcap: 512kb
       tcp:
         enabled: yes
         detection-ports:
           dp: 53
       udp:
         enabled: yes
         detection-ports:
           dp: 53
     http:
       enabled: yes
       memcap: 256mb
       libhtp:
          default-config:
            personality: IDS
            request-body-limit: 100kb
            response-body-limit: 100kb
            request-body-minimal-inspect-size: 32kb
            request-body-inspect-window: 4kb
            response-body-minimal-inspect-size: 40kb
            response-body-inspect-window: 16kb
            http-body-inline: auto
            double-decode-path: no
            double-decode-query: no
          server-config:
profiling:
   rules:
     enabled: yes
     filename: rule_perf.log
     append: yes
     sort: avgticks
     limit: 100
     json: true
   keywords:
     enabled: yes
     filename: keyword_perf.log
     append: yes
   packets:
     enabled: yes
     filename: packet_stats.log
     append: yes
     csv:
       enabled: no
       filename: packet_stats.csv
   locks:
     enabled: no
     filename: lock_stats.log
     append: yes
   pcap-log:
     enabled: no
     filename: pcaplog_stats.log
     append: yes
coredump:
   max-dump: unlimited
napatech:
     hba: -1
     use-all-streams: yes
     streams: [1, 2, 3]

