[Oisf-users] File Extraction Woes

Jason Batchelor jxbatchelor at gmail.com
Fri May 30 16:31:09 UTC 2014


Hello,

I am having some issues with file extraction in Suricata, and after
attempting many optimizations and reviewing others' experiences, I am
still out of luck. Below is some verbose output of my current
configuration and some sample data after ~12 hours of running. I have
also included a smaller time frame with a few subtle changes for
consideration.

Under this configuration, the rule I have (an IP-based rule, shown below)...

alert http any any -> $MY_LOCAL_IP any (msg:"FILE PDF"; filemagic:"PDF document"; filestore; sid:1; rev:1;)

does not trigger at all when the reassembly memcap is reached. Even when
it does trigger (when reassembly memuse is below the memcap), I get
truncated PDFs. I have tried adjusting things like the reassembly memcap;
however, when I do, I very quickly run into a large amount of packet loss,
because PF_RING cannot free ring slots fast enough to keep up (details
below). Additionally, reassembly memuse seems to slowly increase over
time, eventually reaching the memcap before the number of free ring slots
can finally keep up (presumably due to segments being dropped).

I have also struggled with tuning timeout values, really to no avail
(details below).

FWIW, when I turn HTTP logging on, I do see the websites I visit being
properly logged.

I feel like I must be doing something wrong, or I am missing something
obvious. After reviewing many blogs and how-tos, it seems folks are able
to do what I am trying to accomplish with the same (sometimes higher) data
rates and much less hardware.

I have tried the following:
- Increased min_num_slots to 65534 for PF_RING (load-time example just below)
- Tinkered with TCP timeout settings
- Adjusted reassembly memcap
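
For reference, min_num_slots is a pf_ring kernel module parameter, set at
module load time with something like the following (transparent_mode=0 is
just the default; adjust for your setup):

modprobe pf_ring min_num_slots=65534 transparent_mode=0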

Kindly take a look at the details listed below and let me know if there is
anything you can suggest. I am curious whether I am simply at the limit of
my hardware and need to consider upgrading and/or getting PF_RING with
DNA, or whether there are a few more items I should consider within the
application itself.

One final thing to consider: would TCP sequence randomization
significantly impact things? I would need to get in touch with the folks
responsible to see if we have this turned on, but thought I would ask here
as well!

Many thanks in advance for your time looking at this!

== Profile ==

CentOS 6.5 Linux
Kernel 2.6.32-431.11.2.el6.x86_64

Installed Suricata 2.0 and PF_RING 6.0.1 from source.

Machine sees ~400MB/s at peak load.

== Tuning ==

I've tuned the ixgbe NIC with the following settings...

# disable offloads so frames reach Suricata as they appear on the wire
ethtool -K p4p2 tso off
ethtool -K p4p2 gro off
ethtool -K p4p2 lro off
ethtool -K p4p2 gso off
ethtool -K p4p2 rx off
ethtool -K p4p2 tx off
ethtool -K p4p2 sg off
ethtool -K p4p2 rxvlan off
ethtool -K p4p2 txvlan off
# hash UDP flows on src/dst IP and ports, then print the result to verify
ethtool -N p4p2 rx-flow-hash udp4 sdfn
ethtool -N p4p2 rx-flow-hash udp6 sdfn
ethtool -n p4p2 rx-flow-hash udp6
ethtool -n p4p2 rx-flow-hash udp4
# fixed interrupt coalescing instead of adaptive
ethtool -C p4p2 rx-usecs 1000
ethtool -C p4p2 adaptive-rx off
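
To confirm the offload settings actually stuck, the current values can be
dumped with e.g.:

ethtool -k p4p2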

The NIC is also using the latest driver available. I have also tried to
optimize things in sysctl.conf:

# -- 10gbe tuning from Intel ixgb driver README -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
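
(Applied with the usual sysctl -p after editing /etc/sysctl.conf.)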

== Hardware Specs ==
CPU: Intel Xeon CPU @ 2.40GHz x 32
RAM: 48G
NIC:
  *-network:1
       description: Ethernet interface
       product: Ethernet 10G 2P X520 Adapter
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci at 0000:42:00.1
       logical name: p4p2
       version: 01
       serial: a0:36:9f:07:ec:02
       capacity: 1GB/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list rom
ethernet physical fibre 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=ixgbe
driverversion=3.21.2 duplex=full firmware=0x8000030d latency=0 link=yes
multicast=yes port=fibre promiscuous=yes
       resources: irq:76 memory:d0f00000-d0f7ffff ioport:7ce0(size=32)
memory:d0ffc000-d0ffffff memory:d1100000-d117ffff(prefetchable)
memory:d1380000-d147ffff(prefetchable)
memory:d1480000-d157ffff(prefetchable)

== Suricata Config ==
Below are some details that may be relevant...

runmode: workers

host-mode: sniffer-only

default-packet-size: 9000

- file-store:
    enabled: yes       # set to yes to enable
    log-dir: files    # directory to store the files
    force-magic: yes   # force logging magic on all stored files
    force-md5: yes     # force logging of md5 checksums
    waldo: file.waldo # waldo file to store the file_id across runs
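
With force-magic and force-md5 on, each stored file should appear in the
log dir as file.<id> with a file.<id>.meta record alongside. If I read the
file-store docs right, incomplete extractions are flagged in that meta
file, so something like this should list the truncated ones:

grep -l TRUNCATED files/*.meta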

defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535  # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 30

flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30

flow-timeouts:
  default:
    new: 1
    established: 5
    closed: 0
    emergency-new: 1
    emergency-established: 1
    emergency-closed: 0
  tcp:
    new: 15
    established: 100
    closed: 5
    emergency-new: 1
    emergency-established: 1
    emergency-closed: 0
  udp:
    new: 5
    established: 10
    emergency-new: 1
    emergency-established: 1
  icmp:
    new: 1
    established: 5
    emergency-new: 1
    emergency-established: 1

stream:
  memcap: 10gb
  checksum-validation: no        # reject wrong csums
  prealloc-sessions: 500000
  midstream: false
  async-oneside: false
  inline: no                     # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 1.5gb
    depth: 5mb
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
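
A back-of-envelope check on that memcap (my own arithmetic, assuming most
of the ~400MB/s is TCP that gets reassembled): 1.5gb / 400MB/s ≈ 4 seconds
of traffic, so unless segments are freed almost immediately, reassembly
will hit the memcap, which matches the pegged reassembly_memuse values
below.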

host:
  hash-size: 4096
  prealloc: 1000
  memcap: 16777216


pfring:
  - interface: p4p2
    threads: 16
    cluster-id: 99
    cluster-type: cluster_flow
    checksum-checks: no
  - interface: default
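
(With cluster_flow, PF_RING hashes each flow to one of the 16 rings, one
per worker thread; this lines up with "Total rings : 16" in the PF_RING
section below.)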

http:
   enabled: yes
   libhtp:
      default-config:
        personality: IDS

        # Can be specified in kb, mb, gb.  Just a number indicates
        # it's in bytes.
        request-body-limit: 12mb
        response-body-limit: 12mb
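
One interaction worth flagging (my understanding of how these knobs
relate, not something I have verified): stream.reassembly.depth caps how
far into a flow reassembly goes, so with depth: 5mb the 12mb body limits
above can never be reached, and anything past the first ~5MB of a file
would be lost. Matching them would look like:

stream:
  reassembly:
    depth: 12mb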

== ~12 hours (above config) ==

top - 14:58:59 up 18:23,  3 users,  load average: 6.44, 4.83, 4.32
Tasks: 664 total,   1 running, 663 sleeping,   0 stopped,   0 zombie
Cpu(s): 17.9%us,  0.1%sy,  0.0%ni, 80.3%id,  0.0%wa,  0.0%hi,  1.7%si,  0.0%st
Mem:  49376004k total, 29289768k used, 20086236k free,    68340k buffers
Swap:  2621432k total,        0k used,  2621432k free,   820172k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
17616 root      20   0 27.0g  26g  16g S 621.4 55.8   3532:51 Suricata-Main

watch 'cat /proc/net/pf_ring/*p4p2* | egrep "Num Free Slots|Tot Packets|Tot Pkt Lost"'
; First three threads...
Tot Packets        : 627370957
Tot Pkt Lost       : 3582014
Num Free Slots     : 118705
Tot Packets        : 676753767
Tot Pkt Lost       : 5292092
Num Free Slots     : 118745
Tot Packets        : 665348839
Tot Pkt Lost       : 3841911
Num Free Slots     : 118677
...

watch -n 10 'cat stats.log | egrep "reassembly_memuse|segment_memcap_drop" | tail -n 32'
; First three threads...
tcp.segment_memcap_drop   | RxPFRp4p21                | 25782329
tcp.reassembly_memuse     | RxPFRp4p21                | 1610612705
tcp.segment_memcap_drop   | RxPFRp4p22                | 26161478
tcp.reassembly_memuse     | RxPFRp4p22                | 1610612705
tcp.segment_memcap_drop   | RxPFRp4p23                | 25813867
tcp.reassembly_memuse     | RxPFRp4p23                | 1610612705

grep 'reassembly_gap' stats.log | tail -n 10
tcp.reassembly_gap        | RxPFRp4p27                | 777366
tcp.reassembly_gap        | RxPFRp4p28                | 774896
tcp.reassembly_gap        | RxPFRp4p29                | 781761
tcp.reassembly_gap        | RxPFRp4p210               | 776427
tcp.reassembly_gap        | RxPFRp4p211               | 778734
tcp.reassembly_gap        | RxPFRp4p212               | 773203
tcp.reassembly_gap        | RxPFRp4p213               | 781125
tcp.reassembly_gap        | RxPFRp4p214               | 776043
tcp.reassembly_gap        | RxPFRp4p215               | 781790
tcp.reassembly_gap        | RxPFRp4p216               | 783368

== PF RING ==

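; cat /proc/net/pf_ring/info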
PF_RING Version          : 6.0.1 ($Revision: exported$)
Total rings              : 16

Standard (non DNA) Options
Ring slots               : 65534
Slot version             : 15
Capture TX               : No [RX only]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 0]
Total plugins            : 0
Cluster Fragment Queue   : 9175
Cluster Fragment Discard : 597999

== ~30 min (with changes) ==

FWIW, when I increase the reassembly memcap and timeouts to the following...

flow-timeouts:
  default:
    new: 5
    established: 50
    closed: 0
    emergency-new: 1
    emergency-established: 1
    emergency-closed: 0
  tcp:
    new: 15
    established: 100
    closed: 10
    emergency-new: 1
    emergency-established: 1
    emergency-closed: 0
  udp:
    new: 5
    established: 50
    emergency-new: 1
    emergency-established: 1
  icmp:
    new: 1
    established: 5
    emergency-new: 1
    emergency-established: 1

reassembly:
  memcap: 3gb
  depth: 5mb

These are the results; note that there are no longer any free slots for
PF_RING. I believe this results in increased packet loss, which is likely
causing the truncated files I receive when I pull a PDF.

watch 'cat /proc/net/pf_ring/*p4p2* | egrep "Num Free Slots|Tot Packets|Tot Pkt Lost"'
; First three threads...
Tot Packets        : 80281541
Tot Pkt Lost       : 44290194
Num Free Slots     : 0
Tot Packets        : 81926241
Tot Pkt Lost       : 17412402
Num Free Slots     : 0
Tot Packets        : 80108557
Tot Pkt Lost       : 14667061
Num Free Slots     : 0

watch -n 10 'cat stats.log | egrep "reassembly_memuse|segment_memcap_drop" | tail -n 32'
; First three threads...
tcp.segment_memcap_drop   | RxPFRp4p21                | 0
tcp.reassembly_memuse     | RxPFRp4p21                | 1681598708
tcp.segment_memcap_drop   | RxPFRp4p22                | 0
tcp.reassembly_memuse     | RxPFRp4p22                | 1681626644
tcp.segment_memcap_drop   | RxPFRp4p23                | 0
tcp.reassembly_memuse     | RxPFRp4p23                | 1681597556
tcp.segment_memcap_drop   | RxPFRp4p24                | 0
*** Important to note here: reassembly memuse seems to steadily increase
over time. After a few minutes of running this configuration it has risen
to 2022140776 across the threads. Makes me think segments are not being
freed quickly enough... (timeout/depth issue?)

grep 'reassembly_gap' stats.log | tail -n 10
tcp.reassembly_gap        | RxPFRp4p27                | 27603
tcp.reassembly_gap        | RxPFRp4p28                | 26677
tcp.reassembly_gap        | RxPFRp4p29                | 26869
tcp.reassembly_gap        | RxPFRp4p210               | 25031
tcp.reassembly_gap        | RxPFRp4p211               | 23988
tcp.reassembly_gap        | RxPFRp4p212               | 23809
tcp.reassembly_gap        | RxPFRp4p213               | 26420
tcp.reassembly_gap        | RxPFRp4p214               | 25271
tcp.reassembly_gap        | RxPFRp4p215               | 26285
tcp.reassembly_gap        | RxPFRp4p216               | 26848