[Oisf-users] tcp.reassembly_gap

Luke Whitworth l.a.whitworth at gmail.com
Fri Jan 22 09:13:06 UTC 2016


 Hi all,

I'm pretty new to Suricata so please forgive me if I'm about to ask stupid
questions!

I'm hoping to replace some Snort monitors with Suricata.  I've got Suricata
set up in workers mode using pf_ring:

pfring:
  - interface: eth0;eth1
    threads: 24
    cluster-id: 39
    cluster-type: cluster_flow

I'm seeing what I'd hope to see, no kernel drops:

-------------------------------------------------------------------
Date: 1/20/2016 -- 16:08:16 (uptime: 0d, 00h 19m 58s)
-------------------------------------------------------------------
Counter                   | TM Name                   | Value
-------------------------------------------------------------------
capture.kernel_packets    | RxPFReth0;eth11           | 27716639
capture.kernel_drops      | RxPFReth0;eth11           | 0

However, occasionally the Snort monitor (running on the same box) will still
see events that Suricata doesn't, which at the minute I'm putting down to:

tcp.reassembly_gap        | RxPFReth0;eth11           | 408
tcp.reassembly_gap        | RxPFReth0;eth12           | 160
tcp.reassembly_gap        | RxPFReth0;eth13           | 26
tcp.reassembly_gap        | RxPFReth0;eth14           | 32
tcp.reassembly_gap        | RxPFReth0;eth15           | 35
tcp.reassembly_gap        | RxPFReth0;eth16           | 27
tcp.reassembly_gap        | RxPFReth0;eth17           | 18
tcp.reassembly_gap        | RxPFReth0;eth18           | 31
tcp.reassembly_gap        | RxPFReth0;eth19           | 23
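For reference, those per-thread counters can be totalled with a short script (a sketch of my own, assuming the three-column "name | thread | value" layout shown above; not part of any Suricata tooling):

```python
def total_counter(lines, counter="tcp.reassembly_gap"):
    """Sum one stats.log counter across all worker threads.

    Assumes the 'name | thread | value' column layout shown above.
    """
    total = 0
    for line in lines:
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and parts[0] == counter:
            total += int(parts[2])
    return total

# The nine per-thread lines from the stats.log snippet above.
gap_lines = [
    "tcp.reassembly_gap        | RxPFReth0;eth11           | 408",
    "tcp.reassembly_gap        | RxPFReth0;eth12           | 160",
    "tcp.reassembly_gap        | RxPFReth0;eth13           | 26",
    "tcp.reassembly_gap        | RxPFReth0;eth14           | 32",
    "tcp.reassembly_gap        | RxPFReth0;eth15           | 35",
    "tcp.reassembly_gap        | RxPFReth0;eth16           | 27",
    "tcp.reassembly_gap        | RxPFReth0;eth17           | 18",
    "tcp.reassembly_gap        | RxPFReth0;eth18           | 31",
    "tcp.reassembly_gap        | RxPFReth0;eth19           | 23",
]
print(total_counter(gap_lines))  # 760 gaps over ~20 minutes of uptime
```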

Granted, I might be getting the wrong end of the stick here.  (I'm also
unsure why I only see 9 threads in my stats.log when there are 24 running:
20/1/2016 -- 15:48:36 - <Notice> - all 24 packet processing threads, 3
management threads initialized, engine started.)

Other (hopefully) relevant parts of suricata.yaml:

flow:
  # memcap: 128mb
  memcap: 8gb
  # hash-size: 65536
  hash-size: 131072
  # prealloc: 10000
  prealloc: 50000
  emergency-recovery: 30

stream:
  memcap: 12gb
  prealloc-sessions: 2500000  # Added by LW
  # checksum-validation: yes    # reject wrong csums
  checksum-validation: no       # don't reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    #memcap: 128mb
    memcap: 24gb
    depth: 6mb                  # reassemble 6mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes

The box has 32GB of RAM and a couple of Intel® Xeon® X5690 CPUs (hex-core
with two threads per core, hence the 24 threads for pf_ring).
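On that note, it might be worth sanity-checking the configured memcaps against physical RAM (my own arithmetic from the values above; these are the configured caps, not measured usage):

```python
# Configured memcaps from the suricata.yaml fragments above, in GB.
flow_memcap = 8           # flow.memcap: 8gb
stream_memcap = 12        # stream.memcap: 12gb
reassembly_memcap = 24    # stream.reassembly.memcap: 24gb
physical_ram = 32         # the box's RAM, in GB

total = flow_memcap + stream_memcap + reassembly_memcap
print(f"memcaps total {total}GB vs {physical_ram}GB RAM")  # 44GB vs 32GB
```

The caps as written can add up to more than the box has, so the worst case would hit swap before Suricata's own limits do.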

Am I looking along the right lines?  And is expecting tcp.reassembly_gap to
be 0 asking the impossible?

Cheers,

Luke

