<div dir="ltr">
<div class="" style="font-family:-moz-fixed;font-size:12px" lang="x-unicode">Hi all,
<br>
<br>I'm pretty new to Suricata so please forgive me if I'm about to ask 
stupid questions!
<br>
<br>I'm hoping to replace some Snort monitors with Suricata.  I've got 
Suricata set up in workers mode using pf_ring:
<br>
<br>pfring:
<br>  - interface: eth0;eth1
<br>    threads: 24
<br>    cluster-id: 39
<br>    cluster-type: cluster_flow
<br>
<br>I'm seeing what I'd hope to see, no kernel drops:
<br>
<br>-------------------------------------------------------------------
<br>Date: 1/20/2016 -- 16:08:16 (uptime: 0d, 00h 19m 58s)
<br>-------------------------------------------------------------------
<br>Counter                   | TM Name                   | Value
<br>-------------------------------------------------------------------
<br>capture.kernel_packets    | RxPFReth0;eth11           | 27716639
<br>capture.kernel_drops      | RxPFReth0;eth11           | 0
<br>
<br>However, the Snort monitor (running on the same box) will occasionally 
still see events that Suricata doesn't, which at the minute I'm putting 
down to:
<br>
<br>tcp.reassembly_gap        | RxPFReth0;eth11           | 408
<br>tcp.reassembly_gap        | RxPFReth0;eth12           | 160
<br>tcp.reassembly_gap        | RxPFReth0;eth13           | 26
<br>tcp.reassembly_gap        | RxPFReth0;eth14           | 32
<br>tcp.reassembly_gap        | RxPFReth0;eth15           | 35
<br>tcp.reassembly_gap        | RxPFReth0;eth16           | 27
<br>tcp.reassembly_gap        | RxPFReth0;eth17           | 18
<br>tcp.reassembly_gap        | RxPFReth0;eth18           | 31
<br>tcp.reassembly_gap        | RxPFReth0;eth19           | 23
<br>
<br>Granted, I might be getting the wrong end of the stick here (I'm also 
unsure why I only see 9 threads in my stats.log when there are 24 running: 
20/1/2016 -- 15:48:36 - <Notice> - all 24 packet processing threads, 3 
management threads initialized, engine started.).
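<br>
<br>For reference, this is how I counted the threads, by pulling the distinct 
TM names out of stats.log (a rough sketch, assuming the pipe-separated layout 
shown above; the sample lines here are just illustrative copies of the 
counters above, not a real run):

```shell
# List the distinct TM (thread) names appearing in stats.log.
# Sample lines are illustrative, copied from the counters quoted above.
cat > /tmp/stats_sample.log <<'EOF'
capture.kernel_packets    | RxPFReth0;eth11           | 27716639
capture.kernel_drops      | RxPFReth0;eth11           | 0
tcp.reassembly_gap        | RxPFReth0;eth12           | 160
tcp.reassembly_gap        | RxPFReth0;eth13           | 26
EOF

# Field 2 (pipe-separated) is the TM name; strip spaces and dedupe.
awk -F'|' 'NF >= 3 { gsub(/ /, "", $2); print $2 }' /tmp/stats_sample.log | sort -u
```

On the sample above that prints three unique thread names; on my real 
stats.log it only ever shows 9.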
<br>
<br>Other (hopefully) relevant parts of Suricata.yml:
<br>
<br>flow:
<br>  # memcap: 128mb
<br>  memcap: 8gb
<br>  # hash-size: 65536
<br>  hash-size: 131072
<br>  # prealloc: 10000
<br>  prealloc: 50000
<br>  emergency-recovery: 30
<br>
<br>stream:
<br>  memcap: 12gb
<br>  prealloc-sessions: 2500000  # Added by LW
<br>  # checksum-validation: yes      # reject wrong csums
<br>  checksum-validation: no      # don't reject wrong csums
<br>  inline: auto                  # auto uses inline mode in IPS mode; 
yes or no sets it statically
<br>  reassembly:
<br>    #memcap: 128mb
<br>    memcap: 24gb
<br>    depth: 6mb                  # reassemble 6mb into a stream
<br>    toserver-chunk-size: 2560
<br>    toclient-chunk-size: 2560
<br>    randomize-chunk-size: yes
<br>
<br>The box has 32GB of RAM and a couple of Intel® Xeon® X5690 CPUs 
(six cores each, two threads per core, hence the 24 threads for pf_ring).
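<br>
<br>One thing I did notice while writing this up: if all the memcaps above 
were hit at once they'd exceed the box's RAM (a back-of-the-envelope sum; as 
I understand it memcaps are upper limits rather than preallocations, so this 
may be fine, but flagging it in case it matters):

```shell
# Back-of-the-envelope: sum of the memcaps from the config above, in GB.
# flow: 8gb, stream: 12gb, stream.reassembly: 24gb; the box has 32GB RAM.
echo $(( 8 + 12 + 24 ))   # prints 44
```

So 44GB of caps against 32GB of physical RAM, meaning presumably they can't 
all fill simultaneously.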
<br>
<br>Am I looking along the right lines?  And am I expecting the impossible 
in hoping tcp.reassembly_gap will be 0?
<br>
<br>Cheers,
<br>
<br>Luke<br></div>

</div>