[Oisf-users] Questions about stats and packet drops

Cooper F. Nelson cnelson at ucsd.edu
Wed Jan 7 15:52:30 UTC 2015


On 1/7/2015 5:44 AM, Jose Vila wrote:
> Thanks Cooper for your reply.
> I've added more cores, reducing the drop rate below 2%. Can't add BPF
> filters as the network is heterogeneous and I want to catch as much
> traffic as possible, despite its src/dst port (I have detected some
> webservices in weird ports).
> I still have the same questions I posted in my first mail:
> * What exactly does "tcp.reassembly_memuse" mean, and in which units is
> it measured? If it's measured in bytes, I'm getting more than 18 exabytes
> of memory usage !!!

It's how much memory Suricata is using for reassembling TCP streams.  It
should be in bytes; I'm seeing numbers like this:

tcp.reassembly_memuse     | AFPacketeth21             | 824201958
tcp.reassembly_memuse     | AFPacketeth22             | 824201958

...which I believe is in bytes.
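For scale, the counter above works out to just under 800 MiB, which is a plausible memuse figure. And 18 exabytes is suspiciously close to 2^64 bytes (about 18.4 EB), so a wrapped 64-bit counter would be one possible explanation for the huge value you're seeing; that's speculation on my part. A quick conversion check (plain Python, nothing Suricata-specific):

```python
# Convert a tcp.reassembly_memuse counter (assumed to be bytes) to MiB.
def memuse_mib(counter_bytes):
    return counter_bytes / (1024 * 1024)

# The value from the stats lines above:
print(round(memuse_mib(824201958), 1))  # ~786 MiB
```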

> * I believe "tcp.segment_memcap_drop" means packets received by suricata
> (thus counted in "capture.kernel_packets") but couldn't get to the
> (stream or reassembly?) processor for further treatment. Which processor
> is the right one? How can I reduce its value?

I think that is right; my counters are all zeroes.  If you are seeing
drops here, I suspect you should use larger ring and socket buffers for
your worker threads.  Here are my settings:

>     ring-size: 500000
>     buffer-size: 1048576
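For context, those two options sit in the af-packet capture section of suricata.yaml (that's what my stats above use; you're on PF_RING, so treat this as a sketch only, and the interface name and cluster settings here are illustrative, not from your setup):

```yaml
# suricata.yaml -- af-packet capture section (sketch; adjust to your NICs)
af-packet:
  - interface: eth2          # illustrative interface name
    threads: 12              # one worker thread per core
    cluster-type: cluster_flow
    ring-size: 500000        # per-thread ring slots; more slots absorb bursts
    buffer-size: 1048576     # socket buffer in bytes (1 MiB)
```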

> * I believe "tcp.stream_depth_reached" gets incremented each time the
> "stream.reassembly.depth" is reached, but no packets are dropped here,
> they are passed to other processors for further inspection without being
> reassembled. Is this right?

Nope.  "tcp.stream_depth_reached" means you have tracked a stream to the
maximum depth and are not inspecting any further packets on that flow.
This is set by the 'depth:' directive in the "reassembly:" subsection
under "stream:".
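Concretely, that's this part of suricata.yaml (a sketch using the 8mb value from your own config; the comment is my reading of the counter, as described above):

```yaml
# suricata.yaml -- stream section (sketch)
stream:
  reassembly:
    depth: 8mb    # per-flow reassembly limit; tcp.stream_depth_reached
                  # increments when a flow hits this cap
```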

> * What exactly does "tcp.reassembly_gap" mean?

I think it means that TCP segments were missing when reassembling a
stream, but I may be wrong about that.

> Thank you very much,
> Regards,
> Jose Vila.
> On Sun, Jan 4, 2015 at 4:57 PM, Cooper F. Nelson <cnelson at ucsd.edu> wrote:
> Couple things you could try.
> 1.  Use all available cores (12 workers threads).
> 2.  Use a bpf filter to only monitor ports 80 and 53
> On 12/24/2014 12:37 AM, Jose Vila wrote:
>> Hi,
>> I'm playing around with Suricata, and want to reduce the number of drops.
>> I have 1000Mbits/s traffic and a server with 12 cores and 12GB of RAM.
>> The objective of this sensor is to get HTTP and DNS logging and it only
>> has a bunch of very simple rules for file extraction.
>> I'm using PF_RING, and recently switched to "workers" runmode, which
>> reduced my packet drop rate (capture.kernel_drop statistic) to around
>> 5-6% with 6 worker threads.
>> My memcaps are:
>> defrag.memcap = 32mb
>> flow.memcap = 256mb
>> stream.memcap = 7gb
>> stream.reassembly.memcap = 3gb
>> stream.reassembly.depth = 8mb
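For anyone following along, those memcaps map onto suricata.yaml like so (a sketch of the standard layout, using the values quoted above):

```yaml
# suricata.yaml equivalents of the memcaps quoted above (sketch)
defrag:
  memcap: 32mb
flow:
  memcap: 256mb
stream:
  memcap: 7gb
  reassembly:
    memcap: 3gb
    depth: 8mb
```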

-- 
Cooper Nelson
Network Security Analyst
UCSD ACT Security Team
cnelson at ucsd.edu x41042

