[Oisf-users] Clarification on dropped packet counters

Will Metcalf william.metcalf at gmail.com
Wed Aug 10 19:36:14 UTC 2011


> Will, I have a question. Is the number of dropped packets registered when
> Suricata stops independent of the number of packets dropped by memcap_drops
> in stats.log?

Yes

On Wed, Aug 10, 2011 at 2:34 PM, Fernando Ortiz
<fernando.ortiz.f at gmail.com> wrote:
> Will, I have a question. Is the number of dropped packets registered when
> Suricata stops independent of the number of packets dropped by memcap_drops
> in stats.log?
>
> Cheers,
> Fernando
>
> 2011/8/10 Will Metcalf <william.metcalf at gmail.com>
>>
>> Gene,
>>
>> Well, there is an upside and a downside... the upside is that with this
>> patch you should get accurate drop counters in pcap mode. The downside is
>> that the logic was wrong, so you will more than likely see an increase in
>> drops. You can try fiddling with the --pcap-buffer-size option (pcap
>> buffer size in bytes), and also try increasing the max-pending-packets
>> option in suricata.yaml from 50 to 500, or maybe 5000.
>>
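A quick sketch of what that flag takes: --pcap-buffer-size is specified in
bytes, so a 64 MB buffer (an arbitrary example size) would look like the
output below. The snippet only prints the flag rather than invoking Suricata:

```shell
# --pcap-buffer-size takes a value in bytes; compute 64 MB and show the
# flag we would pass on the command line (Suricata itself is not run here).
bufsize=$((64 * 1024 * 1024))
echo "--pcap-buffer-size $bufsize"
```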
>> With regards to the tcp.segment_memcap_drop counter, you have two choices:
>> decrease the amount of data you are performing reassembly on by turning
>> down depth, or turn up the reassembly memcap in the section of
>> suricata.yaml displayed below.
>>
>>
>> stream:
>>  memcap: 33554432              # 32mb
>>  checksum_validation: yes      # reject wrong csums
>>  inline: no                    # no inline mode
>>  reassembly:
>>    memcap: 67108864            # 64mb for reassembly
>>    depth: 1048576              # reassemble 1mb into a stream
>>    toserver_chunk_size: 2560
>>    toclient_chunk_size: 2560
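The memcap values in that snippet are raw byte counts; the "mb" comments can
be verified with plain arithmetic (nothing here is Suricata-specific):

```python
def mb(n):
    """Megabytes to bytes, the unit the memcap/depth values above use."""
    return n * 1024 * 1024

# The comments in the config check out:
print(mb(32), mb(64), mb(1))  # 33554432 67108864 1048576
```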
>>
>> Regards,
>>
>> Will
>>
>> On Tue, Aug 9, 2011 at 10:15 PM, Will Metcalf <william.metcalf at gmail.com>
>> wrote:
>> > Hmmmm.... it looks like once upon a time I screwed up the pcap stats
>> > calculation... Patch on the way...
>> >
>> > Regards,
>> >
>> > Will
>> >
>> > On Tue, Aug 9, 2011 at 8:37 PM, Gene Albin <gene.albin at gmail.com> wrote:
>> >> So it turns out that my CentOS 5.6 server with the default kernel
>> >> network
>> >> settings is not optimal for an IDS connected to a high speed network.
>> >>  One
>> >> of my problems was that the kernel couldn't keep up with the flow of
>> >> traffic.  So I made the following changes to my kernel:
>> >>
>> >> sysctl -w net.core.netdev_max_backlog=10000
>> >>
>> >> sysctl -w net.core.rmem_default=16777216
>> >>
>> >> sysctl -w net.core.rmem_max=33554432
>> >>
>> >> sysctl -w net.ipv4.tcp_mem='194688 259584 389376'
>> >>
>> >> sysctl -w net.ipv4.tcp_rmem='1048576 4194304 33554432'
>> >>
>> >> sysctl -w net.ipv4.tcp_no_metrics_save=1
>> >>
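The same tuning can be applied in one pass; a sketch that only prints the
commands rather than running them (values copied from the list above, and
applying them for real would require root):

```shell
# Print the receive-buffer tuning commands; pipe to `sh` as root to apply.
for kv in \
    net.core.netdev_max_backlog=10000 \
    net.core.rmem_default=16777216 \
    net.core.rmem_max=33554432; do
    echo "sysctl -w $kv"
done
```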
>> >> Now when I run tcpdump I get 0 dropped packets after several minutes,
>> >> and
>> >> after running Suricata for about 15 minutes my suricata.log drops were
>> >> down
>> >> to 3.9%. Much better than the 27% I had been seeing.
>> >> Further, looking at the stats.log file, my tcp.ssn_memcap_drop number
>> >> is at 0 for the same run. Unfortunately the tcp.segment_memcap_drop
>> >> number is still high at 2938343 (out of 14754737 packets).
>> >> So even though I've minimized my drops, I'm still uncertain about the
>> >> metrics listed in my original post.
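For scale, the segment-drop figure quoted above works out to roughly a fifth
of the packets seen (simple arithmetic on the numbers in the message):

```python
# tcp.segment_memcap_drop as a share of total packets, figures from above.
segment_drops = 2938343
packets = 14754737
share = 100 * segment_drops / packets
print(round(share, 1))  # prints 19.9
```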
>> >> Gene
>> >> On Tue, Aug 9, 2011 at 2:38 PM, Gene Albin <gene.albin at gmail.com>
>> >> wrote:
>> >>>
>> >>> I'm trying to make sense out of the various packet metrics in the
>> >>> suricata.log and stats.log files.  Can anyone shed light on what
>> >>> specifically each of these counters is measuring?
>> >>> suricata.log:
>> >>> [4947] 8/8/2011 -- 14:50:31 - (source-pcap.c:561) <Info>
>> >>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Packets 238097983, bytes
>> >>> 182382168249
>> >>> [4947] 8/8/2011 -- 14:50:31 - (source-pcap.c:569) <Info>
>> >>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Pcap Total:539841804
>> >>> Recv:388969943 Drop:150871861 (27.9%).
>> >>> Looking at these two lines from suricata.log, it looks like the pcap
>> >>> engine received a total of 238 million packets AND 388 million
>> >>> packets. Also, notice how the difference between 539M and 388M is
>> >>> 150M, AND the difference between 388M and 238M is also 150M. I
>> >>> checked another set of suricata.log and stats.log files I have and
>> >>> found that the relationship between Recv and Drop, and between
>> >>> Packets and Drop, appears to be the same in that file.
>> >>> What specifically is each of these metrics measuring, and from where
>> >>> are the measurements taken (NIC, pcap, Suricata)?
>> >>> What is the relationship between these numbers?
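The relationship observed above can be checked directly against the quoted
counters: Total = Recv + Drop holds exactly, and Recv - Packets matches Drop
to within a rounding-sized remainder:

```python
# Counters from the suricata.log excerpt above.
total, recv, drop = 539841804, 388969943, 150871861
packets = 238097983

assert total - recv == drop                # Pcap Total = Recv + Drop
assert abs((recv - packets) - drop) < 100  # Recv - Packets ~= Drop
print(round(100 * drop / total, 1))  # prints 27.9, matching the log
```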
>> >>> Stats.log:
>> >>> decoder.pkts              | Decode & Stream   | 238097982
>> >>> tcp.ssn_memcap_drop       | Decode & Stream   | 299435
>> >>> tcp.segment_memcap_drop   | Decode & Stream   | 31445861
>> >>> In stats.log the decoder.pkts line matches up with the (ReceivePcap)
>> >>> Packets line in the suricata.log file. What about these memcap drop
>> >>> lines? They don't seem to match up with the drop counter in
>> >>> suricata.log, leading me to believe that these are packets dropped
>> >>> by Suricata and are independent of the ones in the suricata.log file.
>> >>> Sure would appreciate any insight into the differences between these
>> >>> metrics.  I'm just a bit confused.
>> >>> Thanks,
>> >>> --
>> >>> Gene Albin
>> >>> gene.albin at gmail.com
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Gene Albin
>> >> gene.albin at gmail.com
>> >>
>> >>
>> >> _______________________________________________
>> >> Oisf-users mailing list
>> >> Oisf-users at openinfosecfoundation.org
>> >> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>> >>
>> >>
>> >
>>
>>
>
>
>
> --
> Fernando Ortiz
> Twitter: http://twitter.com/FernandOrtizF
>
>


