[Oisf-users] tcp.reassembly_gap

Luke Whitworth l.a.whitworth at gmail.com
Thu Jan 28 14:48:14 UTC 2016


Hi Victor,

Cheers for the response.  Due to the relatively open nature of the network
in question (a university) I can't unequivocally say there's no Teredo on
the network.  I'll see if I can use our Argus logs to confirm or deny what
Suricata is seeing.
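For context on why the teredo counter can mislead: the decoder has to guess whether a UDP payload is tunnelled IPv6, and anything that happens to parse as IPv6 looks like Teredo. A rough sketch of that kind of heuristic (illustrative only, not Suricata's actual code):

```python
def looks_like_teredo(udp_payload: bytes) -> bool:
    """Rough sketch of a Teredo guess (illustrative, not Suricata's code).

    Teredo (RFC 4380) tunnels plain IPv6 packets inside UDP, so a
    guessing decoder treats any UDP payload that parses as IPv6 as
    possible Teredo -- which is exactly where false positives come from.
    """
    IPV6_HEADER_LEN = 40
    if len(udp_payload) < IPV6_HEADER_LEN:
        return False
    version = udp_payload[0] >> 4                     # top nibble of byte 0
    payload_length = int.from_bytes(udp_payload[4:6], "big")
    # Version must be 6 and the embedded length must fit in the datagram.
    return version == 6 and IPV6_HEADER_LEN + payload_length <= len(udp_payload)
```

Random UDP traffic whose first byte starts with a 0x6 nibble can trip a check like this, which would fit Victor's observation about false positives below.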

I'll up the DNS memcap, turn on the decoder-events rules, and see what we
can see.
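In suricata.yaml terms, the two changes would look roughly like this (a sketch only; option names as I understand the Suricata config of this era, and paths/values are illustrative, so adjust to your install):

```yaml
# suricata.yaml -- illustrative fragment
rule-files:
  - decoder-events.rules    # temporarily enable, to see why packets count as 'invalid'

app-layer:
  protocols:
    dns:
      global-memcap: 32mb   # raise from the default; a nonzero dns.memcap_global means it was hit
```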

Luke

On 27 January 2016 at 11:54, Victor Julien <lists at inliniac.net> wrote:

> On 26-01-16 12:33, Luke Whitworth wrote:
> > -------------------------------------------------------------------
> > Counter                   | TM Name                   | Value
> > -------------------------------------------------------------------
> > capture.kernel_packets    | Total                     | 682560755
> > capture.kernel_drops      | Total                     | 58551
> > decoder.pkts              | Total                     | 682631014
> > decoder.bytes             | Total                     | 560340398074
> > decoder.invalid           | Total                     | 16
>
> It could be interesting to figure out why there are 'invalid' packets.
> To do so, temporarily enable the decoder-events.rules.
>
>
> > decoder.ipv4              | Total                     | 682297726
> > decoder.ipv6              | Total                     | 2707083
> > decoder.ethernet          | Total                     | 682631014
> > decoder.raw               | Total                     | 0
> > decoder.null              | Total                     | 0
> > decoder.sll               | Total                     | 0
> > decoder.tcp               | Total                     | 615528819
> > decoder.udp               | Total                     | 66707900
> > decoder.sctp              | Total                     | 0
> > decoder.icmpv4            | Total                     | 340969
> > decoder.icmpv6            | Total                     | 40166
> > decoder.ppp               | Total                     | 1847
> > decoder.pppoe             | Total                     | 0
> > decoder.gre               | Total                     | 1847
> > decoder.vlan              | Total                     | 0
> > decoder.vlan_qinq         | Total                     | 0
> > decoder.teredo            | Total                     | 1402773
>
> Eric, I think we need to have a toggle to disable this decoder. I've
> seen it false positive quite a few times already.
>
> Luke, do you have teredo on your network? The parser 'guesses', but is
> known to FP in some cases.
>
>
> > decoder.ipv4_in_ipv6      | Total                     | 0
> > decoder.ipv6_in_ipv6      | Total                     | 0
> > decoder.mpls              | Total                     | 0
> > decoder.avg_pkt_size      | Total                     | 820
> > decoder.max_pkt_size      | Total                     | 1514
> > decoder.erspan            | Total                     | 0
> > flow.memcap               | Total                     | 0
> > defrag.ipv4.fragments     | Total                     | 11194
> > defrag.ipv4.reassembled   | Total                     | 5420
> > defrag.ipv4.timeouts      | Total                     | 0
> > defrag.ipv6.fragments     | Total                     | 663
> > defrag.ipv6.reassembled   | Total                     | 314
> > defrag.ipv6.timeouts      | Total                     | 0
> > defrag.max_frag_hits      | Total                     | 0
> > tcp.sessions              | Total                     | 3176947
> > tcp.ssn_memcap_drop       | Total                     | 0
> > tcp.pseudo                | Total                     | 880612
> > tcp.pseudo_failed         | Total                     | 0
> > tcp.invalid_checksum      | Total                     | 3089
>
> Might be interesting to figure out why the bad csums? Again,
> decoder-events.rules contains rules to alert on this so that you can
> inspect what's happening.
>
>
> > tcp.no_flow               | Total                     | 0
> > tcp.syn                   | Total                     | 3678418
> > tcp.synack                | Total                     | 3391449
> > tcp.rst                   | Total                     | 2451453
> > tcp.segment_memcap_drop   | Total                     | 0
> > tcp.stream_depth_reached  | Total                     | 22919
> > tcp.reassembly_gap        | Total                     | 316061
> > detect.alert              | Total                     | 33
> > flow_mgr.closed_pruned    | Total                     | 2579970
> > flow_mgr.new_pruned       | Total                     | 912247
> > flow_mgr.est_pruned       | Total                     | 2167370
> > flow.spare                | Total                     | 50481
> > flow.emerg_mode_entered   | Total                     | 0
> > flow.emerg_mode_over      | Total                     | 0
> > flow.tcp_reuse            | Total                     | 98282
> > tcp.memuse                | Total                     | 33052864
> > tcp.reassembly_memuse     | Total                     | 2067435230
> > dns.memuse                | Total                     | 16785488
> > dns.memcap_state          | Total                     | 0
> > dns.memcap_global         | Total                     | 2703798
>
> Looks like the DNS memcap is hit quite often. You could consider upping it.
>
> --
> ---------------------------------------------
> Victor Julien
> http://www.inliniac.net/
> PGP: http://www.inliniac.net/victorjulien.asc
> ---------------------------------------------
>
>