<div dir="ltr"><div><div>Would certainly appear that we have teredo running on the network:<br><br>[1:2200077:1] SURICATA TCPv6 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 2001:0000:9d38:90d7:3494:2189:d8d0:fc92:47288 -> 2001:0000:9d38:6abd:143c:2073:7505:fb5d:60002<br><br></div>In terms of what might be classed as invalid packets I'm seeing some TCP header length too small, TCP invalid option length, TCP option invalid length, UDP packet too small. Continuing to dig into things.<br><br></div>Luke<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 28 January 2016 at 14:48, Luke Whitworth <span dir="ltr"><<a href="mailto:l.a.whitworth@gmail.com" target="_blank">l.a.whitworth@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hi Victor,<br></div><br>Cheers for the response. Due to the relatively open nature of the network in question (University) I can't unequivocally say there's no teredo on the network. I'll see if I can use our Argus logs to confirm or deny what suricata is seeing.<br><br></div>I'll up the DNS memcap and turn on the decoder-events and see what we can see.<span class="HOEnZb"><font color="#888888"><br><br></font></span></div><span class="HOEnZb"><font color="#888888">Luke<br></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 27 January 2016 at 11:54, Victor Julien <span dir="ltr"><<a href="mailto:lists@inliniac.net" target="_blank">lists@inliniac.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>On 26-01-16 12:33, Luke Whitworth wrote:<br>
> -------------------------------------------------------------------
> Counter | TM Name | Value
> -------------------------------------------------------------------
> capture.kernel_packets | Total | 682560755
> capture.kernel_drops | Total | 58551
> decoder.pkts | Total | 682631014
> decoder.bytes | Total | 560340398074
> decoder.invalid | Total | 16

It could be interesting to figure out why there are 'invalid' packets.
To do so, temporarily enable the decoder-events.rules.
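
(For reference, decoder-events.rules ships with the Suricata rule set and is switched on by listing it under rule-files in suricata.yaml. A minimal sketch, assuming a default install layout; the path and the other rule-file names here are illustrative:

  # suricata.yaml (sketch), keep whatever rule-files entries you already use
  default-rule-path: /etc/suricata/rules

  rule-files:
    - suricata.rules        # illustrative placeholder for existing rules
    - decoder-events.rules  # temporarily enable to alert on packet decode errors

After a restart or rule reload the decode events show up as ordinary alerts, like the SURICATA TCPv6 invalid checksum one quoted at the top of this thread.)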

> decoder.ipv4 | Total | 682297726
> decoder.ipv6 | Total | 2707083
> decoder.ethernet | Total | 682631014
> decoder.raw | Total | 0
> decoder.null | Total | 0
> decoder.sll | Total | 0
> decoder.tcp | Total | 615528819
> decoder.udp | Total | 66707900
> decoder.sctp | Total | 0
> decoder.icmpv4 | Total | 340969
> decoder.icmpv6 | Total | 40166
> decoder.ppp | Total | 1847
> decoder.pppoe | Total | 0
> decoder.gre | Total | 1847
> decoder.vlan | Total | 0
> decoder.vlan_qinq | Total | 0
> decoder.teredo | Total | 1402773

Eric, I think we need to have a toggle to disable this decoder. I've
seen it false positive quite a few times already.

Luke, do you have Teredo on your network? The parser 'guesses', but is
known to FP in some cases.
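
(As an aside: later Suricata releases did add this kind of toggle. In those newer versions the decoder can be disabled, or limited to the standard Teredo port instead of guessing on all UDP traffic, via suricata.yaml roughly as sketched below; this does not apply to the release discussed in this thread:

  # suricata.yaml, newer releases only (sketch)
  decoder:
    teredo:
      enabled: false   # turn the Teredo decoder off entirely
      # ports: 3544    # or keep it enabled and restrict guessing to UDP/3544
)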

> decoder.ipv4_in_ipv6 | Total | 0
> decoder.ipv6_in_ipv6 | Total | 0
> decoder.mpls | Total | 0
> decoder.avg_pkt_size | Total | 820
> decoder.max_pkt_size | Total | 1514
> decoder.erspan | Total | 0
> flow.memcap | Total | 0
> defrag.ipv4.fragments | Total | 11194
> defrag.ipv4.reassembled | Total | 5420
> defrag.ipv4.timeouts | Total | 0
> defrag.ipv6.fragments | Total | 663
> defrag.ipv6.reassembled | Total | 314
> defrag.ipv6.timeouts | Total | 0
> defrag.max_frag_hits | Total | 0
> tcp.sessions | Total | 3176947
> tcp.ssn_memcap_drop | Total | 0
> tcp.pseudo | Total | 880612
> tcp.pseudo_failed | Total | 0
> tcp.invalid_checksum | Total | 3089

It might be interesting to figure out why the checksums are bad. Again,
decoder-events.rules contains rules to alert on this so that you can
inspect what's happening.
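
(If the bad checksums turn out to be an artifact of NIC checksum offloading on the capture box rather than genuinely corrupt traffic, the relevant suricata.yaml knobs are sketched below; the interface name and values are purely illustrative:

  # suricata.yaml (sketch), checksum handling
  af-packet:
    - interface: eth0          # illustrative capture interface
      checksum-checks: kernel  # kernel/yes/no/auto: how checksums are verified at capture time

  stream:
    checksum-validation: yes   # exclude segments with invalid checksums from reassembly
)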

> tcp.no_flow | Total | 0
> tcp.syn | Total | 3678418
> tcp.synack | Total | 3391449
> tcp.rst | Total | 2451453
> tcp.segment_memcap_drop | Total | 0
> tcp.stream_depth_reached | Total | 22919
> tcp.reassembly_gap | Total | 316061
> detect.alert | Total | 33
> flow_mgr.closed_pruned | Total | 2579970
> flow_mgr.new_pruned | Total | 912247
> flow_mgr.est_pruned | Total | 2167370
> flow.spare | Total | 50481
> flow.emerg_mode_entered | Total | 0
> flow.emerg_mode_over | Total | 0
> flow.tcp_reuse | Total | 98282
> tcp.memuse | Total | 33052864
> tcp.reassembly_memuse | Total | 2067435230
> dns.memuse | Total | 16785488
> dns.memcap_state | Total | 0
> dns.memcap_global | Total | 2703798

Looks like the DNS memcap is hit quite often. You could consider upping it.
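
(For reference, dns.memuse above is sitting right at the 16mb default, which matches the dns.memcap_global counter climbing. The cap lives in the app-layer section of suricata.yaml; a sketch, with the sizes purely illustrative:

  # suricata.yaml (sketch), DNS parser memcaps
  app-layer:
    protocols:
      dns:
        global-memcap: 64mb   # default 16mb; dns.memcap_global increments when this is hit
        state-memcap: 512kb   # per-flow cap; dns.memcap_state increments when this is hit
)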

--
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------