These were over a short period of time. Here are stats after almost one day of running.
I monitor two span ports; one has 10% packet loss, the other 3%.
The same setup on 2.0.7 gives me a packet loss of 0.05% on each monitored span port.

capture.kernel_packets    | RxPFReth020               | 396154912
capture.kernel_drops      | RxPFReth020               | 39804970
dns.memuse                | RxPFReth020               | 4568927
dns.memcap_state          | RxPFReth020               | 0
dns.memcap_global         | RxPFReth020               | 0
decoder.pkts              | RxPFReth020               | 396154912
decoder.bytes             | RxPFReth020               | 243772693515
decoder.invalid           | RxPFReth020               | 34
decoder.ipv4              | RxPFReth020               | 396115296
decoder.ipv6              | RxPFReth020               | 42982
decoder.ethernet          | RxPFReth020               | 396154912
decoder.raw               | RxPFReth020               | 0
decoder.sll               | RxPFReth020               | 0
decoder.tcp               | RxPFReth020               | 352247594
decoder.udp               | RxPFReth020               | 43360111
decoder.sctp              | RxPFReth020               | 36
decoder.icmpv4            | RxPFReth020               | 120913
decoder.icmpv6            | RxPFReth020               | 24441
decoder.ppp               | RxPFReth020               | 0
decoder.pppoe             | RxPFReth020               | 0
decoder.gre               | RxPFReth020               | 0
decoder.vlan              | RxPFReth020               | 0
decoder.vlan_qinq         | RxPFReth020               | 0
decoder.teredo            | RxPFReth020               | 1063
decoder.ipv4_in_ipv6      | RxPFReth020               | 0
decoder.ipv6_in_ipv6      | RxPFReth020               | 0
decoder.mpls              | RxPFReth020               | 0
decoder.avg_pkt_size      | RxPFReth020               | 615
decoder.max_pkt_size      | RxPFReth020               | 1514
defrag.ipv4.fragments     | RxPFReth020               | 4815
defrag.ipv4.reassembled   | RxPFReth020               | 2303
defrag.ipv4.timeouts      | RxPFReth020               | 0
defrag.ipv6.fragments     | RxPFReth020               | 0
defrag.ipv6.reassembled   | RxPFReth020               | 0
defrag.ipv6.timeouts      | RxPFReth020               | 0
defrag.max_frag_hits      | RxPFReth020               | 0
tcp.sessions              | RxPFReth020               | 2374258
tcp.ssn_memcap_drop       | RxPFReth020               | 0
tcp.pseudo                | RxPFReth020               | 582718
tcp.pseudo_failed         | RxPFReth020               | 0
tcp.invalid_checksum      | RxPFReth020               | 0
tcp.no_flow               | RxPFReth020               | 0
tcp.reused_ssn            | RxPFReth020               | 505
tcp.memuse                | RxPFReth020               | 20649552
tcp.syn                   | RxPFReth020               | 2491251
tcp.synack                | RxPFReth020               | 1892253
tcp.rst                   | RxPFReth020               | 1079891
tcp.segment_memcap_drop   | RxPFReth020               | 0
tcp.stream_depth_reached  | RxPFReth020               | 6691
tcp.reassembly_memuse     | RxPFReth020               | 40392320000
tcp.reassembly_gap        | RxPFReth020               | 46171
http.memuse               | RxPFReth020               | 865185241
http.memcap               | RxPFReth020               | 0
detect.alert              | RxPFReth020               | 9562
flow_mgr.closed_pruned    | FlowManagerThread         | 206743007
flow_mgr.new_pruned       | FlowManagerThread         | 28953165
flow_mgr.est_pruned       | FlowManagerThread         | 38698267
flow.memuse               | FlowManagerThread         | 5586600240
flow.spare                | FlowManagerThread         | 16007979
flow.emerg_mode_entered   | FlowManagerThread         | 0
flow.emerg_mode_over      | FlowManagerThread         | 0
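
For what it's worth, those loss percentages are just capture.kernel_drops divided by
capture.kernel_packets per capture thread. A quick Python sketch of that calculation,
assuming the three-column "counter | thread | value" layout shown above (the log path
is only an example):

#!/usr/bin/env python3
"""Rough sketch: per-thread capture loss from a stats.log-style dump."""
from collections import defaultdict

packets = defaultdict(int)
drops = defaultdict(int)

with open("/var/log/suricata/stats.log") as f:   # example path
    for line in f:
        parts = [p.strip() for p in line.split("|")]
        if len(parts) != 3 or not parts[2].isdigit():
            continue  # skip headers, separators and non-numeric lines
        counter, thread, value = parts
        if counter == "capture.kernel_packets":
            packets[thread] = int(value)   # counters are cumulative, last sample wins
        elif counter == "capture.kernel_drops":
            drops[thread] = int(value)

for thread, pkts in sorted(packets.items()):
    if pkts:
        print(f"{thread}: {100.0 * drops[thread] / pkts:.2f}% dropped")

# For RxPFReth020 above: 39804970 / 396154912 is roughly 10.05% dropped.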
> Date: Tue, 5 May 2015 23:49:13 +0200
> Subject: Re: [Oisf-users] Suricata 2.1beta3 vs 2.0.7
> From: petermanev@gmail.com
> To: coolyasha@hotmail.com
> CC: modversion@gmail.com; oisf-users@lists.openinfosecfoundation.org
>
> On Tue, May 5, 2015 at 4:26 PM, Yasha Zislin <coolyasha@hotmail.com> wrote:
> > Here is an example of one of the threads:
> >
> > capture.kernel_packets    | RxPFReth220               | 4438207
> > capture.kernel_drops      | RxPFReth220               | 466880
> > dns.memuse                | RxPFReth220               | 3908544
> > dns.memcap_state          | RxPFReth220               | 0
> > dns.memcap_global         | RxPFReth220               | 0
> > decoder.pkts              | RxPFReth220               | 4438207
> > decoder.bytes             | RxPFReth220               | 3216813731
> > decoder.invalid           | RxPFReth220               | 0
> > decoder.ipv4              | RxPFReth220               | 4438207
> > decoder.ipv6              | RxPFReth220               | 38
> > decoder.ethernet          | RxPFReth220               | 4438207
> > decoder.raw               | RxPFReth220               | 0
> > decoder.sll               | RxPFReth220               | 0
> > decoder.tcp               | RxPFReth220               | 4229782
> > decoder.udp               | RxPFReth220               | 205264
> > decoder.sctp              | RxPFReth220               | 0
> > decoder.icmpv4            | RxPFReth220               | 3161
> > decoder.icmpv6            | RxPFReth220               | 0
> > decoder.ppp               | RxPFReth220               | 0
> > decoder.pppoe             | RxPFReth220               | 0
> > decoder.gre               | RxPFReth220               | 0
> > decoder.vlan              | RxPFReth220               | 0
> > decoder.vlan_qinq         | RxPFReth220               | 0
> > decoder.teredo            | RxPFReth220               | 38
> > decoder.ipv4_in_ipv6      | RxPFReth220               | 0
> > decoder.ipv6_in_ipv6      | RxPFReth220               | 0
> > decoder.mpls              | RxPFReth220               | 0
> > decoder.avg_pkt_size      | RxPFReth220               | 724
> > decoder.max_pkt_size      | RxPFReth220               | 1514
> > defrag.ipv4.fragments     | RxPFReth220               | 0
> > defrag.ipv4.reassembled   | RxPFReth220               | 0
> > defrag.ipv4.timeouts      | RxPFReth220               | 0
> > defrag.ipv6.fragments     | RxPFReth220               | 0
> > defrag.ipv6.reassembled   | RxPFReth220               | 0
> > defrag.ipv6.timeouts      | RxPFReth220               | 0
> > defrag.max_frag_hits      | RxPFReth220               | 0
> > tcp.sessions              | RxPFReth220               | 34053
> > tcp.ssn_memcap_drop       | RxPFReth220               | 0
> > tcp.pseudo                | RxPFReth220               | 11290
> > tcp.pseudo_failed         | RxPFReth220               | 0
> > tcp.invalid_checksum      | RxPFReth220               | 0
> > tcp.no_flow               | RxPFReth220               | 0
> > tcp.reused_ssn            | RxPFReth220               | 7
> > tcp.memuse                | RxPFReth220               | 21511360
> > tcp.syn                   | RxPFReth220               | 37423
> > tcp.synack                | RxPFReth220               | 34159
> > tcp.rst                   | RxPFReth220               | 19061
> > tcp.segment_memcap_drop   | RxPFReth220               | 0
> > tcp.stream_depth_reached  | RxPFReth220               | 100
> > tcp.reassembly_memuse     | RxPFReth220               | 40392320000
> > tcp.reassembly_gap        | RxPFReth220               | 3348
> > http.memuse               | RxPFReth220               | 868151492
> > http.memcap               | RxPFReth220               | 0
> > detect.alert              | RxPFReth220               | 352
> > flow_mgr.closed_pruned    | FlowManagerThread         | 3978049
> > flow_mgr.new_pruned       | FlowManagerThread         | 217874
> > flow_mgr.est_pruned       | FlowManagerThread         | 407013
> > flow.memuse               | FlowManagerThread         | 5589481392
> > flow.spare                | FlowManagerThread         | 16000950
> > flow.emerg_mode_entered   | FlowManagerThread         | 0
> > flow.emerg_mode_over      | FlowManagerThread         | 0
> >
>
> Over what period of time are those stats for? (5 min / 3 hrs?)
>
> >
> >> Date: Mon, 4 May 2015 10:13:23 +0200
> >> Subject: Re: [Oisf-users] Suricata 2.1beta3 vs 2.0.7
> >> From: petermanev@gmail.com
> >> To: coolyasha@hotmail.com
> >> CC: modversion@gmail.com; oisf-users@lists.openinfosecfoundation.org
> >>
> >> On Fri, May 1, 2015 at 9:24 PM, Yasha Zislin <coolyasha@hotmail.com>
> >> wrote:
> >> > I think I've done that before and it was less than 96% of my RAM.
> >> >
> >> > All memcaps together equal 58 gigs (I have 140 gigs total RAM).
> >> > Also, PF_RING utilizes some RAM. When 2.0.7 starts it is using 50% of RAM.
> >> > After a couple of days it gets to 96% and stays there.
> >>
> >> Ok. Anything unusual in the stats.log - decoder invalid counters,
> >> memcaps reached, tcp gaps, emergency mode entered .. ?
> >>
> >> >
> >> >> Date: Fri, 1 May 2015 15:15:31 +0200
> >> >> Subject: Re: [Oisf-users] Suricata 2.1beta3 vs 2.0.7
> >> >> From: petermanev@gmail.com
> >> >> To: coolyasha@hotmail.com
> >> >> CC: modversion@gmail.com; oisf-users@lists.openinfosecfoundation.org
> >> >>
> >> >> On Fri, May 1, 2015 at 3:05 PM, Yasha Zislin <coolyasha@hotmail.com>
> >> >> wrote:
> >> >> > Correct.
> >> >> >
> >> >> > I've also tried a slightly different version of the config to add MODBUS
> >> >> > functionality and change toserver to dp for the ports in the application
> >> >> > layer detection section of the config file. I've basically compared it
> >> >> > against the config that came with the beta version to make sure things
> >> >> > are correct and I am not using deprecated stuff. Either way, the same result.
> >> >> >
> >> >> > It feels like something changed with memory. The beta version is only
> >> >> > using about 40% of RAM but 2.0.7 is using 96%. It could be the reason
> >> >> > for the packet loss on the beta.
> >> >>
> >> >> So is your memcap sum total in your yaml equal to that 40% or to the
> >> >> 96% you are mentioning? (or is that irrelevant?)
> >> >>
> >> >> > Just thinking out loud.
> >> >> >
> >> >> > Thanks.
> >> >> >
> >> >> >> Date: Fri, 1 May 2015 12:10:40 +0200
> >> >> >> Subject: Re: [Oisf-users] Suricata 2.1beta3 vs 2.0.7
> >> >> >> From: petermanev@gmail.com
> >> >> >> To: coolyasha@hotmail.com
> >> >> >> CC: modversion@gmail.com; oisf-users@lists.openinfosecfoundation.org
> >> >> >>
> >> >> >> On Thu, Apr 30, 2015 at 5:13 PM, Yasha Zislin <coolyasha@hotmail.com>
> >> >> >> wrote:
> >> >> >> > I am inspecting two span ports. Each has about 15 million packets
> >> >> >> > per minute, mostly HTTP. Bandwidth is about 2 Gbps on each.
> >> >> >> >
> >> >> >> > I've noticed one new message on startup with the beta version:
> >> >> >> > VLAN disabled, setting cluster type to CLUSTER_FLOW_5_TUPLE
> >> >> >> >
> >> >> >> > Not sure if this has any effect.
> >> >> >> >
> >> >> >> > ________________________________
> >> >> >> > Date: Thu, 30 Apr 2015 23:10:09 +0800
> >> >> >> > Subject: Re: [Oisf-users] Suricata 2.1beta3 vs 2.0.7
> >> >> >> > From: modversion@gmail.com
> >> >> >> > To: coolyasha@hotmail.com
> >> >> >> > CC: oisf-users@lists.openinfosecfoundation.org
> >> >> >> >
> >> >> >> > It seems that 2.0.7 works better than 2.1beta3.
> >> >> >> > What's the bandwidth you protect with Suricata? 10Gbps or 20Gbps?
> >> >> >> >
> >> >> >> > 2015-04-30 23:00 GMT+08:00 Yasha Zislin <coolyasha@hotmail.com>:
> >> >> >> >
> >> >> >> > I have tweaked my configuration to have Suricata 2.0.7 run with
> >> >> >> > minimal packet loss, less than 0.01%.
> >> >> >> > This setup does use a ton of RAM, 95% of 140 GB.
> >> >> >> > As soon as I switch to Suricata 2.1beta3 and run it with the same
> >> >> >> > config, I get 50% packet loss but RAM utilization stays around 50%.
> >> >> >> >
> >> >> >> > What was changed to have such a big impact?
> >> >> >>
> >> >> >> Just to confirm - you are running the same Suricata config and the only
> >> >> >> thing you have changed is Suricata from 2.0.7 to 2.1beta3, correct?
> >> >> >> (nothing else)
> >> >> >>
> >> >> >> >
> >> >> >> > P.S. I am using PF_RING.
> >> >> >> >
> >> >> >> > Thanks.
> >> >> >> >
> >> >> >> > _______________________________________________
> >> >> >> > Suricata IDS Users mailing list: oisf-users@openinfosecfoundation.org
> >> >> >> > Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> >> >> >> > List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> >> >> >> > Suricata User Conference November 4 & 5 in Barcelona: http://oisfevents.net
> >> >> >>
> >> >> >> --
> >> >> >> Regards,
> >> >> >> Peter Manev
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peter Manev
> >>
> >> --
> >> Regards,
> >> Peter Manev
>
> --
> Regards,
> Peter Manev
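
Re Peter's question further up about whether the memcap sum in the yaml matches the
observed RAM use: below is a rough, hypothetical Python sketch of how one could total
the memcap settings. It assumes the config parses as plain YAML with PyYAML and that
memcaps are written as strings like "4gb"; the path and the 140 GB figure are just the
ones from this thread.

#!/usr/bin/env python3
"""Rough sketch: total the memcap settings in suricata.yaml and compare to RAM."""
import yaml  # PyYAML, assumed installed

UNITS = {"kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3}

def to_bytes(value):
    """Convert a Suricata size value such as '4gb', '256mb' or a bare number to bytes."""
    s = str(value).strip().lower()
    for suffix, mult in UNITS.items():
        if s.endswith(suffix):
            return int(float(s[:-len(suffix)]) * mult)
    return int(s)

def sum_memcaps(node):
    """Recursively total every scalar setting whose key ends in 'memcap'."""
    total = 0
    if isinstance(node, dict):
        for key, val in node.items():
            if str(key).endswith("memcap") and not isinstance(val, (dict, list)):
                total += to_bytes(val)
            else:
                total += sum_memcaps(val)
    elif isinstance(node, list):
        for item in node:
            total += sum_memcaps(item)
    return total

if __name__ == "__main__":
    with open("/etc/suricata/suricata.yaml") as f:   # example path
        cfg = yaml.safe_load(f)
    total = sum_memcaps(cfg)
    ram = 140 * 1024 ** 3   # total RAM mentioned in this thread
    print(f"memcap total: {total / 1024 ** 3:.1f} GB "
          f"({100.0 * total / ram:.0f}% of 140 GB)")

Note this only totals the configured ceilings; actual usage (tcp.reassembly_memuse,
flow.memuse, http.memuse in the stats above) can sit well below or right at those caps.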