On 28 Mar 2016, at 19:19, Yasha Zislin <coolyasha@hotmail.com> wrote:

<div dir="ltr"><div>detect-engine:</div><div>  - profile: custom</div><div>  - custom-values:</div><div>      toclient-src-groups: 200</div><div>      toclient-dst-groups: 200</div><div>      toclient-sp-groups: 200</div><div>      toclient-dp-groups: 300</div><div>      toserver-src-groups: 200</div><div>      toserver-dst-groups: 400</div><div>      toserver-sp-groups: 200</div><div>      toserver-dp-groups: 250</div><div>  - sgh-mpm-context: auto</div><div>  - inspection-recursion-limit: 3000</div><div>  # When rule-reload is enabled, sending a USR2 signal to the Suricata process</div><div>  # will trigger a live rule reload. Experimental feature, use with care.</div><div>  - rule-reload: true</div><div><br></div></div></div></blockquote><div><br></div><div>Can you please try adjusting the settings (including the yaml section itself) as per the link and recommendations I have mentioned in my previous mail.</div><div>The above  quoted part of your suricata.yaml section do not follow the dev-detect-grouping standard/change.</div><br><blockquote type="cite"><div><div dir="ltr">For pf_ring, I am using 6.3.0. On this specific sensor, I only have one interface. <div><div>Here is suricata.yaml config.</div><div>- interface: bond0</div><div>    # Number of receive threads (>1 will enable experimental flow pinned</div><div>    # runmode)</div><div>    threads: 2</div><div><br></div><div>    # Default clusterid.  PF_RING will load balance packets based on flow.</div><div>    # All threads/processes that will participate need to have the same</div><div>    # clusterid.</div><div>    cluster-id: 99</div><div><br></div></div></div></div></blockquote><blockquote type="cite"><div><div dir="ltr"><div><div>    # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.</div><div>    # This is only supported in versions of PF_RING > 4.1.1.</div><div>    cluster-type: cluster_flow</div><div><br></div></div></div></div></blockquote><div><br></div><div>What is the max pending packets value you have in yaml ?</div><div><br></div><div>Thanks</div><br><blockquote type="cite"><div><div dir="ltr"><div><div>Here are pf_ring settings that I use in startup script.</div><div><br></div><div><div>        ethtool -K bond0 rx off</div><div>        ethtool -K bond0 tx off</div><div>        ethtool -K bond0 gso off</div><div>        ethtool -K bond0 gro off</div><div><br></div><div>        ethtool -C bond0 rx-usecs 500</div><div>        ethtool -G bond0 rx 4078</div><div><br></div><div>        ifconfig bond0 promisc</div><div><br></div><div>        rmmod pf_ring</div><div>        modprobe pf_ring transparent_mode=0 min_num_slots=65576 enable_tx_capture=0</div></div><div><br></div><div><hr id="stopSpelling">CC: <a href="mailto:oisf-users@lists.openinfosecfoundation.org">oisf-users@lists.openinfosecfoundation.org</a><br>From: <a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a><br>Subject: Re: [Oisf-users] troubleshooting packet loss<br>Date: Mon, 28 Mar 2016 18:47:56 +0200<br>To: <a href="mailto:coolyasha@hotmail.com">coolyasha@hotmail.com</a><br><br><div><br></div><div>On 28 mars 2016, at 14:30, Yasha Zislin <<a href="mailto:coolyasha@hotmail.com">coolyasha@hotmail.com</a>> wrote:<br><br></div><blockquote><div>

<div dir="ltr">I have about 2 million packets traversing per minute over 10 gig</div></div></blockquote><blockquote><div><div dir="ltr">pipe. Averaging about 500mbits in traffic.</div></div></blockquote><blockquote><div><div dir="ltr"><div>Traffic is mostly LDAP, NTP, Syslog. Not really much of HTTP/S.</div><div><br></div><div>That packet loss starts instantly. In a few minutes, you would see it. After leaving it running over the weekend, I have 19% packet loss.</div><div><br></div><div>BTW, I am using dev v190 branch for this already.</div></div></div></blockquote><div><br></div><div>What does your "detect:" section look like in suricata.yaml?</div><div><br></div><div>You can use the section here as a reference -</div><div><a href="https://github.com/inliniac/suricata/blob/dev-detect-grouping-v193/suricata.yaml.in#L594" target="_blank">https://github.com/inliniac/suricata/blob/dev-detect-grouping-v193/suricata.yaml.in#L594</a></div><div>(Please note the naming and spacing)</div><div><br></div><div><br></div><div>Are you pfring buffers full ? (Or how are they configured )</div><div>How does the config section look like for pfring for the two interfaces you have?</div><div><br></div><br><blockquote><div><div dir="ltr"><div><br></div><div>Thanks.<br><br><div>> Date: Sun, 27 Mar 2016 12:00:22 +0200<br>> Subject: Re: [Oisf-users] troubleshooting packet loss<br>> From: <a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a><br>> To: <a href="mailto:coolyasha@hotmail.com">coolyasha@hotmail.com</a><br>> CC: <a href="mailto:oisf-users@lists.openinfosecfoundation.org">oisf-users@lists.openinfosecfoundation.org</a><br>> <br>> On Thu, Mar 24, 2016 at 7:02 PM, Yasha Zislin <<a href="mailto:coolyasha@hotmail.com">coolyasha@hotmail.com</a>> wrote:<br>> > I am trying to figure out where the packet loss is coming from on one of my<br>> > Suricata 3.0 sensor.<br>> > The only thing that I see weird from stats.log is that<br>> > tpc.stream_depth_reached  and tcp.reassembly_gap is somewhat high.<br>> > I am using latest PF_RING and monitoring one interface with 4 threads.<br>> > 4 logical CPUs with 16 gigs of RAM. 66% of RAM is used.<br>> <br>> What traffic speeds are those on? 

Thanks

> Here are the pf_ring settings that I use in the startup script:
>
>         ethtool -K bond0 rx off
>         ethtool -K bond0 tx off
>         ethtool -K bond0 gso off
>         ethtool -K bond0 gro off
>
>         ethtool -C bond0 rx-usecs 500
>         ethtool -G bond0 rx 4078
>
>         ifconfig bond0 promisc
>
>         rmmod pf_ring
>         modprobe pf_ring transparent_mode=0 min_num_slots=65576 enable_tx_capture=0
>
> CC: oisf-users@lists.openinfosecfoundation.org
> From: petermanev@gmail.com
> Subject: Re: [Oisf-users] troubleshooting packet loss
> Date: Mon, 28 Mar 2016 18:47:56 +0200
> To: coolyasha@hotmail.com
>
> On 28 Mar 2016, at 14:30, Yasha Zislin <coolyasha@hotmail.com> wrote:
>
> > I have about 2 million packets per minute traversing a 10 gig pipe, averaging about 500 Mbit/s of traffic.
> > Traffic is mostly LDAP, NTP, and Syslog; not really much HTTP/S.
> >
> > The packet loss starts instantly; you see it within a few minutes. After leaving it running over the weekend, I have 19% packet loss.
> >
> > BTW, I am already using the dev v190 branch for this.
>
> What does your "detect:" section look like in suricata.yaml?
>
> You can use the section here as a reference:
> https://github.com/inliniac/suricata/blob/dev-detect-grouping-v193/suricata.yaml.in#L594
> (Please note the naming and spacing.)
>
> Are your pf_ring buffers full? (Or how are they configured?)
> What does the pf_ring config section look like for the two interfaces you have?
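To answer the buffer question concretely: pf_ring exposes per-socket statistics under /proc/net/pf_ring/, and a free-slot count near zero there suggests the ring is overflowing. A rough sketch (the file names vary by pid and interface, and the field names are from memory, so treat this as a pointer rather than an exact recipe):

  # Ring-wide info: slot count, pf_ring version, etc.
  cat /proc/net/pf_ring/info
  # Per-socket counters; "Num Free Slots" near zero and a growing
  # "Tot Pkt Lost" indicate the ring cannot keep up
  grep -H -e "Free Slots" -e "Lost" /proc/net/pf_ring/*-bond0*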

> > Thanks.
> >
> > > Date: Sun, 27 Mar 2016 12:00:22 +0200
> > > Subject: Re: [Oisf-users] troubleshooting packet loss
> > > From: petermanev@gmail.com
> > > To: coolyasha@hotmail.com
> > > CC: oisf-users@lists.openinfosecfoundation.org
> > >
> > > On Thu, Mar 24, 2016 at 7:02 PM, Yasha Zislin <coolyasha@hotmail.com> wrote:
> > > > I am trying to figure out where the packet loss is coming from on one of my
> > > > Suricata 3.0 sensors.
> > > > The only thing that looks odd in stats.log is that
> > > > tcp.stream_depth_reached and tcp.reassembly_gap are somewhat high.
> > > > I am using the latest PF_RING and monitoring one interface with 4 threads.
> > > > 4 logical CPUs with 16 GB of RAM; 66% of RAM is used.
> > >
> > > What traffic speeds are those on? How many rules do you load?
> > >
> > > On the first interface there is 6.5% loss and on the second 3.67%. Over what period of time was that?
> > >
> > > >
> > > > Here is the stats.log info.
> > > >
> > > > Thank you
> > > >
> > > > capture.kernel_packets    | RxPFRbond01               | 34118172
> > > > capture.kernel_drops      | RxPFRbond01               | 2240130
> > > > decoder.pkts              | RxPFRbond01               | 34125944
> > > > decoder.bytes             | RxPFRbond01               | 26624108366
> > > > decoder.invalid           | RxPFRbond01               | 0
> > > > decoder.ipv4              | RxPFRbond01               | 34707873
> > > > decoder.ipv6              | RxPFRbond01               | 570
> > > > decoder.ethernet          | RxPFRbond01               | 34125944
> > > > decoder.raw               | RxPFRbond01               | 0
> > > > decoder.null              | RxPFRbond01               | 0
> > > > decoder.sll               | RxPFRbond01               | 0
> > > > decoder.tcp               | RxPFRbond01               | 23715873
> > > > decoder.udp               | RxPFRbond01               | 9702569
> > > > decoder.sctp              | RxPFRbond01               | 0
> > > > decoder.icmpv4            | RxPFRbond01               | 98456
> > > > decoder.icmpv6            | RxPFRbond01               | 0
> > > > decoder.ppp               | RxPFRbond01               | 0
> > > > decoder.pppoe             | RxPFRbond01               | 0
> > > > decoder.gre               | RxPFRbond01               | 0
> > > > decoder.vlan              | RxPFRbond01               | 0
> > > > decoder.vlan_qinq         | RxPFRbond01               | 0
> > > > decoder.teredo            | RxPFRbond01               | 570
> > > > decoder.ipv4_in_ipv6      | RxPFRbond01               | 0
> > > > decoder.ipv6_in_ipv6      | RxPFRbond01               | 0
> > > > decoder.mpls              | RxPFRbond01               | 0
> > > > decoder.avg_pkt_size      | RxPFRbond01               | 780
> > > > decoder.max_pkt_size      | RxPFRbond01               | 1514
> > > > decoder.erspan            | RxPFRbond01               | 0
> > > > flow.memcap               | RxPFRbond01               | 0
> > > > defrag.ipv4.fragments     | RxPFRbond01               | 1190975
> > > > defrag.ipv4.reassembled   | RxPFRbond01               | 592903
> > > > defrag.ipv4.timeouts      | RxPFRbond01               | 0
> > > > defrag.ipv6.fragments     | RxPFRbond01               | 0
> > > > defrag.ipv6.reassembled   | RxPFRbond01               | 0
> > > > defrag.ipv6.timeouts      | RxPFRbond01               | 0
> > > > defrag.max_frag_hits      | RxPFRbond01               | 0
> > > > tcp.sessions              | RxPFRbond01               | 169101
> > > > tcp.ssn_memcap_drop       | RxPFRbond01               | 0
> > > > tcp.pseudo                | RxPFRbond01               | 77497
> > > > tcp.pseudo_failed         | RxPFRbond01               | 0
> > > > tcp.invalid_checksum      | RxPFRbond01               | 0
> > > > tcp.no_flow               | RxPFRbond01               | 0
> > > > tcp.syn                   | RxPFRbond01               | 180407
> > > > tcp.synack                | RxPFRbond01               | 146913
> > > > tcp.rst                   | RxPFRbond01               | 138896
> > > > tcp.segment_memcap_drop   | RxPFRbond01               | 0
> > > > tcp.stream_depth_reached  | RxPFRbond01               | 107
> > > > tcp.reassembly_gap        | RxPFRbond01               | 6765
> > > > detect.alert              | RxPFRbond01               | 3426
> > > > capture.kernel_packets    | RxPFRbond02               | 33927252
> > > > capture.kernel_drops      | RxPFRbond02               | 1246692
> > > > decoder.pkts              | RxPFRbond02               | 33932611
> > > > decoder.bytes             | RxPFRbond02               | 25571688366
> > > > decoder.invalid           | RxPFRbond02               | 0
> > > > decoder.ipv4              | RxPFRbond02               | 34483004
> > > > decoder.ipv6              | RxPFRbond02               | 506
> > > > decoder.ethernet          | RxPFRbond02               | 33932611
> > > > decoder.raw               | RxPFRbond02               | 0
> > > > decoder.null              | RxPFRbond02               | 0
> > > > decoder.sll               | RxPFRbond02               | 0
> > > > decoder.tcp               | RxPFRbond02               | 24665968
> > > > decoder.udp               | RxPFRbond02               | 8600129
> > > > decoder.sctp              | RxPFRbond02               | 0
> > > > decoder.icmpv4            | RxPFRbond02               | 113797
> > > > decoder.icmpv6            | RxPFRbond02               | 0
> > > > decoder.ppp               | RxPFRbond02               | 0
> > > > decoder.pppoe             | RxPFRbond02               | 0
> > > > decoder.gre               | RxPFRbond02               | 0
> > > > decoder.vlan              | RxPFRbond02               | 0
> > > > decoder.vlan_qinq         | RxPFRbond02               | 0
> > > > decoder.teredo            | RxPFRbond02               | 506
> > > > decoder.ipv4_in_ipv6      | RxPFRbond02               | 0
> > > > decoder.ipv6_in_ipv6      | RxPFRbond02               | 0
> > > > decoder.mpls              | RxPFRbond02               | 0
> > > > decoder.avg_pkt_size      | RxPFRbond02               | 753
> > > > decoder.max_pkt_size      | RxPFRbond02               | 1514
> > > > decoder.erspan            | RxPFRbond02               | 0
> > > > flow.memcap               | RxPFRbond02               | 0
> > > > defrag.ipv4.fragments     | RxPFRbond02               | 1103110
> > > > defrag.ipv4.reassembled   | RxPFRbond02               | 550393
> > > > defrag.ipv4.timeouts      | RxPFRbond02               | 0
> > > > defrag.ipv6.fragments     | RxPFRbond02               | 0
> > > > defrag.ipv6.reassembled   | RxPFRbond02               | 0
> > > > defrag.ipv6.timeouts      | RxPFRbond02               | 0
> > > > defrag.max_frag_hits      | RxPFRbond02               | 0
> > > > tcp.sessions              | RxPFRbond02               | 172432
> > > > tcp.ssn_memcap_drop       | RxPFRbond02               | 0
> > > > tcp.pseudo                | RxPFRbond02               | 79224
> > > > tcp.pseudo_failed         | RxPFRbond02               | 0
> > > > tcp.invalid_checksum      | RxPFRbond02               | 0
> > > > tcp.no_flow               | RxPFRbond02               | 0
> > > > tcp.syn                   | RxPFRbond02               | 183912
> > > > tcp.synack                | RxPFRbond02               | 150219
> > > > tcp.rst                   | RxPFRbond02               | 143693
> > > > tcp.segment_memcap_drop   | RxPFRbond02               | 0
> > > > tcp.stream_depth_reached  | RxPFRbond02               | 105
> > > > tcp.reassembly_gap        | RxPFRbond02               | 4710
> > > > detect.alert              | RxPFRbond02               | 3469
> > > > capture.kernel_packets    | RxPFRbond03               | 38750498
> > > > capture.kernel_drops      | RxPFRbond03               | 1511800
> > > > decoder.pkts              | RxPFRbond03               | 38762341
> > > > decoder.bytes             | RxPFRbond03               | 32714534213
> > > > decoder.invalid           | RxPFRbond03               | 0
> > > > decoder.ipv4              | RxPFRbond03               | 39299710
> > > > decoder.ipv6              | RxPFRbond03               | 512
> > > > decoder.ethernet          | RxPFRbond03               | 38762341
> > > > decoder.raw               | RxPFRbond03               | 0
> > > > decoder.null              | RxPFRbond03               | 0
> > > > decoder.sll               | RxPFRbond03               | 0
> > > > decoder.tcp               | RxPFRbond03               | 21943466
> > > > decoder.udp               | RxPFRbond03               | 15992492
> > > > decoder.sctp              | RxPFRbond03               | 0
> > > > decoder.icmpv4            | RxPFRbond03               | 178089
> > > > decoder.icmpv6            | RxPFRbond03               | 0
> > > > decoder.ppp               | RxPFRbond03               | 0
> > > > decoder.pppoe             | RxPFRbond03               | 0
> > > > decoder.gre               | RxPFRbond03               | 0
> > > > decoder.vlan              | RxPFRbond03               | 0
> > > > decoder.vlan_qinq         | RxPFRbond03               | 0
> > > > decoder.teredo            | RxPFRbond03               | 512
> > > > decoder.ipv4_in_ipv6      | RxPFRbond03               | 0
> > > > decoder.ipv6_in_ipv6      | RxPFRbond03               | 0
> > > > decoder.mpls              | RxPFRbond03               | 0
> > > > decoder.avg_pkt_size      | RxPFRbond03               | 843
> > > > decoder.max_pkt_size      | RxPFRbond03               | 1514
> > > > decoder.erspan            | RxPFRbond03               | 0
> > > > flow.memcap               | RxPFRbond03               | 0
> > > > defrag.ipv4.fragments     | RxPFRbond03               | 1078454
> > > > defrag.ipv4.reassembled   | RxPFRbond03               | 537369
> > > > defrag.ipv4.timeouts      | RxPFRbond03               | 0
> > > > defrag.ipv6.fragments     | RxPFRbond03               | 0
> > > > defrag.ipv6.reassembled   | RxPFRbond03               | 0
> > > > defrag.ipv6.timeouts      | RxPFRbond03               | 0
> > > > defrag.max_frag_hits      | RxPFRbond03               | 0
> > > > tcp.sessions              | RxPFRbond03               | 169832
> > > > tcp.ssn_memcap_drop       | RxPFRbond03               | 0
> > > > tcp.pseudo                | RxPFRbond03               | 78504
> > > > tcp.pseudo_failed         | RxPFRbond03               | 0
> > > > tcp.invalid_checksum      | RxPFRbond03               | 0
> > > > tcp.no_flow               | RxPFRbond03               | 0
> > > > tcp.syn                   | RxPFRbond03               | 181453
> > > > tcp.synack                | RxPFRbond03               | 147649
> > > > tcp.rst                   | RxPFRbond03               | 139792
> > > > tcp.segment_memcap_drop   | RxPFRbond03               | 0
> > > > tcp.stream_depth_reached  | RxPFRbond03               | 94
> > > > tcp.reassembly_gap        | RxPFRbond03               | 2567
> > > > detect.alert              | RxPFRbond03               | 3416
> > > > capture.kernel_packets    | RxPFRbond04               | 63727760
> > > > capture.kernel_drops      | RxPFRbond04               | 3046651
> > > > decoder.pkts              | RxPFRbond04               | 63747722
> > > > decoder.bytes             | RxPFRbond04               | 55373084583
> > > > decoder.invalid           | RxPFRbond04               | 0
> > > > decoder.ipv4              | RxPFRbond04               | 64056225
> > > > decoder.ipv6              | RxPFRbond04               | 487
> > > > decoder.ethernet          | RxPFRbond04               | 63747722
> > > > decoder.raw               | RxPFRbond04               | 0
> > > > decoder.null              | RxPFRbond04               | 0
> > > > decoder.sll               | RxPFRbond04               | 0
> > > > decoder.tcp               | RxPFRbond04               | 55855784
> > > > decoder.udp               | RxPFRbond04               | 7447497
> > > > decoder.sctp              | RxPFRbond04               | 0
> > > > decoder.icmpv4            | RxPFRbond04               | 133539
> > > > decoder.icmpv6            | RxPFRbond04               | 0
> > > > decoder.ppp               | RxPFRbond04               | 0
> > > > decoder.pppoe             | RxPFRbond04               | 0
> > > > decoder.gre               | RxPFRbond04               | 0
> > > > decoder.vlan              | RxPFRbond04               | 0
> > > > decoder.vlan_qinq         | RxPFRbond04               | 0
> > > > decoder.teredo            | RxPFRbond04               | 487
> > > > decoder.ipv4_in_ipv6      | RxPFRbond04               | 0
> > > > decoder.ipv6_in_ipv6      | RxPFRbond04               | 0
> > > > decoder.mpls              | RxPFRbond04               | 0
> > > > decoder.avg_pkt_size      | RxPFRbond04               | 868
> > > > decoder.max_pkt_size      | RxPFRbond04               | 1514
> > > > decoder.erspan            | RxPFRbond04               | 0
> > > > flow.memcap               | RxPFRbond04               | 0
> > > > defrag.ipv4.fragments     | RxPFRbond04               | 619405
> > > > defrag.ipv4.reassembled   | RxPFRbond04               | 308503
> > > > defrag.ipv4.timeouts      | RxPFRbond04               | 0
> > > > defrag.ipv6.fragments     | RxPFRbond04               | 0
> > > > defrag.ipv6.reassembled   | RxPFRbond04               | 0
> > > > defrag.ipv6.timeouts      | RxPFRbond04               | 0
> > > > defrag.max_frag_hits      | RxPFRbond04               | 0
> > > > tcp.sessions              | RxPFRbond04               | 171368
> > > > tcp.ssn_memcap_drop       | RxPFRbond04               | 0
> > > > tcp.pseudo                | RxPFRbond04               | 78609
> > > > tcp.pseudo_failed         | RxPFRbond04               | 0
> > > > tcp.invalid_checksum      | RxPFRbond04               | 0
> > > > tcp.no_flow               | RxPFRbond04               | 0
> > > > tcp.syn                   | RxPFRbond04               | 182409
> > > > tcp.synack                | RxPFRbond04               | 149124
> > > > tcp.rst                   | RxPFRbond04               | 143473
> > > > tcp.segment_memcap_drop   | RxPFRbond04               | 0
> > > > tcp.stream_depth_reached  | RxPFRbond04               | 82
> > > > tcp.reassembly_gap        | RxPFRbond04               | 35459
> > > > detect.alert              | RxPFRbond04               | 3770
> > > > flow_mgr.closed_pruned    | FlowManagerThread         | 310602
> > > > flow_mgr.new_pruned       | FlowManagerThread         | 549722
> > > > flow_mgr.est_pruned       | FlowManagerThread         | 380334
> > > > flow.spare                | FlowManagerThread         | 799999
> > > > flow.emerg_mode_entered   | FlowManagerThread         | 0
> > > > flow.emerg_mode_over      | FlowManagerThread         | 0
> > > > flow.tcp_reuse            | FlowManagerThread         | 237
> > > > flow_mgr.closed_pruned    | FlowManagerThread         | 308878
> > > > flow_mgr.new_pruned       | FlowManagerThread         | 544586
> > > > flow_mgr.est_pruned       | FlowManagerThread         | 379393
> > > > flow.spare                | FlowManagerThread         | 799402
> > > > flow.emerg_mode_entered   | FlowManagerThread         | 0
> > > > flow.emerg_mode_over      | FlowManagerThread         | 0
> > > > flow.tcp_reuse            | FlowManagerThread         | 252
> > > > tcp.memuse                | Global                    | 439248976
> > > > tcp.reassembly_memuse     | Global                    | 1717630000
> > > > dns.memuse                | Global                    | 476478
> > > > dns.memcap_state          | Global                    | 0
> > > > dns.memcap_global         | Global                    | 0
> > > > http.memuse               | Global                    | 536216
> > > > http.memcap               | Global                    | 0
> > > > flow.memuse               | Global                    | 237040288
> > > >
> > > > _______________________________________________
> > > > Suricata IDS Users mailing list: oisf-users@openinfosecfoundation.org
> > > > Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> > > > List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> > > > Suricata User Conference November 9-11 in Washington, DC:
> > > > http://oisfevents.net
> > >
> > > --
> > > Regards,
> > > Peter Manev
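PS: the 6.5% / 3.67% loss figures quoted above follow directly from the stats counters, i.e. capture.kernel_drops divided by capture.kernel_packets per capture thread. A quick check with bc, using the RxPFRbond01 and RxPFRbond02 values from the stats.log output:

  # RxPFRbond01: 2240130 drops out of 34118172 captured packets
  echo "scale=4; 2240130 / 34118172 * 100" | bc   # -> 6.56 (%)
  # RxPFRbond02: 1246692 drops out of 33927252 captured packets
  echo "scale=4; 1246692 / 33927252 * 100" | bc   # -> 3.67 (%)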