[Oisf-users] troubleshooting packet loss
Yasha Zislin
coolyasha at hotmail.com
Thu Mar 31 13:01:08 UTC 2016
Peter,
max_packets is set to 65000.
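In suricata.yaml that knob is spelled max-pending-packets, so the line presumably reads:

max-pending-packets: 65000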
So I've set my detect-engine to profile medium and default groups as in your documentation. Packet loss from the start is 96%.
CC: oisf-users at lists.openinfosecfoundation.org
From: petermanev at gmail.com
Subject: Re: [Oisf-users] troubleshooting packet loss
Date: Mon, 28 Mar 2016 19:40:50 +0200
To: coolyasha at hotmail.com
On 28 Mar 2016, at 19:19, Yasha Zislin <coolyasha at hotmail.com> wrote:
detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: auto
  - inspection-recursion-limit: 3000
  # When rule-reload is enabled, sending a USR2 signal to the Suricata process
  # will trigger a live rule reload. Experimental feature, use with care.
  - rule-reload: true
Can you please try adjusting the settings (including the yaml section itself) as per the link and recommendations I mentioned in my previous mail. The above quoted part of your suricata.yaml does not follow the dev-detect-grouping standard/change.
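For comparison, the dev-detect-grouping format drops the list dashes and collapses the per-direction keys into two group counts; a sketch of the new-style section (the 200/400 values are carried over from the quoted config purely as an illustration):

detect:
  profile: custom
  custom-values:
    toclient-groups: 200
    toserver-groups: 400
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000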
For pf_ring, I am using 6.3.0. On this specific sensor, I only have one interface. Here is the suricata.yaml config:

- interface: bond0
  # Number of receive threads (>1 will enable experimental flow pinned
  # runmode)
  threads: 2
  # Default clusterid. PF_RING will load balance packets based on flow.
  # All threads/processes that will participate need to have the same
  # clusterid.
  cluster-id: 99
  # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.
  # This is only supported in versions of PF_RING > 4.1.1.
  cluster-type: cluster_flow
What is the max-pending-packets value you have in the yaml?
Thanks
Here are the pf_ring settings that I use in the startup script:
ethtool -K bond0 rx off
ethtool -K bond0 tx off
ethtool -K bond0 gso off
ethtool -K bond0 gro off
ethtool -C bond0 rx-usecs 500
ethtool -G bond0 rx 4078
ifconfig bond0 promisc
rmmod pf_ring
modprobe pf_ring transparent_mode=0 min_num_slots=65576 enable_tx_capture=0
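To verify those settings took effect, something like the following should work (a sketch; the /sys path assumes pf_ring exposes its module parameters there):

ethtool -k bond0                                  # offloads should all show "off"
ethtool -g bond0                                  # current RX ring should report 4078
cat /sys/module/pf_ring/parameters/min_num_slots  # should print 65576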
CC: oisf-users at lists.openinfosecfoundation.org
From: petermanev at gmail.com
Subject: Re: [Oisf-users] troubleshooting packet loss
Date: Mon, 28 Mar 2016 18:47:56 +0200
To: coolyasha at hotmail.com
On 28 Mar 2016, at 14:30, Yasha Zislin <coolyasha at hotmail.com> wrote:
I have about 2 million packets per minute traversing a 10 gig pipe, averaging about 500 Mbit of traffic. Traffic is mostly LDAP, NTP, and Syslog; not really much HTTP/S.
The packet loss starts instantly; you can see it within a few minutes. After leaving it running over the weekend, I have 19% packet loss.
BTW, I am already using the dev v190 branch for this.
What does your "detect:" section look like in suricata.yaml?
You can use the section here as a reference: https://github.com/inliniac/suricata/blob/dev-detect-grouping-v193/suricata.yaml.in#L594 (please note the naming and spacing).
Are your pf_ring buffers full? (Or how are they configured?) What does the pf_ring config section look like for the two interfaces you have?
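One way to check the buffer fill is pf_ring's proc interface (a sketch; the per-socket file names vary by PID and interface):

cat /proc/net/pf_ring/info
cat /proc/net/pf_ring/*-bond0*   # per-socket stats; watch the free-slot and packet-lost counters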
Thanks.
> Date: Sun, 27 Mar 2016 12:00:22 +0200
> Subject: Re: [Oisf-users] troubleshooting packet loss
> From: petermanev at gmail.com
> To: coolyasha at hotmail.com
> CC: oisf-users at lists.openinfosecfoundation.org
>
> On Thu, Mar 24, 2016 at 7:02 PM, Yasha Zislin <coolyasha at hotmail.com> wrote:
> > I am trying to figure out where the packet loss is coming from on one of my
> > Suricata 3.0 sensors.
> > The only thing that looks odd in stats.log is that
> > tcp.stream_depth_reached and tcp.reassembly_gap are somewhat high.
> > I am using the latest PF_RING and monitoring one interface with 4 threads.
> > 4 logical CPUs with 16 GB of RAM; 66% of RAM is used.
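> >
> > (tcp.stream_depth_reached is bounded by stream.reassembly.depth in
> > suricata.yaml - once a flow passes that many bytes, inspection stops and
> > the counter increments - while tcp.reassembly_gap usually points back at
> > capture drops rather than a yaml setting. A sketch of the relevant knobs,
> > values illustrative only:
> >
> > stream:
> >   memcap: 512mb
> >   reassembly:
> >     memcap: 1gb
> >     depth: 1mb
> > )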
>
> What traffic speeds are those on? How many rules do you load?
>
> On the first interface there is 6.5% loss, on the second 3.67% - over
> what period of time was that?
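>
> (Both figures follow from capture.kernel_drops / capture.kernel_packets in
> the stats below: 2240130 / 34118172 ≈ 6.6% for RxPFRbond01 and
> 1246692 / 33927252 ≈ 3.7% for RxPFRbond02.)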
>
> >
> > Here is stats.log info.
> >
> > Thank you
> >
> > capture.kernel_packets | RxPFRbond01 | 34118172
> > capture.kernel_drops | RxPFRbond01 | 2240130
> > decoder.pkts | RxPFRbond01 | 34125944
> > decoder.bytes | RxPFRbond01 | 26624108366
> > decoder.invalid | RxPFRbond01 | 0
> > decoder.ipv4 | RxPFRbond01 | 34707873
> > decoder.ipv6 | RxPFRbond01 | 570
> > decoder.ethernet | RxPFRbond01 | 34125944
> > decoder.raw | RxPFRbond01 | 0
> > decoder.null | RxPFRbond01 | 0
> > decoder.sll | RxPFRbond01 | 0
> > decoder.tcp | RxPFRbond01 | 23715873
> > decoder.udp | RxPFRbond01 | 9702569
> > decoder.sctp | RxPFRbond01 | 0
> > decoder.icmpv4 | RxPFRbond01 | 98456
> > decoder.icmpv6 | RxPFRbond01 | 0
> > decoder.ppp | RxPFRbond01 | 0
> > decoder.pppoe | RxPFRbond01 | 0
> > decoder.gre | RxPFRbond01 | 0
> > decoder.vlan | RxPFRbond01 | 0
> > decoder.vlan_qinq | RxPFRbond01 | 0
> > decoder.teredo | RxPFRbond01 | 570
> > decoder.ipv4_in_ipv6 | RxPFRbond01 | 0
> > decoder.ipv6_in_ipv6 | RxPFRbond01 | 0
> > decoder.mpls | RxPFRbond01 | 0
> > decoder.avg_pkt_size | RxPFRbond01 | 780
> > decoder.max_pkt_size | RxPFRbond01 | 1514
> > decoder.erspan | RxPFRbond01 | 0
> > flow.memcap | RxPFRbond01 | 0
> > defrag.ipv4.fragments | RxPFRbond01 | 1190975
> > defrag.ipv4.reassembled | RxPFRbond01 | 592903
> > defrag.ipv4.timeouts | RxPFRbond01 | 0
> > defrag.ipv6.fragments | RxPFRbond01 | 0
> > defrag.ipv6.reassembled | RxPFRbond01 | 0
> > defrag.ipv6.timeouts | RxPFRbond01 | 0
> > defrag.max_frag_hits | RxPFRbond01 | 0
> > tcp.sessions | RxPFRbond01 | 169101
> > tcp.ssn_memcap_drop | RxPFRbond01 | 0
> > tcp.pseudo | RxPFRbond01 | 77497
> > tcp.pseudo_failed | RxPFRbond01 | 0
> > tcp.invalid_checksum | RxPFRbond01 | 0
> > tcp.no_flow | RxPFRbond01 | 0
> > tcp.syn | RxPFRbond01 | 180407
> > tcp.synack | RxPFRbond01 | 146913
> > tcp.rst | RxPFRbond01 | 138896
> > tcp.segment_memcap_drop | RxPFRbond01 | 0
> > tcp.stream_depth_reached | RxPFRbond01 | 107
> > tcp.reassembly_gap | RxPFRbond01 | 6765
> > detect.alert | RxPFRbond01 | 3426
> > capture.kernel_packets | RxPFRbond02 | 33927252
> > capture.kernel_drops | RxPFRbond02 | 1246692
> > decoder.pkts | RxPFRbond02 | 33932611
> > decoder.bytes | RxPFRbond02 | 25571688366
> > decoder.invalid | RxPFRbond02 | 0
> > decoder.ipv4 | RxPFRbond02 | 34483004
> > decoder.ipv6 | RxPFRbond02 | 506
> > decoder.ethernet | RxPFRbond02 | 33932611
> > decoder.raw | RxPFRbond02 | 0
> > decoder.null | RxPFRbond02 | 0
> > decoder.sll | RxPFRbond02 | 0
> > decoder.tcp | RxPFRbond02 | 24665968
> > decoder.udp | RxPFRbond02 | 8600129
> > decoder.sctp | RxPFRbond02 | 0
> > decoder.icmpv4 | RxPFRbond02 | 113797
> > decoder.icmpv6 | RxPFRbond02 | 0
> > decoder.ppp | RxPFRbond02 | 0
> > decoder.pppoe | RxPFRbond02 | 0
> > decoder.gre | RxPFRbond02 | 0
> > decoder.vlan | RxPFRbond02 | 0
> > decoder.vlan_qinq | RxPFRbond02 | 0
> > decoder.teredo | RxPFRbond02 | 506
> > decoder.ipv4_in_ipv6 | RxPFRbond02 | 0
> > decoder.ipv6_in_ipv6 | RxPFRbond02 | 0
> > decoder.mpls | RxPFRbond02 | 0
> > decoder.avg_pkt_size | RxPFRbond02 | 753
> > decoder.max_pkt_size | RxPFRbond02 | 1514
> > decoder.erspan | RxPFRbond02 | 0
> > flow.memcap | RxPFRbond02 | 0
> > defrag.ipv4.fragments | RxPFRbond02 | 1103110
> > defrag.ipv4.reassembled | RxPFRbond02 | 550393
> > defrag.ipv4.timeouts | RxPFRbond02 | 0
> > defrag.ipv6.fragments | RxPFRbond02 | 0
> > defrag.ipv6.reassembled | RxPFRbond02 | 0
> > defrag.ipv6.timeouts | RxPFRbond02 | 0
> > defrag.max_frag_hits | RxPFRbond02 | 0
> > tcp.sessions | RxPFRbond02 | 172432
> > tcp.ssn_memcap_drop | RxPFRbond02 | 0
> > tcp.pseudo | RxPFRbond02 | 79224
> > tcp.pseudo_failed | RxPFRbond02 | 0
> > tcp.invalid_checksum | RxPFRbond02 | 0
> > tcp.no_flow | RxPFRbond02 | 0
> > tcp.syn | RxPFRbond02 | 183912
> > tcp.synack | RxPFRbond02 | 150219
> > tcp.rst | RxPFRbond02 | 143693
> > tcp.segment_memcap_drop | RxPFRbond02 | 0
> > tcp.stream_depth_reached | RxPFRbond02 | 105
> > tcp.reassembly_gap | RxPFRbond02 | 4710
> > detect.alert | RxPFRbond02 | 3469
> > capture.kernel_packets | RxPFRbond03 | 38750498
> > capture.kernel_drops | RxPFRbond03 | 1511800
> > decoder.pkts | RxPFRbond03 | 38762341
> > decoder.bytes | RxPFRbond03 | 32714534213
> > decoder.invalid | RxPFRbond03 | 0
> > decoder.ipv4 | RxPFRbond03 | 39299710
> > decoder.ipv6 | RxPFRbond03 | 512
> > decoder.ethernet | RxPFRbond03 | 38762341
> > decoder.raw | RxPFRbond03 | 0
> > decoder.null | RxPFRbond03 | 0
> > decoder.sll | RxPFRbond03 | 0
> > decoder.tcp | RxPFRbond03 | 21943466
> > decoder.udp | RxPFRbond03 | 15992492
> > decoder.sctp | RxPFRbond03 | 0
> > decoder.icmpv4 | RxPFRbond03 | 178089
> > decoder.icmpv6 | RxPFRbond03 | 0
> > decoder.ppp | RxPFRbond03 | 0
> > decoder.pppoe | RxPFRbond03 | 0
> > decoder.gre | RxPFRbond03 | 0
> > decoder.vlan | RxPFRbond03 | 0
> > decoder.vlan_qinq | RxPFRbond03 | 0
> > decoder.teredo | RxPFRbond03 | 512
> > decoder.ipv4_in_ipv6 | RxPFRbond03 | 0
> > decoder.ipv6_in_ipv6 | RxPFRbond03 | 0
> > decoder.mpls | RxPFRbond03 | 0
> > decoder.avg_pkt_size | RxPFRbond03 | 843
> > decoder.max_pkt_size | RxPFRbond03 | 1514
> > decoder.erspan | RxPFRbond03 | 0
> > flow.memcap | RxPFRbond03 | 0
> > defrag.ipv4.fragments | RxPFRbond03 | 1078454
> > defrag.ipv4.reassembled | RxPFRbond03 | 537369
> > defrag.ipv4.timeouts | RxPFRbond03 | 0
> > defrag.ipv6.fragments | RxPFRbond03 | 0
> > defrag.ipv6.reassembled | RxPFRbond03 | 0
> > defrag.ipv6.timeouts | RxPFRbond03 | 0
> > defrag.max_frag_hits | RxPFRbond03 | 0
> > tcp.sessions | RxPFRbond03 | 169832
> > tcp.ssn_memcap_drop | RxPFRbond03 | 0
> > tcp.pseudo | RxPFRbond03 | 78504
> > tcp.pseudo_failed | RxPFRbond03 | 0
> > tcp.invalid_checksum | RxPFRbond03 | 0
> > tcp.no_flow | RxPFRbond03 | 0
> > tcp.syn | RxPFRbond03 | 181453
> > tcp.synack | RxPFRbond03 | 147649
> > tcp.rst | RxPFRbond03 | 139792
> > tcp.segment_memcap_drop | RxPFRbond03 | 0
> > tcp.stream_depth_reached | RxPFRbond03 | 94
> > tcp.reassembly_gap | RxPFRbond03 | 2567
> > detect.alert | RxPFRbond03 | 3416
> > capture.kernel_packets | RxPFRbond04 | 63727760
> > capture.kernel_drops | RxPFRbond04 | 3046651
> > decoder.pkts | RxPFRbond04 | 63747722
> > decoder.bytes | RxPFRbond04 | 55373084583
> > decoder.invalid | RxPFRbond04 | 0
> > decoder.ipv4 | RxPFRbond04 | 64056225
> > decoder.ipv6 | RxPFRbond04 | 487
> > decoder.ethernet | RxPFRbond04 | 63747722
> > decoder.raw | RxPFRbond04 | 0
> > decoder.null | RxPFRbond04 | 0
> > decoder.sll | RxPFRbond04 | 0
> > decoder.tcp | RxPFRbond04 | 55855784
> > decoder.udp | RxPFRbond04 | 7447497
> > decoder.sctp | RxPFRbond04 | 0
> > decoder.icmpv4 | RxPFRbond04 | 133539
> > decoder.icmpv6 | RxPFRbond04 | 0
> > decoder.ppp | RxPFRbond04 | 0
> > decoder.pppoe | RxPFRbond04 | 0
> > decoder.gre | RxPFRbond04 | 0
> > decoder.vlan | RxPFRbond04 | 0
> > decoder.vlan_qinq | RxPFRbond04 | 0
> > decoder.teredo | RxPFRbond04 | 487
> > decoder.ipv4_in_ipv6 | RxPFRbond04 | 0
> > decoder.ipv6_in_ipv6 | RxPFRbond04 | 0
> > decoder.mpls | RxPFRbond04 | 0
> > decoder.avg_pkt_size | RxPFRbond04 | 868
> > decoder.max_pkt_size | RxPFRbond04 | 1514
> > decoder.erspan | RxPFRbond04 | 0
> > flow.memcap | RxPFRbond04 | 0
> > defrag.ipv4.fragments | RxPFRbond04 | 619405
> > defrag.ipv4.reassembled | RxPFRbond04 | 308503
> > defrag.ipv4.timeouts | RxPFRbond04 | 0
> > defrag.ipv6.fragments | RxPFRbond04 | 0
> > defrag.ipv6.reassembled | RxPFRbond04 | 0
> > defrag.ipv6.timeouts | RxPFRbond04 | 0
> > defrag.max_frag_hits | RxPFRbond04 | 0
> > tcp.sessions | RxPFRbond04 | 171368
> > tcp.ssn_memcap_drop | RxPFRbond04 | 0
> > tcp.pseudo | RxPFRbond04 | 78609
> > tcp.pseudo_failed | RxPFRbond04 | 0
> > tcp.invalid_checksum | RxPFRbond04 | 0
> > tcp.no_flow | RxPFRbond04 | 0
> > tcp.syn | RxPFRbond04 | 182409
> > tcp.synack | RxPFRbond04 | 149124
> > tcp.rst | RxPFRbond04 | 143473
> > tcp.segment_memcap_drop | RxPFRbond04 | 0
> > tcp.stream_depth_reached | RxPFRbond04 | 82
> > tcp.reassembly_gap | RxPFRbond04 | 35459
> > detect.alert | RxPFRbond04 | 3770
> > flow_mgr.closed_pruned | FlowManagerThread | 310602
> > flow_mgr.new_pruned | FlowManagerThread | 549722
> > flow_mgr.est_pruned | FlowManagerThread | 380334
> > flow.spare | FlowManagerThread | 799999
> > flow.emerg_mode_entered | FlowManagerThread | 0
> > flow.emerg_mode_over | FlowManagerThread | 0
> > flow.tcp_reuse | FlowManagerThread | 237
> > flow_mgr.closed_pruned | FlowManagerThread | 308878
> > flow_mgr.new_pruned | FlowManagerThread | 544586
> > flow_mgr.est_pruned | FlowManagerThread | 379393
> > flow.spare | FlowManagerThread | 799402
> > flow.emerg_mode_entered | FlowManagerThread | 0
> > flow.emerg_mode_over | FlowManagerThread | 0
> > flow.tcp_reuse | FlowManagerThread | 252
> > tcp.memuse | Global | 439248976
> > tcp.reassembly_memuse | Global | 1717630000
> > dns.memuse | Global | 476478
> > dns.memcap_state | Global | 0
> > dns.memcap_global | Global | 0
> > http.memuse | Global | 536216
> > http.memcap | Global | 0
> > flow.memuse | Global | 237040288
> >
> >
>
>
>
> --
> Regards,
> Peter Manev