[Oisf-users] tcp.segment_memcap_drop
Kurzawa, Kevin
kkurzawa at co.pinellas.fl.us
Fri Jun 27 16:39:25 UTC 2014
AF_PACKET STATS
I switched to AF_PACKET yesterday. My CPU usage went from 45% to 95% (quad-core). RAM usage has been about the same. Overall packet drops seem about the same as well.
Here are the stats since the switch, 23.5 hours worth:
capture.kernel_packets | RxAFP1 | 273156168
capture.kernel_drops | RxAFP1 | 2480267
tcp.sessions | Detect | 11007982
tcp.ssn_memcap_drop | Detect | 0
tcp.segment_memcap_drop | Detect | 74074245
tcp.stream_depth_reached | Detect | 7251
tcp.reassembly_memuse | Detect | 8589934548
tcp.reassembly_gap | Detect | 5114959
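(Quick arithmetic on those counters: 2480267 kernel drops out of 273156168 packets is roughly 0.9%, while 74074245 segment memcap drops over 11007982 sessions averages almost 7 dropped segments per session.)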
SETTINGS
(Trying to include everything useful without irrelevant stuff. I didn't change anything yet. Is there something I should be editing specifically? I'm still working my way through the documentation to understand what each of these settings actually does and the effect it will have.)
outputs:
  - fast:
      enabled: yes

stream:
  memcap: 2gb
  checksum-validation: yes   # reject wrong csums
  inline: auto               # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 2gb
    depth: 1mb               # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes

defrag:
  memcap: 128mb
  hash-size: 65536
  trackers: 65535            # number of defragmented flows to follow
  max-frags: 65535           # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

flow:
  memcap: 512mb
  hash-size: 65536
  prealloc: 20000
  emergency-recovery: 30

af-packet:
  - interface: bond0
    threads: 4
    cluster-id: 99
    cluster-type: cluster_cpu
    defrag: yes
    use-mmap: yes
    ring-size: 200000
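(A quick way to confirm what the running binary actually picked up from suricata.yaml is the --dump-config option; the grep pattern here is just an example:

  suricata --dump-config | grep -E 'memcap|ring-size'
)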
HARDWARE
Model            HP ProLiant DL360
CPU              3.4 GHz x4 (Intel Xeon)
RAM              8 GB (don't laugh)
Network traffic  Peaks around 350 Mbps, usually around 200 Mbps
-----Original Message-----
From: oisf-users-bounces at lists.openinfosecfoundation.org [mailto:oisf-users-bounces at lists.openinfosecfoundation.org] On Behalf Of Peter Manev
Sent: Thursday, June 26, 2014 4:28 AM
To: Victor Julien
Cc: oisf-users at lists.openinfosecfoundation.org
Subject: Re: [Oisf-users] tcp.segment_memcap_drop
On Thu, Jun 26, 2014 at 9:56 AM, Peter Manev <petermanev at gmail.com> wrote:
>
>
>
> On Thu, Jun 26, 2014 at 9:22 AM, Victor Julien <lists at inliniac.net> wrote:
> > On 06/26/2014 09:19 AM, Peter Manev wrote:
> >> On Wed, Jun 25, 2014 at 3:37 PM, Kurzawa, Kevin
> >> <kkurzawa at co.pinellas.fl.us> wrote:
> >>> Using pcap because ... well, I don't know any better? I guess I don't really know the alternatives. PF_RING is the other option, right?
> >>
> >> There is pcap, pf_ring and af_packet.
> >>
> >> af_packet works "out of the box"; just make sure your kernel is not
> >> older than 3.2.
> >> runmode: workers seems to be the best option for af_packet.
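> >> A minimal sketch of that in suricata.yaml (the interface name is
> >> just an example):
> >>
> >>   runmode: workers
> >>
> >>   af-packet:
> >>     - interface: eth0
> >>       cluster-id: 99
> >>       cluster-type: cluster_flow
> >>       defrag: yes
> >>       use-mmap: yes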
> >>
> >> For pf_ring you need to compile and install a kernel module; also make
> >> sure your kernel is not older than 3.0 (2.6.32 being the bare minimum).
> >> runmode: workers seems to be the best option for pf_ring as well.
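> >> A rough sketch of the usual pf_ring steps (the repo URL and exact
> >> commands are assumptions, adjust to your system):
> >>
> >>   git clone https://github.com/ntop/PF_RING.git
> >>   cd PF_RING/kernel
> >>   make && sudo make install
> >>   sudo modprobe pf_ring
> >>   # then rebuild suricata against it:
> >>   ./configure --enable-pfring && make && sudo make install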
> >>
> >>
> >> Our wiki provides some guidance -
> >> https://redmine.openinfosecfoundation.org/projects/suricata/wiki
> >> and then there are a number of articles on the net and on our user
> >> mail list archives regarding high perf tuning.
> >>
> >>>
> >>> Is this the potential source of the tcp.reassembly_gap?
> >>
> >> No
> >
> > Uh, yes? Packet loss is certainly a big factor in tcp.reassembly_gap.
> > Stats do show packet loss, so using a faster capture method may
> > certainly help.
> >
>
>
> It may help.
>
> Judging by the posted output: the number of tcp.reassembly_gap is 4
> times higher than the number of capture.kernel_drops. Based on that I
> drew the conclusion.
> In my observations/experience, most cases of big numbers in the
> reassembly gap counter (with a much smaller number of kernel drops) are
> due to ... well :) gaps in the traffic - either there were drops on the
> mirror port, or there were sudden peaks/fluctuations in the traffic and
> the mirror port reached its limits, and similar things.
>
> If we look at it from a purely factual perspective in this case - how
> can one dropped packet (and it may be any packet, not just TCP) result
> in 4 reassembly gaps?
>
> Thanks
>
>
Kevin,
I also noticed a lot of tcp.segment_memcap_drop.
Would you be able to do a test with af_packet, increase the stream and reassembly memcaps in suricata.yaml, and see how that affects things?
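For example, a minimal sketch of that change in suricata.yaml (the 3gb/4gb values are illustrative assumptions, not tested recommendations):

  stream:
    memcap: 3gb
    reassembly:
      memcap: 4gb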
Could you please share some more info about your set up - HW, traffic..
thanks
--
Regards,
Peter Manev
_______________________________________________
Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
OISF: http://www.openinfosecfoundation.org/