[Oisf-users] tcp.segment_memcap_drop

Peter Manev petermanev at gmail.com
Thu Jun 26 08:28:21 UTC 2014


On Thu, Jun 26, 2014 at 9:56 AM, Peter Manev <petermanev at gmail.com> wrote:
>
>
>
> On Thu, Jun 26, 2014 at 9:22 AM, Victor Julien <lists at inliniac.net> wrote:
> > On 06/26/2014 09:19 AM, Peter Manev wrote:
> >> On Wed, Jun 25, 2014 at 3:37 PM, Kurzawa, Kevin
> >> <kkurzawa at co.pinellas.fl.us> wrote:
> >>> Using pcap because ... well, I don't know any better? I guess I don't really know the alternatives. PF_RING is the other option, right?
> >>
> >> There are three capture methods: pcap, pf_ring and af_packet.
> >>
> >> af_packet works "out of the box"; just make sure your kernel is not
> >> older than 3.2.
> >> runmode: workers seems to be the best option for af_packet.
> >>
> >> For pf_ring you need to compile and load a kernel module; also make
> >> sure your kernel is not older than 3.0 (2.6.32 being the bare
> >> minimum).
> >> runmode: workers seems to be the best option for pf_ring as well.
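> >>
> >> As a rough sketch only (the interface name, thread count and
> >> cluster-id below are illustrative placeholders, not tuned
> >> recommendations), an af_packet setup in suricata.yaml could look
> >> like:
> >>
> >>   runmode: workers
> >>
> >>   af-packet:
> >>     - interface: eth0
> >>       threads: 4
> >>       cluster-id: 99
> >>       cluster-type: cluster_flow
> >>       defrag: yes
> >>
> >> You can check your kernel version with "uname -r" first.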
> >>
> >>
> >> Our wiki provides some guidance -
> >> https://redmine.openinfosecfoundation.org/projects/suricata/wiki
> >> and there are also a number of articles on the net and in our user
> >> mailing list archives regarding high-performance tuning.
> >>
> >>>
> >>> Is this the potential source of the tcp.reassembly_gap?
> >>
> >> No
> >
> > Uh, yes? Packet loss is certainly a big factor in tcp.reassembly_gap.
> > Stats do show packet loss, so using a faster capture method may
> > certainly help.
> >
>
>
> It may help.
>
> Judging by the posted output, the number of tcp.reassembly_gap is 4
> times higher than the number of capture.kernel_drops. That is what I
> based my conclusion on.
> In my experience, most cases of a big reassembly gap counter (with a
> much smaller kernel drops counter) are due to ... well :) gaps in the
> traffic - either there were drops on the mirror port, or there were
> sudden peaks/fluctuations in the traffic and the mirror port reached
> its limits, and similar things.
>
> If we look at it from a purely factual perspective in this case - how
> can one dropped packet (and it may be any packet, not just TCP) lead
> to 4 reassembly gaps?
>
> Thanks
>
>

Kevin,

I also noticed a lot of tcp.segment_memcap_drop.
Would you be able to do a test with af_packet, increase the stream and
reassembly memcaps in suricata.yaml, and see how that affects things?
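
For reference, the memcap settings live under "stream" in
suricata.yaml. The values below are just an illustrative starting
point, not tuned recommendations - what is sensible depends on your
available RAM and traffic:

  stream:
    memcap: 1gb
    reassembly:
      memcap: 2gb
      depth: 1mb

Raising these trades memory for fewer tcp.segment_memcap_drop events,
so keep an eye on actual memory usage.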

Could you please share some more info about your setup - hardware, traffic, etc.?

thanks

-- 
Regards,
Peter Manev


