[Oisf-users] Packet Loss

Yasha Zislin coolyasha at hotmail.com
Fri Jun 6 15:13:18 UTC 2014


Cooper/Peter/Victor,

Thank you for the detailed responses. I will comment on everything in one reply.

I have reconfigured Suricata to run 8 threads per interface instead of 16 and will see how it goes. I've tried this before and noticed that packet loss set in faster, but I've changed many settings since then, so I will test again. Just to point out: with 32 threads none of my CPUs reaches 100%, and I get more PF_RING buffers (i.e. slots) overall.
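
For reference, the pfring section of my suricata.yaml now looks roughly like
this (the cluster-id values below are placeholders, not necessarily what I run):

    pfring:
      - interface: eth15
        threads: 8
        cluster-id: 99
        cluster-type: cluster_flow
      - interface: eth17
        threads: 8
        cluster-id: 98
        cluster-type: cluster_flow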

Regarding increasing the buffers, such as the stream settings: I kept running out of ideas about where my bottleneck is, so I kept raising them since I have plenty of RAM. I assume this doesn't hurt anything except the RAM allocation.
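
To give a concrete idea, the stream section is along these lines (the memcap
numbers here are illustrative rather than my exact values):

    stream:
      memcap: 12gb
      prealloc-sessions: 2000000
      checksum-validation: no
      reassembly:
        memcap: 24gb
        depth: 1mb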

I've also profiled my traffic: it is 99.9% HTTP.

The stats I provided do show acceptable packet loss (i.e. 0.22%), and I would be fine with that if it held the whole time.
The problem is that whenever free slots drop to 0 on all threads, packet loss climbs above 20%. After some time the free slots return to the full amount, the drops stop, and the overall stats improve again. I am trying to figure out why I get these sudden bursts of dropped packets, and in particular why my free slots get saturated so badly.
I've configured PF_RING's min_num_slots to 400,000. I know that is far higher than the 65k value shown in every guide, but it worked; it just started using about a gig of RAM per thread. Looking at the Num Free Slots stat for each thread, it shows around 680k. This seems to work except for the saturation described above. I've tried the 65k value instead, but the saturation occurs even faster.
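
For completeness, this is roughly how I watch and set that. The per-ring
counters live under /proc/net/pf_ring (file names vary per ring, and on my
version the field is labelled "Num Free Slots"), and min_num_slots is a
pf_ring module parameter, so something along these lines:

    # watch the free-slot counters for every active ring on the SPAN interfaces
    watch -n 5 'grep -H "Num Free Slots" /proc/net/pf_ring/*-eth1[57].*'

    # reload pf_ring with a larger per-ring slot count
    rmmod pf_ring 2>/dev/null
    modprobe pf_ring min_num_slots=400000
    # (or insmod /path/to/pf_ring.ko min_num_slots=400000 if the module is not installed)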

I've also run the pfcount tool to check for packet loss, and it looks good: no packets lost.
The SPAN port is on a 10-gig fiber card and the actual traffic does not go above 1 gig. The Linux kernel does not report any packet loss on these interfaces.
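
For anyone who wants to double-check the same things, this is roughly what I
ran (interface names from my setup):

    # PF_RING's own capture test; it reports received vs. dropped packets
    pfcount -i eth15

    # kernel-level drop counters for the capture interfaces
    ip -s link show eth15
    ethtool -S eth15 | grep -i drop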

I've also disabled checksum offloading on the NICs, and checksum checks are disabled in the Suricata config as well.
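
On the NIC side that amounts to something like the following for each capture
interface (the exact set of offloads your driver supports may differ), plus
the corresponding checksum-checks setting in suricata.yaml:

    for dev in eth15 eth17; do
        ethtool -K $dev rx off tx off sg off tso off gso off gro off lro off
    done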

> Date: Thu, 5 Jun 2014 20:49:53 +0200
> Subject: Re: [Oisf-users] Packet Loss
> From: petermanev at gmail.com
> To: cnelson at ucsd.edu
> CC: coolyasha at hotmail.com; oisf-users at lists.openinfosecfoundation.org
> 
> On Wed, Jun 4, 2014 at 6:52 PM, Cooper F. Nelson <cnelson at ucsd.edu> wrote:
> >
> > Not sure if it's an option, but you might want to consider doubling your
> > cores.  The published guides tend to use a 16 core system, but the
> > reality is that if you are really going for a multi-gigabit deployment
> > and zero packet drops you are going to need 32 cores at least.
> >
> > On 6/4/2014 8:04 AM, Yasha Zislin wrote:
> >> To summarize my environment and set up.
> >> I have one server (16 CPU Cores, 132gb of ram) with two span ports which
> >> I monitor with PF_RING and Suricata 2.0.1.
> >> I've configured all of the buffers pretty high. When my suricata is
> >> running it is using almost 70 gigs of RAM.
> >
> >
> > --
> > Cooper Nelson
> > Network Security Analyst
> > UCSD ACT Security Team
> > cnelson at ucsd.edu x41042
> 
> 
> You have 16 cores, but you have configured 32 threads in total (16 on
> eth15 and 16 on eth17).
> 
> I would suggest trying 8 and 8 respectively - assuming you get roughly
> the same amount of traffic on both ethX interfaces.
> 
> "I think it should be able to handle the load with 0 packet loss" -
> the fact that you have a lot of HW does not mean you should get 0
> packet loss.
> 
> The loss may occur at any given point in the traffic and/or on the way
> to the mirror port - and if it is a stream then, depending on where (and
> how often) in the stream the packet loss occurs, the whole stream might
> be dropped/flushed by the engine.
> 
> I mean, packet loss can happen for many reasons, and full memcap
> buffers are just one of them - there could be packet loss related
> to a whole lot of other things that have nothing to do with memcap
> buffers and/or processing...
> 
> Judging from your output of stats.log -
> 
> capture.kernel_packets    | RxPFReth1516              | 86380604
> capture.kernel_drops      | RxPFReth1516              | 195847
> 
> this is actually ~0.22% loss for that particular thread. If all of
> them share the same result (with 22k rules), I would say you are
> doing quite alright.
> 
> Another observation - you have 64 million sessions preallocated
> "3/6/2014 -- 15:40:31 - <Info> - stream "prealloc-sessions": 2000000
> (per thread)"
> eth15: 16 x 2 million, eth17: 16 x 2 million
> 
> Just a suggestion (word of advice), if I may:
> You mention you have 180k packets per second at peak times - imagine
> for a second that every single packet initiated a session, and you
> would still see that 64 million preallocated sessions is a bit of an
> excessive setting.
> 
> I would suggest that profiling and exploring (knowing) your traffic is
> a much better approach than just adding zeros to the memcaps in the
> configuration settings.
> 
> 
> 
> -- 
> Regards,
> Peter Manev

