[Oisf-users] Packet Loss

Peter Manev petermanev at gmail.com
Mon Jun 9 08:55:04 UTC 2014


On Fri, Jun 6, 2014 at 5:13 PM, Yasha Zislin <coolyasha at hotmail.com> wrote:
> Cooper/Peter/Victor,
>
> Thank you for detailed response. I will comment on everything in one
> response.
>
> I have reconfigured Suricata to run 8 threads for each interface instead of
> 16. Will see how it goes. I've tried this before and noticed that packet
> loss occurs faster, but I've changed settings many times, so I will test
> again. Just to point out, with 32 threads none of my CPUs reach 100%. And I
> get more buffers with PF_RING (i.e. slots).

My concern here was that the traffic is split between two ports/interfaces,
so if traffic increases on one port, the threads used for that
interface might overpower the threads on the same CPU from the other
interface, and you could end up with more packet loss.
But if it works OK with the traffic at the moment - sure, why not use
it like that. This is one of the benefits of Suricata's multithreading
capabilities.
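
For reference, that kind of per-interface thread split lives in the pfring
section of suricata.yaml - a minimal sketch, assuming your eth15/eth17 pair
(the cluster-ids here are just examples):

    pfring:
      - interface: eth15
        threads: 8                  # workers dedicated to this port
        cluster-id: 99              # each interface needs its own cluster
        cluster-type: cluster_flow  # flow-based balancing across the threads
      - interface: eth17
        threads: 8
        cluster-id: 98
        cluster-type: cluster_flow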

>
> In regards to increasing buffers, like for Stream: I kept running out of
> ideas about where my bottleneck is, so I kept increasing them since I have
> plenty of RAM. I assume this won't hurt anything but the RAM allocation.
>
> I've also done profiling on my traffic. It is 99.9% HTTP.
>
> The stats that I have provided do show good packet loss (i.e. 0.22%) and I
> would be OK with this if it held the whole time.
> So whenever free slots come to 0 on all threads, packet loss starts to go
> above 20%. Then after some time, free slots go back to the full amount and
> packet drop stops. Over time the stats improve. I am trying to figure out why
> I get this sudden drop in packets. What I don't understand is why my Free
> Slots get saturated so much.
> I've configured min num slots on PF_RING to be 400,000.

Out of curiosity, how did you increase the min slots to 400K?
What is your NIC?
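
The usual route I know of is the pf_ring kernel module parameter - a sketch,
assuming the stock pf_ring.ko (reloading the module drops capture briefly):

    # reload pf_ring with a larger per-ring slot count (value from this thread)
    rmmod pf_ring
    modprobe pf_ring min_num_slots=400000

    # confirm the module picked it up
    cat /sys/module/pf_ring/parameters/min_num_slots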

> I know it is way
> higher than the 65k value shown in every guide. But it worked. It just
> started to use a gig of RAM for each thread.

That would sum up to 32 GB of RAM then (32 threads x 1 GB, correct?)

>Looking at stats for Free Num
> Slots in each thread, it is set to 680k.

Could you share some stats if possible? (If too long, you might want
to use pastebin or something similar.)
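
If it is easier, the raw per-ring counters can also be pulled straight from
procfs - a sketch, assuming the standard pf_ring proc layout (one file per
ring; field names may differ slightly between versions):

    # free-slot figures for all rings bound to eth15
    grep -H "Free Slots" /proc/net/pf_ring/*-eth15.*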

> It seems to work except for
> the saturation described above. I've tried using the 65k number instead, but
> saturation occurs even faster.
>
> I've also run the pfcount tool to see packet loss and it looks good. No
> packets lost.
> The SPAN port is a 10 gig fiber card and the actual traffic does not go above
> 1 gig. The Linux kernel does not report any packet loss on these interfaces.
>
> I've also disabled checksum offloading on the NIC, and in the Suricata
> config checksum checks are disabled as well.
>
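
As a side note, a quick way to double-check that the offloads really are off -
a sketch using standard ethtool:

    # list checksum/segmentation offload state on a capture port
    ethtool -k eth15 | grep -E "checksum|segmentation"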

So if I understand correctly - the free slots fill up, packet drops
occur, and after some time the slots go back to normal; then the whole
process repeats?
That would mean that the flushing should occur more often, I guess, or
that the TCP timeout values are too big and you have long sessions.
What are your timeouts in suricata.yaml?
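
For comparison, the relevant knobs live under flow-timeouts in suricata.yaml -
a sketch with values in the ballpark of the stock file (exact defaults vary by
version, so treat the numbers as illustrative):

    flow-timeouts:
      default:
        new: 30
        established: 300
        closed: 0
      tcp:
        new: 60
        established: 3600   # an hour of idle keeps a session's state around
        closed: 120

With mostly short HTTP transactions, a lower tcp established value would let
the flow engine recycle state much sooner.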

What is the output of:

ethtool -g eth15
and
ethtool -g eth17
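
If those show the RX ring at its default rather than the hardware maximum, it
can usually be raised - a sketch, the right value depends on the NIC:

    # compare "Pre-set maximums" vs "Current hardware settings" first
    ethtool -g eth15
    # then raise the RX ring towards the supported maximum, e.g.
    ethtool -G eth15 rx 4096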


Thanks

-- 
Regards,
Peter Manev


