[Oisf-users] Packet Loss

Peter Manev petermanev at gmail.com
Thu Jun 5 18:49:53 UTC 2014


On Wed, Jun 4, 2014 at 6:52 PM, Cooper F. Nelson <cnelson at ucsd.edu> wrote:
>
> Not sure if it's an option, but you might want to consider doubling your
> cores.  The published guides tend to use a 16-core system, but the
> reality is that if you are really going for a multi-gigabit deployment
> with zero packet drops, you are going to need at least 32 cores.
>
> On 6/4/2014 8:04 AM, Yasha Zislin wrote:
>> To summarize my environment and set up.
>> I have one server (16 CPU cores, 132 GB of RAM) with two span ports which
>> I monitor with PF_RING and Suricata 2.0.1.
>> I've configured all of the buffers pretty high. When Suricata is
>> running it uses almost 70 GB of RAM.
>
>
> --
> Cooper Nelson
> Network Security Analyst
> UCSD ACT Security Team
> cnelson at ucsd.edu x41042


You have 16 cores, but you have configured 32 capture threads in total
(16 for eth15 and 16 for eth17).

I would suggest trying 8 and 8 respectively, assuming you get roughly
the same amount of traffic on both ethX interfaces.
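
For reference, the relevant pfring section of suricata.yaml would look
roughly like the sketch below - the cluster-id/cluster-type values are
illustrative, only the thread counts matter for this point:

    pfring:
      - interface: eth15
        threads: 8              # was 16; 8+8 keeps total threads at core count
        cluster-id: 99          # illustrative; must differ per interface
        cluster-type: cluster_flow
      - interface: eth17
        threads: 8
        cluster-id: 98
        cluster-type: cluster_flow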

"I think it should be able to handle the load with 0 packet loss" -
the fact that you have a lot of HW does not mean you should get 0
packet loss.

Loss may occur at any point in the traffic path, including on the way
to the mirror port - and if the lost packet belongs to a stream then,
depending on where (and how often) in the stream the loss occurs, the
whole stream might be dropped/flushed by the engine.

In other words, packet loss can happen for many reasons, and full
memcap buffers are just one of them - there can be packet loss related
to a whole lot of other things that have nothing to do with memcap
buffers and/or processing...

Judging from your stats.log output -

capture.kernel_packets    | RxPFReth1516              | 86380604
capture.kernel_drops      | RxPFReth1516              | 195847

this is actually roughly 0.23% loss (195847 / 86380604 ≈ 0.0023) for
that particular thread. If all the threads share a similar result (with
22k rules loaded), I would say you are doing quite alright.

Another observation - you have 64 million sessions preallocated:

"3/6/2014 -- 15:40:31 - <Info> - stream "prealloc-sessions": 2000000
(per thread)"

eth15: 16 threads x 2 million, eth17: 16 threads x 2 million, i.e.
32 x 2 million = 64 million sessions in total.

Just a suggestion (word of advice), if I may:
You mention that you see 180k packets per second at peak times. Imagine
for a second that every single packet initiated a new session - even in
that worst case, 64 million preallocated sessions would cover about six
minutes of traffic (64,000,000 / 180,000 ≈ 355 seconds), so it is a
rather excessive setting.
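
As an illustration only - the right numbers depend on your actual
session rate, and the values below are placeholders, not a
recommendation - the stream section of suricata.yaml could be scaled
back to something like:

    stream:
      memcap: 1gb                  # placeholder; size to observed traffic
      prealloc-sessions: 250000    # per thread; 32 threads x 250k = 8M total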

I would suggest that profiling and exploring (really getting to know)
your traffic is a much better approach than just adding zeros to the
memcaps in the configuration settings.
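
If your Suricata build was compiled with --enable-profiling, the
profiling section of suricata.yaml can show where the engine actually
spends its time, per rule - a rough sketch (the filename and limit
values are illustrative):

    profiling:
      rules:
        enabled: yes
        filename: rule_perf.log   # illustrative output file name
        append: yes
        sort: avgticks            # sort report by average ticks per check
        limit: 100                # report only the top 100 rules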



-- 
Regards,
Peter Manev


