<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>Good question. I am not sure why it didn't work before. I've changed so many settings around that unfortunately I don't know which one was causing it not to work. <br>I would assume that it should have worked from the start. This way packet acquisition and inspection does not jump from one interface's threads to another.<br><br>Yes, what is reassembly depth and how does it affect inspection?<br><br>BTW, one hour into the run (during peak time): over a billion packets inspected, zero packet loss.<br><br>If anybody needs any more info to get their instance working, I can gladly share.<br><br>Thanks.<br><br><div>> Date: Mon, 9 Jun 2014 19:56:41 +0200<br>> Subject: Re: [Oisf-users] Packet Loss<br>> From: petermanev@gmail.com<br>> To: coolyasha@hotmail.com<br>> CC: cnelson@ucsd.edu; oisf-users@lists.openinfosecfoundation.org<br>> <br>> On Mon, Jun 9, 2014 at 7:19 PM, Yasha Zislin <coolyasha@hotmail.com> wrote:<br>> > I've changed closed on TCP from 12 to 0.<br>> ><br>> > I've noticed interesting fact.<br>> > So I had one cluster ID for all threads for both interfaces. Packet drop<br>> > would happen on all threads eventually.<br>> > I've configured different cluster IDs for each interface and packet drop was<br>> > being observed only on one interface's threads.<br>> <br>> You mentioned that that was not possible for some reason? What was the<br>> issue (obviously it is working now)<br>> <br>> ><br>> > So currently I have 8 threads per interface, each having its own cluster ID.<br>> > Timeout for closed set to 0.<br>> > Some CPU Cores are peaking at 100% but not all.<br>> > Free Slots for threads is fluctuating but not at 0. I'll leave this running<br>> > for some time to observe.<br>> ><br>> > I am not quite clear what this depth setting does. 
Can you explain it to me?<br>> <br>> reassembly depth ?<br>> <br>> ><br>> > Thanks.<br>> ><br>> >> Date: Mon, 9 Jun 2014 09:28:13 -0700<br>> >> From: cnelson@ucsd.edu<br>> >> To: coolyasha@hotmail.com; petermanev@gmail.com;<br>> >> oisf-users@lists.openinfosecfoundation.org<br>> ><br>> >> Subject: Re: [Oisf-users] Packet Loss<br>> >><br>> >> Some additional tweaks to try:<br>> >><br>> >> 1. Set all of your "closed" flow-timeouts to 0.<br>> >><br>> >> 2. Set your stream -> depth to 8kb.<br>> >><br>> >> If that fixes your performance issues you can try increasing the stream<br>> >> depths until you find what the limit is for your hardware.<br>> >><br>> >> Keep in mind that suricata isn't magic and if you are pushing monster<br>> >> http flows (like we are) you may need to make some concessions on your<br>> >> current hardware. As I mentioned, one approach is to sample traffic via<br>> >> bpf filters.<br>> >><br>> >> -Coop<br>> >><br>> >> On 6/9/2014 8:44 AM, Yasha Zislin wrote:<br>> >> > I've done some additional testing.<br>> >> ><br>> >> > I've ran pfcount with 16 threads with the same parameters as Suricata<br>> >> > does.<br>> >> > I've had only one instance of /proc/net/pf_ring instantiated but 16<br>> >> > threads in processes (TOP -H).<br>> >> ><br>> >> > I've been running it for an hour with 0 packet loss. 
PF_RING slot usage<br>> >> > does not go above 200 (with 688k total).<br>> >> ><br>> >> > So my packet loss occurs due to Suricata and not network/pf_ring<br>> >> > related.<br>> >> ><br>> >> > Thanks.<br>> >> ><br>> >><br>> >> --<br>> >> Cooper Nelson<br>> >> Network Security Analyst<br>> >> UCSD ACT Security Team<br>> >> cnelson@ucsd.edu x41042<br>> <br>> <br>> <br>> -- <br>> Regards,<br>> Peter Manev<br></div> </div></body>
</html>