[Oisf-users] Packet Loss

Peter Manev petermanev at gmail.com
Mon Jun 9 18:31:47 UTC 2014


On Mon, Jun 9, 2014 at 8:22 PM, Yasha Zislin <coolyasha at hotmail.com> wrote:
> So a higher number allows larger streams to be assembled. Also, if there
> is a file associated with the stream, a larger depth value would benefit
> file extraction.
> Am I understanding this correctly?

Correct - stream.reassembly.depth controls how far into a stream
reassembly is done (see below).
For file extraction, though, this is not the only setting that comes
into play:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/File_Extraction
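
For reference, the relevant suricata.yaml sections look roughly like
this (a minimal sketch - the values are illustrative, not
recommendations, and file extraction also needs rules using the
"filestore" keyword):

    stream:
      reassembly:
        # Reassemble/inspect at most this much of each stream;
        # data beyond the depth is not inspected. 0 means unlimited.
        depth: 1mb

    app-layer:
      protocols:
        http:
          libhtp:
            default-config:
              # For HTTP file extraction these must be large enough to
              # cover the files you want extracted (0 = unlimited).
              request-body-limit: 1mb
              response-body-limit: 1mb

    outputs:
      - file-store:
          enabled: yes
          log-dir: files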


>
> Thanks.
>
>> Date: Mon, 9 Jun 2014 20:17:53 +0200
>> Subject: Re: [Oisf-users] Packet Loss
>> From: petermanev at gmail.com
>> To: coolyasha at hotmail.com
>> CC: cnelson at ucsd.edu; oisf-users at lists.openinfosecfoundation.org
>>
>> On Mon, Jun 9, 2014 at 8:07 PM, Yasha Zislin <coolyasha at hotmail.com>
>> wrote:
>> > Good question. I am not sure why it didn't work before. I've changed
>> > so many settings around that unfortunately I don't know what was
>> > causing it not to work. I would assume it should have worked from the
>> > start. This way packet acquisition and inspection do not jump from one
>> > interface's threads to another's.
>> >
>> > Yes - what is reassembly depth, and how does it affect inspection?
>>
>> It means a stream is reassembled (and inspected) only up to that
>> "depth"; data beyond it is not inspected.
>> It also affects file extraction.
>>
>> >
>> > BTW, one hour into the run (during peak time): over a billion packets
>> > inspected, zero packet loss.
>> >
>> > If anybody needs any more info to get their instance working, I can
>> > gladly share.
>> >
>> > Thanks.
>> >
>> >> Date: Mon, 9 Jun 2014 19:56:41 +0200
>> >> Subject: Re: [Oisf-users] Packet Loss
>> >> From: petermanev at gmail.com
>> >> To: coolyasha at hotmail.com
>> >> CC: cnelson at ucsd.edu; oisf-users at lists.openinfosecfoundation.org
>> >>
>> >> On Mon, Jun 9, 2014 at 7:19 PM, Yasha Zislin <coolyasha at hotmail.com>
>> >> wrote:
>> >> > I've changed the TCP "closed" timeout from 12 to 0.
>> >> >
>> >> > I've noticed an interesting fact.
>> >> > I had one cluster ID for all threads on both interfaces, and packet
>> >> > drops would eventually happen on all threads.
>> >> > After I configured a different cluster ID for each interface, packet
>> >> > drops were observed on only one interface's threads.
>> >>
>> >> You mentioned that this was not possible for some reason? What was
>> >> the issue (obviously it is working now)?
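>> >>
>> >> For reference, a per-interface setup in suricata.yaml looks roughly
>> >> like this (a sketch - the interface names and ID values are
>> >> placeholders; the point is that each interface gets a distinct
>> >> cluster-id):
>> >>
>> >>     pfring:
>> >>       - interface: eth2
>> >>         threads: 8
>> >>         cluster-id: 99
>> >>         cluster-type: cluster_flow
>> >>       - interface: eth3
>> >>         threads: 8
>> >>         cluster-id: 98
>> >>         cluster-type: cluster_flow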
>> >>
>> >> >
>> >> > So currently I have 8 threads per interface, and each interface has
>> >> > its own cluster ID.
>> >> > The timeout for "closed" is set to 0.
>> >> > Some CPU cores are peaking at 100%, but not all.
>> >> > The free slots per thread are fluctuating but not hitting 0 (see the
>> >> > note below). I'll leave this running for some time to observe.
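>> >> >
>> >> > (The free-slot counters are in the per-socket files under
>> >> > /proc/net/pf_ring/ - assuming standard PF_RING proc naming,
>> >> > grep "Free Slots" /proc/net/pf_ring/* shows them all at once.)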
>> >> >
>> >> > I am not quite clear on what this depth setting does. Can you
>> >> > explain it to me?
>> >>
>> >> Reassembly depth?
>> >>
>> >> >
>> >> > Thanks.
>> >> >
>> >> >> Date: Mon, 9 Jun 2014 09:28:13 -0700
>> >> >> From: cnelson at ucsd.edu
>> >> >> To: coolyasha at hotmail.com; petermanev at gmail.com;
>> >> >> oisf-users at lists.openinfosecfoundation.org
>> >> >> Subject: Re: [Oisf-users] Packet Loss
>> >> >>
>> >> >> Some additional tweaks to try:
>> >> >>
>> >> >> 1. Set all of your "closed" flow-timeouts to 0.
>> >> >>
>> >> >> 2. Set your stream.reassembly.depth to 8kb (see the sketch below).
>> >> >>
>> >> >> If that fixes your performance issues, you can try increasing the
>> >> >> stream depth until you find the limit for your hardware.
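>> >> >>
>> >> >> A minimal sketch of those two tweaks in suricata.yaml (only the
>> >> >> relevant keys are shown; 8kb is a starting point for tuning, not a
>> >> >> recommendation):
>> >> >>
>> >> >>     flow-timeouts:
>> >> >>       tcp:
>> >> >>         closed: 0
>> >> >>
>> >> >>     stream:
>> >> >>       reassembly:
>> >> >>         depth: 8kb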
>> >> >>
>> >> >> Keep in mind that Suricata isn't magic; if you are pushing monster
>> >> >> HTTP flows (like we are), you may need to make some concessions on
>> >> >> your current hardware. As I mentioned, one approach is to sample
>> >> >> traffic via BPF filters.
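>> >> >>
>> >> >> For example (a sketch - the interface name and network are
>> >> >> placeholders, and bpf-filter support depends on your capture
>> >> >> method/version): monitoring only half of a /8 roughly halves the
>> >> >> load while keeping both directions of each flow:
>> >> >>
>> >> >>     pfring:
>> >> >>       - interface: eth2
>> >> >>         bpf-filter: "net 10.0.0.0/9"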
>> >> >>
>> >> >> -Coop
>> >> >>
>> >> >> On 6/9/2014 8:44 AM, Yasha Zislin wrote:
>> >> >> > I've done some additional testing.
>> >> >> >
>> >> >> > I ran pfcount with 16 threads, using the same parameters as
>> >> >> > Suricata.
>> >> >> > Only one entry was instantiated under /proc/net/pf_ring, but 16
>> >> >> > threads showed up in the process list (top -H).
>> >> >> >
>> >> >> > I've been running it for an hour with 0 packet loss. PF_RING slot
>> >> >> > usage does not go above 200 (out of 688k total slots).
>> >> >> >
>> >> >> > So my packet loss is caused by Suricata and is not network/PF_RING
>> >> >> > related.
>> >> >> >
>> >> >> > Thanks.
>> >> >> >
>> >> >>
>> >> >> --
>> >> >> Cooper Nelson
>> >> >> Network Security Analyst
>> >> >> UCSD ACT Security Team
>> >> >> cnelson at ucsd.edu x41042
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Peter Manev
>>
>>
>>
>> --
>> Regards,
>> Peter Manev



-- 
Regards,
Peter Manev


