[Oisf-users] Configuration strategy for TCP segment pools/chunk pool
Peter Manev
petermanev at gmail.com
Tue May 27 20:09:00 UTC 2014
On Tue, May 27, 2014 at 4:30 AM, Darren Spruell <phatbuckett at gmail.com> wrote:
> On Sun, May 25, 2014 at 7:00 AM, Peter Manev <petermanev at gmail.com> wrote:
>> On Sun, May 25, 2014 at 11:26 AM, Darren Spruell <phatbuckett at gmail.com> wrote:
>>> Suricata 2.0 REL, Linux 3.10.40, AF_PACKET autofp runmode, 64 GB RAM.
>>>
>>> I'm limping through some Suricata tuning and dealing with high (66%!)
>>> rates of packet loss. I have a number of limits set fairly high and am
>>> looking for signs of what else may be contributing to packet drop. I'm
>>> currently wondering about this type of output:
>>>
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 4 had a peak
>>> use of 2041 segments, more than the prealloc setting of 256
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 16 had a
>>> peak use of 105439 segments, more than the prealloc setting of 9216
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 112 had a
>>> peak use of 396057 segments, more than the prealloc setting of 30720
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 248 had a
>>> peak use of 189218 segments, more than the prealloc setting of 16384
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 512 had a
>>> peak use of 506936 segments, more than the prealloc setting of 32768
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 768 had a
>>> peak use of 434310 segments, more than the prealloc setting of 49152
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 1448 had a
>>> peak use of 961419 segments, more than the prealloc setting of 131072
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment pool of size 65535 had a
>>> peak use of 89941 segments, more than the prealloc setting of 32768
>>> 25/5/2014 -- 00:36:29 - <Info> - TCP segment chunk pool had a peak use
>>> of 400440 chunks, more than the prealloc setting of 49152
>>>
>>> As can be seen, a number of the prealloc settings have been raised
>>> from the defaults. They were set based on the output of a previous
>>> run, with each preallocated pool sized slightly higher than the peak
>>> use reported at that time.
>>>
>>> I don't quite understand what my aim should be with respect to these
>>> settings. Is it useful to preallocate segment pool capacity to support
>>> the peak use figures a sensor deals with? Are these segment pool
>>> settings potentially important for performance tuning? Could
>>> suboptimal settings potentially affect packet drop on a sensor?
>>>
>>> Thanks!
>>>
>>
>> Have you tried workers runmode instead of autofp? (huge perf gain in
>> my experience)
>> How many rules are you using/loading?
>
> I've tried a range of rules but due to the packet loss I'm currently
> loading a small number of rules for testing:
>
> 25/5/2014 -- 04:02:48 - <Info> - 6 rule files processed. 203 rules
> successfully loaded, 0 rules failed
> 25/5/2014 -- 04:02:48 - <Info> - 203 signatures processed. 0 are
> IP-only rules, 0 are inspecting packet payload, 60 inspect application
> layer, 85 are decoder event only
>
> That is, these rules specifically:
>
> - decoder-events.rules
> - stream-events.rules
> - http-events.rules
> - smtp-events.rules
> - dns-events.rules
> - tls-events.rules
>
> My most recent run was using the workers runmode; I still saw about 50%
> packet drop. Other config options may not be set optimally.
>
> Here are a couple of logs for more information.
>
> # most recent config dump
> http://dpaste.com/33BV70B/
>
> # suricata.log (startup/shutdown)
> http://dpaste.com/2ZV5A8J/
>
Judging by the above two logs, you are using only 2 threads for
af_packet at ~800 Mbps.
You should try using more threads, for example 6 or 12.
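As a rough sketch (illustrative values, not a tested recommendation for
your box), that lives in the top-level runmode setting and the af-packet
section of suricata.yaml; the interface name, thread count and ring size
below are placeholders to adapt:

  # suricata.yaml (sketch) -- capture with more af_packet threads
  runmode: workers

  af-packet:
    - interface: eth2              # placeholder: your capture interface
      threads: 12                  # roughly one per core you can dedicate
      cluster-id: 99
      cluster-type: cluster_flow   # keep all packets of a flow on one thread
      defrag: yes
      use-mmap: yes
      ring-size: 200000            # per-thread ring; raise if kernel drops persist

In workers mode each capture thread runs decode, stream and detect
itself, which is usually where the gain over autofp comes from.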
> # stats.log (last write before shutdown)
> http://dpaste.com/0HBSGGQ/
>
> # keyword_perf.log - most recent run *before* rebuilding without
> profiling to measure effect
> http://dpaste.com/3Q37ZST/
>
> # suricata build info
> http://dpaste.com/18222HE/
>
> Peak 800Mbps, 130Kpps. Other system details here:
>
> https://lists.openinfosecfoundation.org/pipermail/oisf-users/2014-May/003667.html
>
> A couple of specific questions, aside from the larger one of "why does
> this not seem to be performing well?":
>
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment pool of size 4 had a peak
> use of 1565 segments, more than the prealloc setting of 256
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment pool of size 16 had a
> peak use of 108003 segments, more than the prealloc setting of 9216
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment pool of size 112 had a
> peak use of 410920 segments, more than the prealloc setting of 30720
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment pool of size 248 had a
> peak use of 206340 segments, more than the prealloc setting of 16384
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment pool of size 512 had a
> peak use of 350644 segments, more than the prealloc setting of 32768
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment pool of size 768 had a
> peak use of 266721 segments, more than the prealloc setting of 49152
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment pool of size 1448 had a
> peak use of 620075 segments, more than the prealloc setting of 131072
> 26/5/2014 -- 18:35:57 - <Info> - TCP segment chunk pool had a peak use
> of 446730 chunks, more than the prealloc setting of 49152
>
> Do these TCP segment pool sizes seem large? I'd wondered previously if
> I was overallocating, but the peak segment use is far larger than what
> is preallocated. Is this "normal"?
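For reference, those pool messages correspond to the segments list and
chunk-prealloc settings under stream.reassembly in suricata.yaml. The
pools can grow past their prealloc value at runtime (bounded by the
reassembly memcap), so the messages are informational; preallocating
closer to the observed peaks mainly avoids allocation churn while
traffic is flowing. A sketch of where the knobs live, with illustrative
values picked slightly above the peaks reported in your logs:

  # suricata.yaml (sketch) -- segment/chunk prealloc near observed peaks
  stream:
    reassembly:
      memcap: 8gb              # illustrative; must be able to back the pools below
      chunk-prealloc: 450000   # latest peak chunk use was ~446730
      segments:
        - size: 4
          prealloc: 2048
        - size: 16
          prealloc: 110000
        - size: 112
          prealloc: 415000
        - size: 248
          prealloc: 210000
        - size: 512
          prealloc: 360000
        - size: 768
          prealloc: 270000
        - size: 1448
          prealloc: 630000
        - size: 65535
          prealloc: 92000      # peak for this pool taken from the earlier run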
>
> With <1Gbps monitored traffic and <200Kpps throughput, would the
> larger buffers and enhanced queues/features of an 82599 chipset offer
> any advantage in our scenario? This is nowhere close to 10G and
> ifconfig shows no or extremely few errors/drops/overruns.
>
> --
> Darren Spruell
> phatbuckett at gmail.com
--
Regards,
Peter Manev