[Oisf-users] Tuning Suricata (2.0beta1) -- no rules and lots of packet loss
Tritium Cat
tritium.cat at gmail.com
Wed Aug 14 19:38:47 UTC 2013
Hello.
Yes good idea, I just read about that problem here:
https://lists.openinfosecfoundation.org/pipermail/oisf-users/2013-July/002724.html
I'll give 1.4.4 a try.
--TC
On Wed, Aug 14, 2013 at 11:43 AM, rmkml <rmkml at yahoo.fr> wrote:
> Hi Tritium and Cooper,
>
> Maybe it's related to the recent discussion on the list about the new
> DNS preprocessor causing drops; I don't know if it's currently possible
> to disable it.
> Maybe go back to v1.4.4 for comparison?
>
> Regards
> @Rmkml
>
>
>
> On Wed, 14 Aug 2013, Cooper F. Nelson wrote:
>
>> #Edit: I wrote the following before checking your suricata.yaml. Try
>> #not setting the "buffer-size:" directive under the af-packet
>> #configuration. Setting that seems to cause packet drops in AF_PACKET
>> #+ mmap mode.
>>
>> What does your traffic look like?  Do you have any high-volume single
>> flows, e.g. two hosts doing a sustained >500 Mbit/s transfer?
>>
>> Suricata has an inherent limitation: even on a tuned, high-performance
>> system it can only process so many packets per second, per thread.  So
>> if one or more flows are saturating a given core, the ring buffer will
>> eventually fill and packets will be dropped.
>>
>> You can see this behavior using 'top' (press '1' to see all cores).
>>
>> This is my production box. The cores that are 0% idle are at capacity
>> and will eventually drop packets if the active flows fill the ring buffer:
>>
>>> top - 17:36:00 up 22 days, 18:12,  6 users,  load average: 14.71, 14.38, 14.34
>>> Tasks: 211 total,   1 running, 210 sleeping,   0 stopped,   0 zombie
>>> %Cpu0  : 72.8 us,  1.3 sy,  0.0 ni, 14.6 id,  0.0 wa,  0.0 hi, 11.3 si,  0.0 st
>>> %Cpu1  : 89.7 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 10.3 si,  0.0 st
>>> %Cpu2  : 89.4 us,  0.3 sy,  0.0 ni,  1.0 id,  0.0 wa,  0.0 hi,  9.3 si,  0.0 st
>>> %Cpu3  : 88.4 us,  1.0 sy,  0.0 ni,  2.7 id,  0.0 wa,  0.0 hi,  8.0 si,  0.0 st
>>> %Cpu4  : 80.4 us,  6.3 sy,  0.0 ni,  5.6 id,  0.0 wa,  0.0 hi,  7.6 si,  0.0 st
>>> %Cpu5  : 80.7 us,  0.3 sy,  0.0 ni,  9.3 id,  0.0 wa,  0.0 hi,  9.6 si,  0.0 st
>>> %Cpu6  : 90.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 10.0 si,  0.0 st
>>> %Cpu7  : 91.4 us,  0.0 sy,  0.0 ni,  2.0 id,  0.0 wa,  0.0 hi,  6.6 si,  0.0 st
>>> %Cpu8  : 79.1 us,  1.0 sy,  0.0 ni, 11.3 id,  0.0 wa,  0.0 hi,  8.6 si,  0.0 st
>>> %Cpu9  : 67.8 us,  1.3 sy,  0.0 ni, 24.3 id,  0.0 wa,  0.0 hi,  6.6 si,  0.0 st
>>> %Cpu10 : 90.0 us,  0.0 sy,  0.0 ni,  0.3 id,  0.0 wa,  0.0 hi,  9.6 si,  0.0 st
>>> %Cpu11 : 90.7 us,  0.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  9.0 si,  0.0 st
>>> %Cpu12 : 85.4 us,  0.7 sy,  0.0 ni,  5.3 id,  0.0 wa,  0.0 hi,  8.6 si,  0.0 st
>>> %Cpu13 : 68.1 us,  0.3 sy,  0.0 ni, 23.3 id,  0.0 wa,  0.0 hi,  8.3 si,  0.0 st
>>> %Cpu14 : 91.4 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  8.6 si,  0.0 st
>>> %Cpu15 : 69.8 us,  2.0 sy,  0.0 ni, 20.9 id,  0.0 wa,  0.0 hi,  7.3 si,  0.0 st
>>> KiB Mem:  49457008 total, 49298704 used,   158304 free,   188764 buffers
>>> KiB Swap:        0 total,        0 used,        0 free, 24121808 cached
>>>
>>>  PID USER  PR NI    VIRT    RES    SHR S  %CPU  %MEM    TIME+ COMMAND
>>> 3991 root  20  0 22.606g 0.021t 0.013t S  1400 45.50  3352:06 /usr/bin/suric+
>>>
>>
>> There are some things you can do to address this.
>>
>> One, use AF_PACKET + mmap mode and set a large ring buffer:
>>
>>> ring-size: 524288
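For reference, a fuller af-packet section might look like the sketch below (the interface name and thread count are made up for illustration; buffer-size is deliberately left unset, per the note at the top of this mail):

```yaml
# suricata.yaml -- AF_PACKET + mmap sketch (illustrative values)
af-packet:
  - interface: eth2          # assumption: your capture NIC
    threads: 16
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes            # enable mmap mode
    ring-size: 524288        # large ring to absorb bursts
    # buffer-size: deliberately unset -- setting it has been seen to
    # cause packet drops in AF_PACKET + mmap mode
```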
>>
>> I've found that if you go much above this, Suricata will generate
>> errors about allocating memory (which it will eventually recover
>> from).  I believe this is an issue with the Linux kernel and not
>> Suricata.  Be careful not to run your box out of memory.
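As a rough sanity check on memory use, a ring of that size works out to about 800 MiB per interface if you assume roughly 1600 bytes per ring slot (the slot size is an assumption; the real tp_frame_size depends on snaplen and alignment):

```python
# Back-of-the-envelope memory estimate for an AF_PACKET mmap ring.
ring_size = 524288     # slots, as in the example above
frame_size = 1600      # assumed bytes per slot (depends on snaplen/alignment)

total_bytes = ring_size * frame_size
print(f"~{total_bytes / 2**20:.0f} MiB per interface")  # prints "~800 MiB per interface"
```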
>>
>> As mentioned, you should also set large receive queues via sysctl.conf:
>>
>>> net.core.netdev_max_backlog = 10000000
>>> net.core.rmem_default = 1073741824
>>> net.core.rmem_max = 1073741824
>>
>> You should also set short timeouts (e.g. 5 seconds) and lower the
>> stream depth; I have mine set to 2 MB.
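In suricata.yaml terms that corresponds to something like the following (the values shown are illustrative, not a copy of my exact config):

```yaml
# suricata.yaml -- short flow timeouts and a reduced reassembly depth
flow-timeouts:
  default:
    new: 5            # seconds before an unestablished flow expires
    established: 5
    closed: 5

stream:
  reassembly:
    depth: 2mb        # stop reassembling each flow after 2 MB
```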
>>
>> Another thing you can do: if you can identify the large, boring data
>> flows (backups or other bulk transfers), you can use BPF filters to
>> keep those packets from being processed at all.
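For example, to skip a hypothetical backup server (the 10.0.0.5 address and rsync port below are invented for illustration), a per-interface BPF filter can be set in the af-packet section:

```yaml
# suricata.yaml -- exclude a known bulk-transfer host before capture
af-packet:
  - interface: eth2
    bpf-filter: "not (host 10.0.0.5 and port 873)"   # drop rsync backup traffic
```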
>>
>> -Coop
>>
>> On 8/14/2013 10:26 AM, Tritium Cat wrote:
>>
>>>
>>> Where I'm confused most is why Suricata is dropping so many packets
>>> with no rules enabled.  The "AF-Packet" link below used this approach
>>> to find a stable point before adding rules.  I've tuned the Intel
>>> cards as recommended with setpci, ethtool, and ixgbe parameters.  The
>>> system has also been tuned with various sysctl tweaks to match others'
>>> recommendations (_rmem, _wmem, backlog, etc...) as well as
>>> set_irq_affinity to balance the interrupts among all CPU cores.  (see
>>> attached files)
>>>
>>> Any help is much appreciated... thanks !
>>>
>>> --TC
>>>
>>>
>>
>> --
>> Cooper Nelson
>> Network Security Analyst
>> UCSD ACT Security Team
>> cnelson at ucsd.edu x41042
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openinfosecfoundation.org/pipermail/oisf-users/attachments/20130814/6209b5d1/attachment-0002.html>