[Oisf-users] Need help with Suricata conf
cnelson at ucsd.edu
Fri Jan 17 19:54:53 UTC 2020
I've done a lot of experimentation with this over the years. I've found that adjusting the ring size has the biggest effect on reducing dropped packets: on my deployment, doubling the ring size consistently cut packet drops by about 50%. I've gone up to a million packets without issue, but if you set it too high, Suricata will either crash or get killed with an OOM error.
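For reference, ring-size is set per interface in the af-packet section of suricata.yaml. The snippet below is an illustrative sketch (interface name and values are assumptions, not recommendations):

```yaml
af-packet:
  - interface: eth0          # illustrative interface name
    cluster-id: 99
    cluster-type: cluster_flow
    # Number of packets each per-thread ring can hold. Larger rings
    # absorb traffic bursts and reduce kernel drops, but every slot
    # costs memory, so oversizing risks a crash or the OOM killer.
    ring-size: 200000
```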
Block size should be an even multiple of your L3 cache size; 1 MB works fine for most deployments. You can set the block timeout to zero to process packets as quickly as possible, at the expense of overall performance. I would probably only do this in an inline or tiny deployment.
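As a sketch, the tpacket-v3 block settings live alongside ring-size in the af-packet section; on Linux you can check your L3 cache size with `getconf LEVEL3_CACHE_SIZE`. Values below are illustrative assumptions:

```yaml
af-packet:
  - interface: eth0          # illustrative interface name
    tpacket-v3: yes
    # Bytes per block; 1 MB (sized against your L3 cache, per the
    # advice above) works fine for most deployments.
    block-size: 1048576
    # Milliseconds before the kernel hands over a partially filled
    # block; 0 delivers packets as fast as possible at the cost of
    # more per-block overhead.
    block-timeout: 10
```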
I don't think buffer-size is used in tpacket-v3 mode, but I may be wrong about that. My personal rule: if I don't understand a setting in a conf file, leave it at the default.
I unfortunately missed SuriCon this year; if I had made it, something I wanted to discuss at the brainstorming session was the possibility of adding an 'auto' setting to allow for dynamic performance tuning. Ideally I would like to see Suricata dynamically grow the ring buffer per worker thread instead of dropping packets, up to some configurable maximum. The reason is that the trivial IP 'sd' hashing load balancer produces an 'elephant stampede', where all flows from a host pair are directed to the same worker thread. So it may make sense to start with a ring size of 100k but allow individual threads to scale into the millions if necessary. I do know that Peter Manev observed that the 'sd' fix for the TCP packet wrong-thread issue resulted in some severe packet drops on a few threads due to this effect.
If this isn't an option, it would be nice if Suricata printed 'hints' about performance settings when it exited.
-------- Original message --------
From: Daniel Perner <daniel.perner.et at gmail.com>
Date: 1/16/20 3:36 AM (GMT-08:00)
1) ring-size: <number of packets> - the suricata.yaml comment says the ring size is computed from max_pending_packets and the number of threads, and that you can set it manually, in packets, via this value. As I understand it, this value defines a per-thread cache size when running in workers mode, but in autofp mode there may be different numbers of packet-capture and packet-processing threads. Which type of thread does ring-size refer to in autofp mode? And when this value is not set, what is the default?
2) tpacket_v3 has properties such as block-size and block-timeout, which look a bit complicated. What should I take into consideration when trying to tune those values?