[Oisf-users] Tuning Suricata Inline IPS performance
Hariharan Thantry
thantry at gmail.com
Mon Nov 21 08:00:48 UTC 2011
Hi folks,
I'm trying to squeeze out the maximum performance (throughput) from a
Suricata inline IPS forwarding machine (configured as a gateway). My setup
(for testing) is the following (all machines running stock 11.10 Ubuntu,
with extra packages as necessary):
Machine A (Client): Regular desktop with one dual-ported 10G 82599 NIC
Machine B (Bridge, hosting Suricata): An entry-level Xeon with two dual-ported
10G 82599 NICs
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131725
Machine C (Server): Regular desktop with one dual-ported 10G 82599 NIC
The forwarding performance of the bridge is ~9.5 Gbps (almost line rate) with a
single 10G connection active, and ~13 Gbps with both 10G connections active.
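In case the exact forwarding setup matters: Machine B is doing straightforward IP
forwarding between the two 10G segments, along the lines of the sketch below
(interface names and addresses are placeholders, not my exact config):

# on Machine B: enable forwarding between the two 10G ports
sysctl -w net.ipv4.ip_forward=1
# eth1 faces Machine A, eth2 faces Machine C (addresses made up)
ip addr add 10.0.1.1/24 dev eth1
ip addr add 10.0.2.1/24 dev eth2
# Machines A and C route each other's subnet via Machine B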
When I turn on Suricata (latest 1.1 release) with the defaults, throughput drops
to between 350 kbps and 1 Mbps (using the Emerging Threats ruleset). I have only
a single iptables rule that sends all forwarded packets to the NFQUEUE target,
and I have enabled nfqueue with queue balancing turned on. I did reach the
higher end of that range (~1 Mbps) after increasing the number of packets
processed simultaneously to ~4K.
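For concreteness, the relevant pieces look roughly like this. The queue count and
the 4K value are from memory, so treat the exact numbers as approximate; I believe
the "simultaneous packets" setting I changed is max-pending-packets in
suricata.yaml:

# iptables: push all forwarded traffic to NFQUEUE, balanced across queues 0-3
# (--queue-balance needs a reasonably recent iptables and kernel)
iptables -I FORWARD -j NFQUEUE --queue-balance 0:3

# run Suricata against the same queues
suricata -c /etc/suricata/suricata.yaml -q 0 -q 1 -q 2 -q 3

# suricata.yaml: packets allowed to be processed simultaneously, bumped to ~4K
max-pending-packets: 4096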
A few questions:
(a) Can one use a different lower-level packet capture infrastructure instead of
NFQUEUE (for example PF_RING, possibly with TNAPI)?
(b) Is it possible to avoid copying the packet to userspace even when using
NFQUEUE (the nfqueue library seems to allow controlling this; see the small
libnetfilter_queue sketch after these questions), and thus improve speeds?
(c) Are there other tunable knobs (either in Suricata or in lower-level TCP and
kernel parameters) that I could use to try to improve performance?
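To make (b) concrete, here is a minimal libnetfilter_queue sketch (my own toy
code, not Suricata's) showing the per-queue copy mode I am referring to:
NFQNL_COPY_PACKET copies the full payload into userspace, while NFQNL_COPY_META
copies only metadata. Error handling is omitted.

#include <stdint.h>
#include <arpa/inet.h>        /* ntohl */
#include <sys/socket.h>       /* recv */
#include <linux/netfilter.h>  /* NF_ACCEPT */
#include <libnetfilter_queue/libnetfilter_queue.h>

/* verdict callback: a real IPS would inspect the payload here */
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    uint32_t id = ph ? ntohl(ph->packet_id) : 0;
    return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}

int main(void)
{
    struct nfq_handle   *h  = nfq_open();
    struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);

    /* this is the knob I mean: mode and range control how much of each
       packet is copied up to userspace (COPY_PACKET = full payload) */
    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

    char buf[65536];
    int fd = nfq_fd(h);
    for (;;) {
        int rv = recv(fd, buf, sizeof(buf), 0);
        if (rv >= 0)
            nfq_handle_packet(h, buf, rv);
    }
    /* not reached */
    nfq_destroy_queue(qh);
    nfq_close(h);
    return 0;
}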
Thanks,
Hari