Hi folks,

I'm trying to squeeze the maximum performance (throughput) out of a Suricata inline IPS forwarding machine (configured as a gateway). My test setup is the following (all machines running stock Ubuntu 11.10, with extra packages as necessary):
Machine A (Client): regular desktop with one dual-ported 10G 82599 NIC
Machine B (Bridge, hosting Suricata): an entry-level Xeon with two dual-ported 10G 82599 NICs (http://www.newegg.com/Product/Product.aspx?Item=N82E16813131725)
Machine C (Server): regular desktop with one dual-ported 10G 82599 NIC

The forwarding performance of the bridge is ~9.5 Gbps (almost line rate) with a single 10G connection active, and ~13 Gbps with both 10G connections active.
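(By "gateway" I mean Machine B simply forwards IP traffic between its NICs, with A and C routing through it; nothing fancier than

    sysctl -w net.ipv4.ip_forward=1

plus the single iptables rule described below.)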
When I turn on Suricata (the latest 1.1 release) with the defaults, throughput drops to between 350 kbps and 1 Mbps (using the Emerging Threats ruleset). I have only a single iptables rule, which sends all forwarded packets to the NFQUEUE target, and I have enabled queue balancing across multiple queues. I did see the higher end of that range (~1 Mbps) once I increased the number of packets Suricata processes simultaneously to ~4K.
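For concreteness, the rule and the config knob I'm referring to look roughly like this (the 4-queue split is just what I happened to pick, and I believe the suricata.yaml key is max-pending-packets; correct me if I have the name wrong):

    # send all forwarded traffic to Suricata, balanced over 4 queues
    iptables -I FORWARD -j NFQUEUE --queue-balance 0:3

    # suricata.yaml: packets allowed in flight through the engine at once
    max-pending-packets: 4096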
A few questions:

(a) Can one use some other lower-level packet capture infrastructure instead of NFQUEUE (PF_RING with TNAPI, for example)?
(b) Is it possible at all to avoid copying the packet to userspace even when using NFQUEUE (the libnetfilter_queue API seems to allow tuning this; see the sketch after the questions), and thus improve speeds?
(c) Are there other tunable knobs (either in Suricata, or lower-level TCP parameters) that I could use to try to improve performance?
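Regarding (b), this is the knob I mean: a minimal libnetfilter_queue sketch (queue number 0 is arbitrary; error handling and the read loop are elided):

    /* build: gcc nfq_mode.c -lnetfilter_queue */
    #include <arpa/inet.h>
    #include <linux/netfilter.h>
    #include <libnetfilter_queue/libnetfilter_queue.h>

    static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                  struct nfq_data *nfa, void *data)
    {
        /* issuing a verdict needs only the packet id from the metadata */
        struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
        return nfq_set_verdict(qh, ntohl(ph->packet_id), NF_ACCEPT, 0, NULL);
    }

    int main(void)
    {
        struct nfq_handle   *h  = nfq_open();
        struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);

        /* NFQNL_COPY_PACKET copies up to 'range' bytes of payload to
         * userspace for every packet; NFQNL_COPY_META copies metadata
         * only. An IPS obviously needs the payload, so my question is
         * whether the per-packet copy itself can be avoided somehow. */
        nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

        /* ... recv() + nfq_handle_packet() loop would go here ... */

        nfq_destroy_queue(qh);
        nfq_close(h);
        return 0;
    }

Thanks,
Hari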