Tommy,

Sorry it took so long to get back...

You must be using Nehalem or Westmere processors to see that kind of
throughput using ipqueue; there's a lot of packet copying going on under
the hood, since ipqueue has been a wrapper around nfqueue since
linux-2.6.14. With real-world traffic (i.e., university traffic) we
usually see around 300 Mbits/sec for each copy of snort, with all the
preprocessing and matching it has to do (about 7000 rules) on a 2.4 GHz
Core2 processor.
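If you want to see what the queue itself is doing, the kernel exposes
per-queue counters; this is from memory of recent 2.6 kernels, so check
the field layout on your own box:

    # fields: queue-num peer-portid queue-total copy-mode copy-range
    #         queue-dropped user-dropped id-sequence
    cat /proc/net/netfilter/nfnetlink_queue

A climbing queue-dropped count there generally means userspace
(snort/suricata) isn't draining the queue fast enough.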
What you're describing makes perfect sense. Every preprocessor you link
into snort, either statically or dynamically, has to look at every
packet, unless an earlier preprocessor turns off a later one. All of
them are on by default.
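The quickest way to see what each one costs you is to comment them out
of snort.conf one at a time and rerun your test. Purely as an
illustration (these option strings are typical of a stock config, not
necessarily yours):

    # snort.conf -- disable HTTP normalization for one test run
    # preprocessor http_inspect: global iis_unicode_map unicode.map 1252
    # preprocessor http_inspect_server: server default profile all ports { 80 }
    preprocessor frag3_global: max_frags 65536
    preprocessor stream5_global: track_tcp yes, track_udp yes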
Suricata makes different assumptions, does inspection somewhat
differently, and does reassembly on all TCP, so given that snort gets
900+ Mbits/sec naked and 700+ Mbits/sec through the preprocessor chain,
I'd say it's actually pretty good that Suricata is in the same general
area.
When you get to running Suricata or snort (or both) in-line on a real
network, you should probably look at splitting the traffic up across
nfqueues and tailoring the rules and preprocessor config to each type of
traffic, to minimize the amount of inspection done while still
accomplishing your detection goals.
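Something like this, just to illustrate the idea (queue numbers and
ports are arbitrary; one snort/suricata instance then binds to each
queue, each with a config trimmed for that traffic):

    # web traffic to queue 0, everything else to queue 1
    iptables -A FORWARD -p tcp --dport 80 -j NFQUEUE --queue-num 0
    iptables -A FORWARD -p tcp --sport 80 -j NFQUEUE --queue-num 0
    iptables -A FORWARD -j NFQUEUE --queue-num 1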
Possibly in response to snort_inline's and suricata's capabilities,
snort-2.9.x can now run on nfqueues (and thus as multiple instances) via
the daq package; I'd move to that if you can. Suricata is already there,
though its inline mode isn't optimized yet; I think Victor and crew are
working on it.
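For reference, the invocations look roughly like this -- double-check
the nfq DAQ's variable names against the README.daq that ships with your
daq version:

    # snort-2.9.x inline on nfqueue 0 via the nfq DAQ module
    snort -Q --daq nfq --daq-var queue=0 -c /etc/snort/snort.conf

    # suricata reading multiple queues, one per instance/worker
    suricata -c /etc/suricata/suricata.yaml -q 0 -q 1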
Hope this helps,

Dave

On Wed, Nov 10, 2010 at 10:20 AM, Jen-Cheng(Tommy) Huang
<thnbp24@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">Hi Dave,<br><br>Thanks for your response.<br>I was using snort 2.8 which did not use DAQ and I compiled it with --enable-inline.<br>
And the inline mode is using libipq. <br>I rerun the test yesterday and I found that preprocessors in snort could affect the throughput.<br>
In normal snort inspection procedure, it has decode -> preprocess -> inspect phases.<br>When I used packet logging mode in snort where no preprocessors is used and packet were only decoded, snort was able to run up to line speed (941 Mbps). <br>
The second case is that when I used IPS mode without any rules loaded, the default preprocessors, such as stream5, frag3, http those were still working and that made the throughput dropped down to 7xx Mbps.<br>Does it make sense?<br>
<br>Thanks,<br><font color="#888888">Tommy</font><div><div></div><div class="h5"><br><br><div class="gmail_quote">On Tue, Nov 9, 2010 at 8:08 PM, Dave Remien <span dir="ltr"><<a href="mailto:dave.remien@gmail.com" target="_blank">dave.remien@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left:1px solid rgb(204, 204, 204);margin:0pt 0pt 0pt 0.8ex;padding-left:1ex">
>> Tommy,
>>
>> Could you describe the snort configuration you used a little more?
>> Snort compiled with --enable-inline-init-failopen and the DAQ stuff?
>> Or some other method? If using nfq/ipq, what's your iptables config?
>> Are you perchance using jumbo packets to test with?
>>
>> I ask because I've never seen a single nfqueue instance (or ipqueue,
>> but that's really a wrapper around nfqueue) forward packets at that
>> rate on any x86* platform. Well, so far.
>>
>> Cheers,
>>
>> Dave
>>
>> On Tue, Nov 9, 2010 at 11:42 AM, Jen-Cheng(Tommy) Huang
>> <thnbp24@gmail.com> wrote:
>>> Hi Victor,
>>> Thanks for your suggestion.
>>> I tried a couple of values for max-pending-packets, but the
>>> throughput was at most 7xx Mbps. I tried very large values, such as
>>> 2000, 4000, and 10000, but they made little difference; it was all
>>> around 7xx Mbps. I'm sure I used the right config: when I changed
>>> the value to 1, the throughput dropped to 1xx Mbps. Any other
>>> setting I should change? BTW, the command I used was
>>> "suricata -c /etc/suricata/suricata.yaml -q 0", and I did not use
>>> privilege dropping since I ran it as root. No rules were loaded.
>>> Thanks.
>>>
>>> Tommy
>>>
>>> On Tue, Nov 9, 2010 at 4:21 AM, Victor Julien <victor@inliniac.net>
>>> wrote:
>>>> Jen-Cheng(Tommy) Huang wrote:
>>>> > Hi,
>>>> >
>>>> > I just tested suricata inline mode without the pf_ring feature.
>>>> > My NIC is an intel 1 Gbps NIC.
>>>> > I used netperf TCP_MAERTS as my benchmark.
>>>> > With all rules removed, I expected suricata to run up to 941 Mbps,
>>>> > which is what I observed with snort.
>>>> > However, I could only see around 700 Mbps. And with the default
>>>> > rule set I downloaded from emergingthreats.net, the throughput
>>>> > became 4xx Mbps. The strange thing was that the CPUs (intel core
>>>> > i7) were not saturated, so I assumed they were not the bottleneck.
>>>> > But why couldn't it saturate the bandwidth?
>>>> > Any idea?
>>>>
>>>> Tommy, you could try to increase the max-pending-packets setting in
>>>> suricata.yaml. It defaults to 50. The really high speed setups I've
>>>> seen usually require a setting more in the range of 2000 to 4000.
>>>> It will cost quite a bit of extra memory though.
>>>>
>>>> Let me know if that changes anything.
>>>>
>>>> Cheers,
>>>> Victor
>>>>
>>>> --
>>>> ---------------------------------------------
>>>> Victor Julien
>>>> http://www.inliniac.net/
>>>> PGP: http://www.inliniac.net/victorjulien.asc
>>>> ---------------------------------------------
--
"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer@tybalt.caltech.edu)