Hi Dave,

Thanks for your response. I was using Snort 2.8, which does not use DAQ; I compiled it with --enable-inline, so inline mode goes through libipq. I reran the test yesterday and found that the preprocessors in Snort can affect throughput.

In Snort's normal inspection procedure there are decode -> preprocess -> inspect phases. When I used packet-logger mode, where no preprocessors run and packets are only decoded, Snort was able to run at line speed (941 Mbps).

The second case was IPS mode without any rules loaded: the default preprocessors, such as stream5, frag3, and http_inspect, were still working, and that dropped the throughput to 7xx Mbps. Does that make sense?
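For concreteness, the two modes were invoked roughly as follows. This is a sketch: the interface name, log directory, and config path are placeholders, not the actual test setup.

    # Packet-logger mode: packets are decoded and logged, no preprocessors run.
    snort -i eth0 -b -l /var/log/snort

    # Inline IPS mode (Snort 2.8 built with --enable-inline, fed via ip_queue):
    modprobe ip_queue
    iptables -A FORWARD -j QUEUE        # divert forwarded packets to ip_queue
    snort -Q -c /etc/snort/snort.conf   # -Q enables inline operation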
Thanks,
Tommy

On Tue, Nov 9, 2010 at 8:08 PM, Dave Remien <dave.remien@gmail.com> wrote:
Tommy,

Could you describe the snort configuration you used a little more? Snort compiled with --enable-inline-init-failopen and the DAQ stuff, or some other method? If using nfq/ipq, what's your iptables config? Are you perchance using jumbo packets to test with?

I ask because I've never seen a single nfqueue instance (or ipqueue, but that's really a wrapper around nfqueue) be able to forward packets at that rate on any x86* platform. Well, so far.

Cheers,

Dave

On Tue, Nov 9, 2010 at 11:42 AM, Jen-Cheng(Tommy) Huang <thnbp24@gmail.com> wrote:
Hi Victor,

Thanks for your suggestion. I have tried a couple of values for max-pending-packets, but the throughput was at most 7xx Mbps. I tried very large values, such as 2000, 4000, and 10000, but they did not make much difference; it was all around 7xx Mbps. I am sure that I used the right config: when I changed the value to 1, the throughput dropped to 1xx Mbps. Is there any other setting I should change? BTW, the command I used was "suricata -c /etc/suricata/suricata.yaml -q 0", and I did not use the privilege-dropping package since I ran it as root. No rules were loaded.
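For reference, "-q 0" attaches Suricata to NFQUEUE queue number 0, which presumes an iptables rule along these lines; the chain and placement here are assumptions, since the actual rule isn't shown.

    # Illustrative rule pairing with "suricata -q 0":
    # send forwarded packets to NFQUEUE queue 0 for the IPS to verdict.
    iptables -A FORWARD -j NFQUEUE --queue-num 0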
Thanks.

Tommy

On Tue, Nov 9, 2010 at 4:21 AM, Victor Julien <victor@inliniac.net> wrote:
Jen-Cheng(Tommy) Huang wrote:
> Hi,
>
> I just tested suricata inline mode without the pf_ring feature.
> My NIC is an Intel 1 Gbps NIC.
> I used netperf TCP_MAERTS as my benchmark.
> When I removed all rules, I supposed suricata should run up to 941 Mbps,
> which is what I observed with snort.
> However, I could only see around 700 Mbps. And with the default rule set,
> which I downloaded from http://emergingthreats.net, the throughput became
> 4xx Mbps. The strange thing was that the CPUs were not saturated (Intel
> Core i7), so I supposed the CPUs were not the bottleneck. Why couldn't it
> saturate the bandwidth?
> Any idea?

Tommy, you could try increasing the max-pending-packets setting in
suricata.yaml. It defaults to 50. The really high-speed setups I've seen
usually require a setting more in the range of 2000 to 4000. It will
cost quite a bit of extra memory, though.
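In suricata.yaml that is a one-line change, e.g. (a sketch using the low
end of the range mentioned above):

    # suricata.yaml -- maximum number of packets processed simultaneously;
    # higher values trade memory for throughput (default: 50).
    max-pending-packets: 2000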

Let me know if that changes anything.

Cheers,
Victor

--
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------
--
"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer@tybalt.caltech.edu)