[Oisf-devel] Cannot saturate bandwidth even with zero rules

Dave Remien dave.remien at gmail.com
Wed Nov 10 01:08:07 UTC 2010


Tommy,

Could you describe the Snort configuration you used in a little more detail?
Snort compiled with --enable-inline-init-failopen and the DAQ stuff? Or some
other method? If using nfq/ipq, what's your iptables config? Are you perchance
using jumbo packets to test with?

I ask because I've never seen a single nfqueue instance (or ipqueue, but
that's really a wrapper around nfqueue) be able to forward packets at that
rate on any x86* platform. Well, so far.
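
(For anyone comparing setups: a minimal iptables configuration for handing
traffic to NFQUEUE queue 0, the queue that Suricata's "-q 0" binds to, would
look something like

    # divert forwarded traffic to userspace queue 0
    iptables -I FORWARD -j NFQUEUE --queue-num 0

or, for traffic that terminates on the test box itself,

    iptables -I INPUT -j NFQUEUE --queue-num 0
    iptables -I OUTPUT -j NFQUEUE --queue-num 0

The exact rules depend on the test topology, so treat this as a sketch rather
than a guess at what anyone here is actually running.)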

Cheers,

Dave


On Tue, Nov 9, 2010 at 11:42 AM, Jen-Cheng(Tommy) Huang
<thnbp24 at gmail.com> wrote:

> Hi Victor,
> Thanks for your suggestion.
> I have tried a couple of values for max-pending-packets, but the throughput
> was at most 7xx Mbps. I've tried very large values, such as 2000, 4000, or
> 10000, but they did not make much difference; it stayed around 7xx Mbps.
> I am sure that I used the right config: when I changed the value to 1, the
> throughput dropped to 1xx Mbps. Any other setting I should change? BTW, the
> command I used was "suricata -c /etc/suricata/suricata.yaml -q 0", and I did
> not use privilege dropping since I ran it as root. No rules were loaded.
> Thanks.
>
> Tommy
>
>
>
> On Tue, Nov 9, 2010 at 4:21 AM, Victor Julien <victor at inliniac.net> wrote:
>
>> Jen-Cheng(Tommy) Huang wrote:
>> > Hi,
>> >
>> > I just tested Suricata's inline mode without the PF_RING feature.
>> > My NIC is an Intel 1 Gbps NIC.
>> > I used netperf TCP_MAERTS as my benchmark.
>> > With all rules removed, I expected Suricata to reach up to 941 Mbps,
>> > which is what I observed with Snort. However, I could only see around
>> > 700 Mbps. And with the default rule set downloaded from
>> > emergingthreats.net, the throughput dropped to 4xx Mbps. The strange
>> > thing was that none of the CPUs were saturated (Intel Core i7), so I
>> > assumed the CPUs were not the bottleneck. But why couldn't it saturate
>> > the bandwidth?
>> > Any idea?
>>
>> Tommy, you could try to increase the max-pending-packets setting in
>> suricata.yaml. It defaults to 50. The really high speed setups I've seen
>> usually require a setting more in the range of 2000 to 4000. It will
>> cost quite a bit of extra memory though.
>>
>> Let me know if that changes anything.
>>
>> Cheers,
>> Victor
>>
>> --
>> ---------------------------------------------
>> Victor Julien
>> http://www.inliniac.net/
>> PGP: http://www.inliniac.net/victorjulien.asc
>> ---------------------------------------------
>>
>>
>
>
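
For reference, the option Victor mentions is a top-level setting in
suricata.yaml. A minimal sketch, with an illustrative value rather than a
recommendation:

    # suricata.yaml: default is 50; high-throughput setups typically need
    # far more, at the cost of extra memory per pending packet
    max-pending-packets: 2048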


-- 
"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer at tybalt.caltech.edu)

