[Oisf-devel] Cannot saturate bandwidth even with zero rules

Dave Remien dave.remien at gmail.com
Wed Nov 17 01:02:24 UTC 2010


Tommy,

Sorry it took so long to get back...

You must be using Nehalem or Westmere processors to see that kind of
throughput using ipqueue; there's a lot of packet copying that goes on
under the hood, since ipqueue has been a wrapper around nfqueue since
linux-2.6.14. With real-world traffic (i.e., a university network) we
usually see around 300 Mbits/sec for each copy of snort, with all the
preprocessing and matching it takes (about 7000 rules) on a 2.4GHz
Core2 processor.
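
To make the copy overhead concrete: the skeleton of every ipq/nfq-based
inspector looks roughly like the sketch below (against
libnetfilter_queue; the queue number and minimal error handling are
illustrative, not a production loop). Every packet makes a
kernel->user->kernel round trip through code like this before it can be
forwarded.

    /* build: gcc -o nfq-sketch nfq-sketch.c -lnetfilter_queue */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/netfilter.h>   /* NF_ACCEPT */
    #include <libnetfilter_queue/libnetfilter_queue.h>

    /* Called once per queued packet; by this point the kernel has
     * already copied the whole packet into this process over netlink. */
    static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                  struct nfq_data *nfa, void *data)
    {
        struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
        unsigned char *payload;
        int len = nfq_get_payload(nfa, &payload);

        /* ... decoding/inspection of payload[0..len) would go here ... */
        printf("packet id %u, %d bytes\n", ntohl(ph->packet_id), len);

        /* a second netlink trip hands the verdict back to the kernel */
        return nfq_set_verdict(qh, ntohl(ph->packet_id), NF_ACCEPT, 0, NULL);
    }

    int main(void)
    {
        struct nfq_handle *h = nfq_open();
        struct nfq_q_handle *qh;
        char buf[65536];
        int fd, rv;

        nfq_unbind_pf(h, AF_INET);
        nfq_bind_pf(h, AF_INET);
        qh = nfq_create_queue(h, 0, &cb, NULL);      /* queue number 0 */
        /* ask the kernel to copy each packet in full to userspace */
        nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

        fd = nfq_fd(h);
        while ((rv = recv(fd, buf, sizeof(buf), 0)) > 0)
            nfq_handle_packet(h, buf, rv);           /* dispatches to cb() */

        nfq_destroy_queue(qh);
        nfq_close(h);
        return 0;
    }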

What you're describing makes perfect sense. Every preprocessor you link into
snort, either statically or dynamically, has to look at every packet, unless
a preprocessor turns off a later preprocessor. All are on by default.
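
From memory, the relevant part of a stock snort.conf looks something
like the following (the exact parameters here are illustrative); each
preprocessor line is active unless you comment it out:

    preprocessor frag3_global: max_frags 65536
    preprocessor frag3_engine: policy first detect_anomalies
    preprocessor stream5_global: track_tcp yes, track_udp yes
    preprocessor http_inspect: global iis_unicode_map unicode.map 1252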

Suricata makes different assumptions, does inspection somewhat
differently, and does reassembly on all TCP traffic. So given that
snort gets 900+ Mbits/sec naked and 700+ Mbits/sec through the
preprocessor chain, it's actually pretty good that Suricata is in the
same general area.

When you get to running either Suricata or snort (or both) in-line on a
real network, you should probably look at splitting the traffic up with
nfqueues and tailoring the rules and preprocessor config for each type
of traffic, to minimize the amount of inspection done while still
accomplishing your detection goals.
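
For instance, something like this (a purely illustrative cut; the real
split would follow your actual traffic mix):

    # web traffic to queue 0, everything else to queue 1
    iptables -A FORWARD -p tcp --dport 80 -j NFQUEUE --queue-num 0
    iptables -A FORWARD -j NFQUEUE --queue-num 1

and then run one inspection instance per queue, each configured for
just the traffic it will see.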

Possibly in response to snort_inline's and suricata's capabilities,
snort-2.9.x can now run on nfqueues (and thus as multiple instances)
with the daq package; I'd move to it if you can. Suricata is already
there, but its inline mode isn't optimized yet; I think Victor and crew
are working on it.
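
If I have the daq options right, the per-queue invocations would look
something like this (paths, config names, and queue numbers are just
placeholders):

    # one snort 2.9.x instance per nfqueue, via the nfq DAQ module
    snort -Q --daq nfq --daq-var queue=0 -c /etc/snort/snort-web.conf
    snort -Q --daq nfq --daq-var queue=1 -c /etc/snort/snort-rest.conf

    # the Suricata equivalents
    suricata -c /etc/suricata/suricata-web.yaml -q 0
    suricata -c /etc/suricata/suricata-rest.yaml -q 1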

Hope this helps,

Dave

On Wed, Nov 10, 2010 at 10:20 AM, Jen-Cheng(Tommy) Huang
<thnbp24 at gmail.com> wrote:

> Hi Dave,
>
> Thanks for your response.
> I was using snort 2.8, which did not use DAQ, and I compiled it with
> --enable-inline, so the inline mode uses libipq.
> I reran the test yesterday and found that snort's preprocessors can
> affect the throughput.
> Snort's normal inspection procedure has decode -> preprocess ->
> inspect phases.
> When I used packet logging mode, where no preprocessors are used and
> packets are only decoded, snort was able to run at line speed (941 Mbps).
>
> The second case: when I used IPS mode without any rules loaded, the
> default preprocessors (such as stream5, frag3, and http) were still
> running, and that made the throughput drop to 7xx Mbps.
> Does that make sense?
>
> Thanks,
> Tommy
>
>
> On Tue, Nov 9, 2010 at 8:08 PM, Dave Remien <dave.remien at gmail.com> wrote:
>
>> Tommy,
>>
>> Could you describe the snort configuration you used a little more? Snort
>> compiled with --enable-inline-init-failopen and the DAQ stuff? Or some other
>> method? If using nfq/ipq, what's your iptables config? Are you perchance
>> using jumbo packets to test with?
>>
>> I ask because I've never seen a single nfqueue instance (or ipqueue, but
>> that's really a wrapper around nfqueue) be able to forward packets at that
>> rate on any x86* platform. Well, so far.
>>
>> Cheers,
>>
>> Dave
>>
>>
>> On Tue, Nov 9, 2010 at 11:42 AM, Jen-Cheng(Tommy) Huang <
>> thnbp24 at gmail.com> wrote:
>>
>>> Hi Victor,
>>> Thanks for your suggestion.
>>> I have tried a couple of values for max-pending-packets, but the
>>> throughput was at most 7xx Mbps. I've tried very large values, such
>>> as 2000, 4000, or 10000, but they did not make much difference; it
>>> was all around 7xx Mbps. I am sure that I used the right config:
>>> when I changed the value to 1, the throughput dropped to 1xx Mbps.
>>> Any other setting I should change? BTW, the command that I used was
>>> "suricata -c /etc/suricata/suricata.yaml -q 0". And I did not use
>>> the privilege-dropping package, since I ran it as root. No rules
>>> were loaded.
>>> Thanks.
>>>
>>> Tommy
>>>
>>>
>>>
>>> On Tue, Nov 9, 2010 at 4:21 AM, Victor Julien <victor at inliniac.net> wrote:
>>>
>>>> Jen-Cheng(Tommy) Huang wrote:
>>>> > Hi,
>>>> >
>>>> > I just tested suricata inline mode without the pf_ring feature.
>>>> > My NIC is an Intel 1 Gbps NIC.
>>>> > I used netperf TCP_MAERTS as my benchmark.
>>>> > When I removed all rules, I supposed suricata should run up to
>>>> > 941 Mbps, which was what I observed in snort.
>>>> > However, I could only see around 700 Mbps. And with the default
>>>> > rule set, which I downloaded from emergingthreats.net
>>>> > <http://emergingthreats.net/>, the throughput became 4xx Mbps. The
>>>> > strange thing was that the CPUs were not saturated (Intel Core i7).
>>>> > Thus, I supposed the CPUs were not the bottleneck. But why couldn't
>>>> > it saturate the bandwidth?
>>>> > Any idea?
>>>>
>>>> Tommy, you could try to increase the max-pending-packets setting in
>>>> suricata.yaml. It defaults to 50. The really high speed setups I've seen
>>>> usually require a setting more in the range of 2000 to 4000. It will
>>>> cost quite a bit of extra memory though.
>>>>
>>>> Let me know if that changes anything.
>>>>
>>>> Cheers,
>>>> Victor
>>>>
>>>> --
>>>> ---------------------------------------------
>>>> Victor Julien
>>>> http://www.inliniac.net/
>>>> PGP: http://www.inliniac.net/victorjulien.asc
>>>> ---------------------------------------------
>>>>
>>>>
>>>
>>> _______________________________________________
>>> Oisf-devel mailing list
>>> Oisf-devel at openinfosecfoundation.org
>>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-devel
>>>
>>>
>>
>>
>> --
>> "Of course, someone who knows more about this will correct me if I'm
>> wrong, and someone who knows less will correct me if I'm right."
>> David Palmer (palmer at tybalt.caltech.edu)
>>
>>
>


-- 
"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer at tybalt.caltech.edu)