[Oisf-users] Tuning Suricata Inline IPS performance
Hariharan Thantry
thantry at gmail.com
Thu Dec 15 04:50:10 UTC 2011
Hi Delta,
> When I run Suricata with PF_RING, with the pfring BPF filter "tcp", I can
> achieve 700 Mbps (according to the Load Runner result).
>
Does PF_RING allow you to run Suricata in IPS mode? I had assumed that the
only capture method available for IPS is NFQUEUE.
BTW, it might help to increase your Ethernet MTU. I set mine to 16000 on
the 10G network. That means fewer packets per second, and hence
better rates.
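For example, something like this on both bridge ports (the interface
names are placeholders, and your NICs and switch all have to support the
larger MTU):

    ip link set dev eth0 mtu 16000
    ip link set dev eth1 mtu 16000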
Also, try turning off autonegotiation/pause on your Ethernet interfaces
(using ethtool). You don't want your Ethernet bridge interface
back-pressuring the send-side interface.
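Something along these lines, again with placeholder interface names:

    # disable pause-frame autonegotiation and rx/tx pause frames
    ethtool -A eth0 autoneg off rx off tx off
    ethtool -A eth1 autoneg off rx off tx off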
In spite of all this, I get ~150 Mbps on a 10G link using NFQUEUE
(single-threaded; I'm trying a multi-threaded implementation). This is
with the bridge just issuing an NF_ACCEPT verdict on every packet.
Fundamentally, I think this is a netlink/nfqueue socket library issue.
I don't understand the tuning parameters there too well... I will post if
things get any better.
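In case anyone wants to experiment along with me, the two knobs I've been
poking at are below; the buffer value is a guess on my part, not a tested
recommendation:

    # raise the ceiling for socket receive buffers, so the nfqueue
    # netlink socket can be given a larger one
    sysctl -w net.core.rmem_max=16777216
    # per-queue stats; a growing "queue dropped" column means the kernel
    # is dropping packets before userspace ever sees them
    cat /proc/net/netfilter/nfnetlink_queue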
Thanks,
Hari
On Wed, Dec 14, 2011 at 7:48 PM, Delta Yeh <delta.yeh at gmail.com> wrote:
> Hi Eric,
>
> My box is 32-bit Debian Squeeze with 4 GB of memory and a dual-core
> 2.7 GHz CPU, working as a bridge between the test client and the web
> server.
>
> In my tests I didn't load any rules, and used Load Runner to test
> HTTP traffic only.
>
> For Suricata with NFQ, I got a similar result: about 1.5 Mbps.
>
> I referred to
> http://www.inliniac.net/blog/2008/01/23/improving-snort_inlines-nfq-performance.html
> The commands are:
>
> sysctl -w net.core.rmem_default='8388608'
> sysctl -w net.core.wmem_default='8388608'
>
> sysctl -w net.ipv4.tcp_wmem='1048576 4194304 16777216'
> sysctl -w net.ipv4.tcp_rmem='1048576 4194304 16777216'
>
> iptables -t mangle -F FORWARD
> iptables -t mangle -I FORWARD -p tcp -s webserver -j NFQUEUE --queue-balance 0:1
>
> iptables -t mangle -I FORWARD -p tcp -d webserver -j NFQUEUE --queue-balance 0:1
>
> In suricata.yaml, max-pending-packets is set to 5000.
>
> I run Suricata with "-c /etc/suricata/suricata.yaml -q 0 -q 1 --runmode workers"
>
> I didn't test CPU affinity.
>
> When I run Suricata with PF_RING, with the pfring BPF filter "tcp", I can
> achieve 700 Mbps (according to the Load Runner result).
>
> During the test, Suricata's CPU usage was 70%; Load Runner sent
> 860,000 requests, but Suricata only recorded 310,000.
>
>
> 2011/11/23 Eric Leblond <eric at regit.org>:
>> Hello,
>>
>> Here's a list of things you can do to improve performance:
>>  * Use multiple queues: you can then use the multithreading
>>    capability of your box and avoid some per-CPU locking.
>>  * Increase the netfilter queue length (e.g. via libnetfilter_queue's
>>    nfq_set_queue_maxlen()): you will then absorb traffic bursts better.
>>  * Use NFQ in workers mode: this mode, where one thread does all the
>>    work from capture to verdict, should be the most efficient for
>>    NFQ (currently not in the official tree; see the patch provided).
>>
>> Let's say you've got a multicore system with 4 CPUs.
>>
>> On Netfilter side:
>> iptables -A FORWARD -j NFQUEUE --queue-balance 0:3
>>
>> This will balance packets across queues 0 to 3 with per-connection
>> load balancing. Please note that in this case your injection tool must
>> be multi-connection aware.
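>> For instance, with iperf you would need parallel streams for the
>> per-connection balancing to actually spread the load, something like
>> (the server address is a placeholder):
>>
>>   iperf -c 192.0.2.1 -P 4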
>>
>> Increase max-pending-packets in suricata.yaml to a decent value like
>> 1000 (this will use "some" memory).
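>> In suricata.yaml that is the line:
>>
>>   # roughly one packet buffer per pending packet, hence the memory cost
>>   max-pending-packets: 1000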
>>
>> Start suricata with:
>> suricata -c suricata.yaml -q 0 -q 1 -q 2 -q 3
>>
>> If you are able to compile Suricata on your system, you can use the
>> current git tree, apply the attached patch, and run:
>> suricata -c suricata.yaml -q 0 -q 1 -q 2 -q 3 --runmode workers
>>
>> The next thing that can be done is to work on CPU affinity, to
>> synchronize the CPUs used for capture and for processing in Suricata.
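>> As a rough sketch only (the option names under "threading" differ
>> between Suricata versions, so check the suricata.yaml shipped with
>> yours), pinning workers might look like:
>>
>>   threading:
>>     set-cpu-affinity: yes
>>     cpu-affinity:
>>       - management-cpu-set:
>>           cpu: [ 0 ]
>>       - worker-cpu-set:
>>           cpu: [ 1, 2, 3 ]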
>>
>> Please let us know how things are improving with that.
>>
>> BR,
>>
>> On Mon, 2011-11-21 at 17:48 -0800, Hariharan Thantry wrote:
>>> Hi Victor,
>>>
>>>
>>> I think this is not necessarily because of Suricata itself, but
>>> because of the use of iptables/NFQUEUE in a purely bridged
>>> environment. (The Suricata IPS does not have an IP address for the
>>> bridge.) I used the very simple NFQUEUE user-space handler
>>> http://www.netfilter.org/projects/libnetfilter_queue/doxygen/nfqnl__test_8c_source.html,
>>> stopped Suricata, and kept the following iptables entry:
>>>
>>>
>>> $ sudo iptables -A FORWARD -j NFQUEUE --queue-num 0
>>>
>>>
>>> and used the above program (which just puts the packet back out) on
>>> my bridge machine, and observed the same throughput (~400 Kbps)
>>> using iperf, with only a single connection active.
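>>> (For anyone wanting to reproduce this, I built the sample roughly as
>>> follows; the Debian package name is from memory, so double-check it:
>>>
>>>   apt-get install libnetfilter-queue-dev
>>>   gcc -o nfqnl_test nfqnl_test.c -lnetfilter_queue
>>>   sudo ./nfqnl_test    # the sample binds to queue 0
>>>
>>> The sample's callback issues an NF_ACCEPT verdict for every packet.)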
>>>
>>>
>>> Interestingly, when I used ebtables and its ulog handler
>>> http://ebtables.sourceforge.net/examples/basic.html#ex_ulog
>>> with the ebtables FORWARD chain, I observed near line-rate speeds
>>> (> 9 Gbps):
>>>
>>>
>>> $ sudo ebtables -A FORWARD --ulog-nlgroup 1
>>>
>>>
>>> The major difference that I can see between the two handlers is that
>>> in the case of NFQUEUE the whole packet payload is actually copied
>>> into user space, while for the ulog test it isn't. I tried with
>>> NFQNL_COPY_META as well, and the speed for that was ~2 Mbps.
>>>
>>>
>>> I know this isn't an iptables/ebtables forum, but can anyone throw
>>> some light on this? I read the document at
>>> http://ebtables.sourceforge.net/br_fw_ia/br_fw_ia.html, and the
>>> figure at http://ebtables.sourceforge.net/br_fw_ia/PacketFlow.png
>>> seems to suggest that bridged packets do indeed go through the
>>> iptables filter table FORWARD chain, so clearly there is something I
>>> don't have a handle on. My CPU utilization is pretty low (~8%), so
>>> that clearly isn't the issue here.
>>>
>>>
>>> Thanks,
>>> Hari
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Nov 21, 2011 at 10:37 AM, Victor Julien <victor at inliniac.net> wrote:
>>> On 11/21/2011 09:00 AM, Hariharan Thantry wrote:
>>> > When I turn on Suricata (latest 1.1 release version) with the
>>> > defaults, the speeds range between 350 kbps and 1 Mbps (using the
>>> > Emerging Threats ruleset).
>>>
>>>
>>> Those numbers are way too low. I run an 8k ruleset in NFQ mode on an
>>> Atom N270 and it easily keeps up with 12 Mbit (which is my internet
>>> connection). So on that hardware you should see much better speeds.
>>>
>>> Do you see one of the threads hit 100% all the time?
>>>
>>> How many rules are you using? And are you using the
>>> Suricata-specific version of the ET ruleset?
>>>
>>> --
>>> ---------------------------------------------
>>> Victor Julien
>>> http://www.inliniac.net/
>>> PGP: http://www.inliniac.net/victorjulien.asc
>>> ---------------------------------------------
>>>
>>
>> --
>> Eric Leblond
>> Blog: http://home.regit.org/
>>