[Oisf-users] Inline NFQ

Xavier Romero XRomero at nexica.com
Thu Oct 8 09:19:12 UTC 2015


Hello,

Another user explained to me that there is a per-connection throughput limitation. I can confirm that each test achieves the same result, whether there is 1 test running or 4.
So I guess each connection is limited to… I don’t know what. Since I can multiply total throughput by running several tests in parallel, it’s not a system limitation (RAM, CPU, disk, etc.).

Anyway, I’ve checked those counters; they look good.

[root at suricata]# cat /proc/net/netfilter/nfnetlink_queue
    0   5405     0 2 65531     0     0    47862  1
    1  -4139     0 2 65531     0     0   332937  1
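
For reference, a quick one-liner to flag any queue with nonzero drop counters (columns 6 and 7 of that file) would be something like:

# print only queues that have dropped packets (kernel-side or towards userspace)
awk '$6 > 0 || $7 > 0 { print "queue " $1 ": queue dropped=" $6 ", user dropped=" $7 }' /proc/net/netfilter/nfnetlink_queue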

Best regards,
Xavier Romero

From: oisf-users-bounces at lists.openinfosecfoundation.org [mailto:oisf-users-bounces at lists.openinfosecfoundation.org] On behalf of Peter Fyon
Sent: Thursday, 8 October 2015 4:44
To: oisf-users at lists.openinfosecfoundation.org
Subject: Re: [Oisf-users] Inline NFQ


Have you checked your nfnetlink_queue for dropped packets?

From https://home.regit.org/netfilter-en/using-nfqueue-and-libnetfilter_queue/ :

nfnetlink_queue entry in /proc

nfnetlink_queue has a dedicated entry in /proc: /proc/net/netfilter/nfnetlink_queue

cat /proc/net/netfilter/nfnetlink_queue

   40  23948     0 2 65531     0     0      106  1
The fields are as follows:
·         queue number
·         peer portid: most likely the process ID of the software listening to the queue
·         queue total: current number of packets waiting in the queue
·         copy mode: in modes 0 and 1 the messages only carry metadata; in mode 2 they also carry part of the packet, up to "copy range" bytes
·         copy range: length of packet data to put in the message
·         queue dropped: number of packets dropped because the queue was full
·         user dropped: number of packets dropped because the netlink message could not be sent to userspace. If this counter is not zero, try increasing the netlink buffer size (see the sysctl example after this list). On the application side, you will see gaps in packet IDs if netlink messages are lost.
·         id sequence: packet ID of the last packet
·         1 (a constant; the kernel always prints 1 in this last column)
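
If "user dropped" is climbing, the usual knob to try is the socket receive buffer limits, which netlink sockets are also subject to. Something like the following (the values are only an example) raises the defaults:

# allow larger socket receive buffers (example values)
sysctl -w net.core.rmem_default=8388608
sysctl -w net.core.rmem_max=16777216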
I've never tried to run Suricata in IPS mode using netfilter queues, but I did run Snort like that for a while. I recall that the maximum queue length (on an Ubuntu machine, at least) was 5000 packets. You could raise it with a sysctl somehow, but I don't remember the setting offhand.
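
For what it's worth, the per-queue length is normally set by the listening application rather than by a sysctl; libnetfilter_queue exposes nfq_set_queue_maxlen() for this, and the kernel default is fairly small (1024, if I remember correctly). To see whether a queue is actually filling up under load, you can watch the "queue total" column live:

watch -n1 cat /proc/net/netfilter/nfnetlink_queue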

Peter

On Oct 7, 2015 9:44 AM, "Xavier Romero" <XRomero at nexica.com> wrote:
Hello,

I’ve been successfully running Suricata (detection mode) for a long time on a dedicated physical machine, processing about 2 Gbps with no problems.
Now I need to set up another Suricata box in inline mode as a virtual machine (just for 50 Mbps). It’s a small VM (CentOS 7, 2 CPUs & 2 GB RAM), but that should be enough. I set up iptables this way:

iptables -I FORWARD -j NFQUEUE --queue-bypass --queue-balance 0:1
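
Since --queue-balance 0:1 spreads flows across both queues, Suricata needs to listen on both of them. A minimal way to start it (the config path is just from my setup) would be:

suricata -c /etc/suricata/suricata.yaml -q 0 -q 1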

My problem is that when I start Suricata in inline mode, the network throughput drops dramatically… I run an internet speed test:

[15:24:30][admin at test ~]$ ./speedtest-cli
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Selecting best server based on latency...
Hosted by masmovil (Madrid) [0.00 km]: 13.574 ms
Testing download speed........................................
Download: 602.14 Mbit/s
Testing upload speed..................................................
Upload: 136.50 Mbit/s

[15:25:02][root at suricata ~]$ systemctl start suricata

[15:25:24][admin at test ~]$ ./speedtest-cli
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Selecting best server based on latency...
Hosted by masmovil (Madrid) [0.00 km]: 10.948 ms
Testing download speed........................................
Download: 14.18 Mbit/s
Testing upload speed..................................................
Upload: 3.08 Mbit/s

I’ve tried with 1 and 2 queues (-q 0 -q 1), and in both autofp and workers runmodes; no matter what, the results are always the same. The Suricata threads do not consume much CPU, so it does not look like I need more cores.
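
For what it's worth, per-thread CPU usage can be checked with something like this (assuming a single Suricata process):

top -H -p $(pidof suricata)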

Neither dmesg, journalctl, /var/log/messages nor the Suricata logs are complaining about anything.

I’ve no idea where to look or what to try. Any suggestion will be welcome.

Best regards,
Xavier Romero

_______________________________________________
Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
Suricata User Conference November 4 & 5 in Barcelona: http://oisfevents.net