[Oisf-users] suricata with PF_RING Zero Copy/Pinned CPUs
Chris Wakelin
cwakelin at emergingthreats.net
Wed Aug 10 23:40:08 UTC 2016
I had this working when I still worked for the University of Reading
(which I left a year ago to join ET/Proofpoint). Alas I don't have
access to a machine running PF_RING at the moment.
I did have a message thread in November 2014 about ZC + hugepages and
Suricata on the PF_RING (ntop-misc) mailing list
(http://lists.ntop.org/mailman/listinfo/ntop-misc - you need to
subscribe to see archives though).
It looks like I had (using pfdnacluster_master rather than zbalance_ipc):
insmod ixgbe.ko RSS=1,1 mtu=1522 adapters_to_enable=xx:xx:xx:xx:xx:xx
num_rx_slots=32768 num_tx_slots=0 numa_cpu_affinity=1,1
ifconfig dna0 up
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /proc/meminfo | grep Huge
mount -t hugetlbfs none /mnt/huge
pfdnacluster_master -i dna0 -c 1 -n 15,1 -r 15 -m 4 -u /mnt/huge -d
I was running ARGUS (flow-capture) on dnacl:1@15
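I haven't tried the same thing with zbalance_ipc myself, but reusing the flags from your command below and the 15,1 queue split above, I'd guess the ZC equivalent is something along these lines (15 queues for Suricata plus 1 for ARGUS, all on cluster 99 - treat it as a sketch, not a tested recipe):

zbalance_ipc -i zc:enp4s0 -c 99 -n 15,1 -m 4 -g 0

Suricata would then read zc:99@0 .. zc:99@14 and ARGUS would sit on zc:99@15, rather than pointing 22 threads at a single queue.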
My Suricata config looked like (I know the cluster settings are ignored):-
pfring:
  - interface: dnacl:1@0
    threads: 1
    cluster-id: 99
    cluster-type: cluster_flow
  - interface: dnacl:1@1
    threads: 1
    cluster-id: 99
    cluster-type: cluster_flow
  ...
  - interface: dnacl:1@14
    threads: 1
    cluster-id: 99
    cluster-type: cluster_flow
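I no longer have my old threading section to hand, but for the pinning side the usual shape in suricata.yaml is roughly this (the core numbers here are just an example - one worker core per queue):

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "1-15" ]
        mode: "exclusive"

i.e. keep core 0 for the management threads and give the 15 workers a core each to match the 15 queues.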
My problem with hugepages turned out to be that running Suricata as a
non-root user meant it didn't have the right permissions to access them.
I said I was going to investigate a fix in Suricata to drop privileges
later, but it seems I never got around to it. Instead I ran Suricata as
root as a workaround (obviously not ideal). I also had CPUs with 16
real cores and hyperthreading disabled (the latter, I now understand,
was probably a bad idea!)
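If you want to avoid running Suricata as root, one thing that might be worth trying (I never got as far as testing it) is mounting the hugetlbfs with ownership options, since hugetlbfs accepts uid/gid/mode mount options, e.g.

mount -t hugetlbfs -o uid=1000,gid=1000,mode=0770 none /mnt/huge

with 1000 replaced by whatever uid/gid your Suricata user runs under.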
Hope this gives some useful pointers,
Best Wishes,
Chris
On 10/08/16 18:07, Jim Hranicky wrote:
> I'm able to run and get good results using multiple threads on a
> PF_RING-enabled interface when not running in ZC mode. I'm a little
> stumped though as to how to configure zbalance_ipc/suricata to use
> multiple threads using ZC.
>
> When I run 1 queue for suri
>
> ./zbalance_ipc -i zc:enp4s0 -m 4 -n 1,1 -c 99 -g 0 -S 1
>
> then specify the interface like so
>
> - interface: zc:99@0
>   threads: 22
>
> and run this command
>
> /opt/suricata/bin/suricata -i zc:99@0 -c /opt/suricata/etc/suricata/suricata.yaml --pfring -vv
>
> I get this:
>
> 10/8/2016 -- 13:00:01 - <Perf> - (RX#01) Using PF_RING v.6.5.0,
> interface zc:99@0, cluster-id 1
>
> 10/8/2016 -- 13:00:01 - <Error> - [ERRCODE: SC_ERR_PF_RING_OPEN(34)] -
> Failed to open zc:99@0: pfring_open error. Check if zc:99@0 exists and pf_ring module is loaded.
>
> 10/8/2016 -- 13:00:01 - <Error> - [ERRCODE: SC_ERR_PF_RING_OPEN(34)] -
> Failed to open zc:99@0: pfring_open error. Check if zc:99@0 exists and pf_ring module is loaded.
>
> 10/8/2016 -- 13:00:01 - <Error> - [ERRCODE: SC_ERR_PF_RING_OPEN(34)] -
> Failed to open zc:99@0: pfring_open error. Check if zc:99@0 exists and pf_ring module is loaded.
>
> Should I run zbalance_ipc with multiple queues? How do I specify the interfaces on
> the command line and in the config file? FWIW I seem to get about 40% more events per
> second when running with multiple threads than when running with 1 ZC queue.
>
> Thanks,
>
> --
> Jim Hranicky
> Data Security Specialist
> UF Information Technology
> 105 NW 16TH ST Room #104 GAINESVILLE FL 32603-1826
> 352-273-1341