[Oisf-users] Fwd: Installing / Running Suricata with Myricom NICs
Erich Lerch
erich.lerch at gmail.com
Wed Feb 21 12:46:28 UTC 2018
Jesse,
you can find out about your NUMA configuration with the command "lstopo".
It might also be called "lstopo-no-graphics" or "hwloc-ls" (as on my RHEL7
box):
...
NUMANode L#0 (P#0 128GB)
  Package L#0 + L3 L#0 (30MB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
...
==> these are the CPU cores for NUMA Node 0...
...
NUMANode L#1 (P#1 128GB)
  Package L#1 + L3 L#1 (30MB)
    L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12
...
==> these are the CPU cores for NUMA Node 1...
...
HostBridge L#7
  PCIBridge
    PCIBridge
      PCIBridge
        PCI 14c1:0008
          Net L#8 "enp38s0"
      PCIBridge
        PCI 14c1:0008
          Net L#9 "enp40s0"
==> this is Myricom
...
We can see here that cores >= 12 are on the same NUMA node as the Myricom.
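As a quick cross-check (nothing Myricom-specific, just the generic Linux sysfs attribute for PCI devices), you can also ask the kernel directly which NUMA node an interface sits on, using one of the interface names from the lstopo output above:

  cat /sys/class/net/enp40s0/device/numa_node

This prints the node number, or -1 if the platform doesn't report one.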
Now you can pin the threads to these cores in suricata.yaml, in my case:
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 12,13,36,37 ]
    - worker-cpu-set:
        cpu: [ 14,15,16,17,18,19,20,21,22,23 ]
        mode: "exclusive"  # run detect threads in these cpus
        threads: 10
        prio:
          high: [ 14,15,16,17,18,19,20,21,22,23 ]
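Once Suricata is running with this config, a simple way to verify the pinning is to list its threads together with the CPU each one last ran on (the psr column); with the affinity above, the worker threads should stay on 14-23:

  ps -p $(pidof suricata) -L -o tid,psr,comm

(Just a generic ps sanity check, not something specific to Suricata or Myricom.)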
I'm not a HW expert by any means, but I can recommend checking out Peter's
and Michal's tuning guide:
https://github.com/pevma/SEPTun
They're using Intel cards, but the basics about NUMA geometry apply as well.
Cheers,
Erich
2018-02-21 13:13 GMT+01:00 Jesse Bowling <jessebowling at gmail.com>:
> Hi Erich,
>
> > On Feb 21, 2018, at 02:44, Erich Lerch <erich.lerch at gmail.com> wrote:
> >
> > - try to pin suri worker threads to the same NUMA node the myricom is
> attached to
>
> Could you provide some detail on how you go about this; both determining
> which is which and commands/kernel options/configurations used?
>
> Thank you for the post!
>
> Cheers,
>
> Jesse