[Oisf-users] af-packet and Linux Kernel version
Jim Hranicky
jfh at ufl.edu
Fri Nov 18 20:01:23 UTC 2016
On 11/18/2016 11:46 AM, Peter Manev wrote:
> On Fri, Nov 18, 2016 at 4:33 PM, Jim Hranicky <jfh at ufl.edu> wrote:
>> af-packet config from suricata.yaml:
> How much traffic are you inspecting, and with how many rules?
> For starters with AFP -
>
> adjust the AFP section like so -
> af-packet:
>   - interface: ens5f0
>     cluster-id: 99
>     threads: ????? (definitive number of threads here :) )
>     cluster-type: cluster_flow
>     defrag: yes
>     #rollover: yes
>     use-mmap: yes
>     mmap-locked: yes
>     ring-size: 100000
>     #buffer-size: 65536
How many threads? I have 2 18-core CPUs for 36 cores (72 with HT).
When suri starts up it creates 72 capture threads by default.
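(For illustration, I assume pinning the count would just be this, with
16 as a placeholder value rather than the actual answer:

  af-packet:
    - interface: ens5f0
      threads: 16
)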
> make sure you adjust this in suricata.yaml:
>
> max-pending-packets: 65534
>
>
> In the script you need to adjust like so:
>
> # disable checksum offload and scatter-gather
> ethtool -K $INT rx off
> ethtool -K $INT sg off
>
> # disable VLAN offload so Suricata sees the VLAN tags
> ethtool -K $INT rxvlan off
> ethtool -K $INT txvlan off
>
> # remove or comment out these rx-flow-hash lines:
> # ethtool -N $INT rx-flow-hash udp4 sdfn
> # ethtool -N $INT rx-flow-hash udp6 sdfn
> # ethtool -N $INT rx-flow-hash tcp4 sdfn
> # ethtool -N $INT rx-flow-hash tcp6 sdfn
>
> # set the rx descriptor ring, disable ntuple filters and rx hashing,
> # and collapse the NIC to a single combined queue
> ethtool -G $INT rx 1024
> ethtool -K $INT ntuple off
> ethtool -K $INT rxhash off
> ethtool -L $INT combined 1
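(As an aside, the lowercase forms of those ethtool options query rather
than set, which is handy for checking that the settings took effect:

  ethtool -k $INT   # show offload settings
  ethtool -g $INT   # show ring parameters
  ethtool -l $INT   # show channel/queue counts
)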
>
> For irq -
Just as an aside, I haven't found any benefit to turning off
offloading; if anything, I get better numbers with it on.
> /usr/local/bin/set_irq_affinity CPU_NUMBER_HERE $INT
> example - /usr/local/bin/set_irq_affinity 1 eth3
> where CPU_NUMBER_HERE is a core number of your choosing on the CPU
> residing on the same NUMA node as the NIC, the node where the
> Suricata worker threads (reading from/inspecting that NIC) are
> deployed as well. That same core number should not be used for
> Suricata workers.
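(To find which NUMA node the NIC sits on, I assume the sysfs entry
works for a PCI NIC, e.g.:

  cat /sys/class/net/ens5f0/device/numa_node
)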
Is there a way to specify where the workers go?
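I'm guessing the threading/cpu-affinity section in suricata.yaml is
the place, something like this sketch (core numbers purely
illustrative):

  threading:
    set-cpu-affinity: yes
    cpu-affinity:
      - management-cpu-set:
          cpu: [ 0 ]
      - worker-cpu-set:
          cpu: [ "1-35" ]
          mode: "exclusive"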
Jim