[Oisf-users] af-packet and Linux Kernel version

Peter Manev petermanev at gmail.com
Fri Nov 18 16:46:52 UTC 2016

On Fri, Nov 18, 2016 at 4:33 PM, Jim Hranicky <jfh at ufl.edu> wrote:
> af-packet config from suricata.yaml:
>   af-packet:
>     - interface: ens5f0
>       cluster-id: 99
>       cluster-type: cluster_flow
>       defrag: yes
>       rollover: yes
>       use-mmap: yes
>       mmap-locked: yes
>       ring-size: 4096
>       buffer-size: 65536
> Attached is the startup script I'm using, a slightly modified version
> of one Coop sent me.

How much traffic are you inspecting, and with how many rules?

For starters, with AF_PACKET, adjust the af-packet section like so:
    - interface: ens5f0
      cluster-id: 99
      threads: <set an explicit number of threads here>
      cluster-type: cluster_flow
      defrag: yes
      #rollover: yes
      use-mmap: yes
      mmap-locked: yes
      ring-size: 100000
      #buffer-size: 65536
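To pick an explicit threads value, it helps to know which NUMA node the NIC sits on and how many cores are available. A minimal sketch (the interface name ens5f0 is taken from this thread; adjust INT for your box):

```shell
#!/bin/sh
# Sketch: gather the numbers needed to choose an explicit "threads:" value.
# INT is an assumption taken from this thread; set it to your capture NIC.
INT=${INT:-ens5f0}

# NUMA node the NIC is attached to (-1 if the kernel exposes no NUMA info)
NODE=$(cat "/sys/class/net/$INT/device/numa_node" 2>/dev/null || echo -1)

# Total online cores, an upper bound for the worker-thread count
CORES=$(nproc)

echo "NIC $INT: NUMA node $NODE, $CORES cores online"
# A common starting point is the number of cores on that NUMA node,
# minus the one core you reserve for NIC IRQ handling (see the IRQ
# section further down).
```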

Also make sure you adjust the following in suricata.yaml:

max-pending-packets: 65534
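For context (my reading of the Suricata docs, not stated in this thread): max-pending-packets caps how many packets Suricata will process concurrently, and its default of 1024 is small next to a 100000-slot af-packet ring. The setting lives at the top level of suricata.yaml:

```yaml
# suricata.yaml (top level) - raise the in-flight packet cap so the
# larger af-packet ring-size above is not throttled by this limit
max-pending-packets: 65534
```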

In the startup script, adjust it like so:

ethtool -K $INT rx off
ethtool -K $INT sg off

ethtool -K $INT rxvlan off
ethtool -K $INT txvlan off

Remove or comment out these lines:

ethtool -N $INT rx-flow-hash udp4 sdfn
ethtool -N $INT rx-flow-hash udp6 sdfn
ethtool -N $INT rx-flow-hash tcp4 sdfn
ethtool -N $INT rx-flow-hash tcp6 sdfn

Keep these:

ethtool -G $INT rx 1024
ethtool -K $INT ntuple off
ethtool -K $INT rxhash off
ethtool -L $INT combined 1
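The kept ethtool steps above can be sketched as one script. This is a dry run that only prints the commands (swap the echo for direct execution when you're ready); which offload flags a NIC actually supports depends on its driver, so ethtool may report some as unchangeable:

```shell
#!/bin/sh
# Dry-run sketch of the NIC tuning steps from this thread.
# INT defaults to the interface named earlier; override it as needed.
INT=${INT:-ens5f0}

# Print instead of execute; replace 'echo' with "$@" to apply for real.
run() { echo "would run: $*"; }

run ethtool -K "$INT" rx off      # disable receive checksum offload
run ethtool -K "$INT" sg off      # disable scatter-gather
run ethtool -K "$INT" rxvlan off  # keep VLAN tags visible to Suricata
run ethtool -K "$INT" txvlan off
run ethtool -G "$INT" rx 1024     # shrink the RX ring
run ethtool -K "$INT" ntuple off  # disable ntuple filtering
run ethtool -K "$INT" rxhash off  # disable hardware RSS hashing
run ethtool -L "$INT" combined 1  # use a single combined queue
```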

For IRQ affinity:

/usr/local/bin/set_irq_affinity CPU_NUMBER_HERE $INT
Example: /usr/local/bin/set_irq_affinity 1 eth3

where CPU_NUMBER_HERE is a core number of your choosing on the same NUMA
node as the NIC, the node where the Suricata worker threads (reading
from/inspecting that NIC) are deployed as well. That same core should not
be used for the Suricata worker threads themselves.

Please let us know how it goes.

> Thanks for your help.
> Jim
> On 11/18/2016 10:16 AM, Peter Manev wrote:
>> Feel free to share (privately if you would like) your config/set
>> up/stats so I (we) can have a look at the AFP set up you have.

Peter Manev
