[Oisf-devel] suricata & two-way traffic
Mahnaz Talebi
mhnz.talebi at gmail.com
Wed Jul 16 08:45:57 UTC 2014
It seems that the problem is caused by sending all packets using one tx
queue.
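If transmits really are all leaving through one queue, one kernel knob worth checking is XPS (Transmit Packet Steering), which maps CPUs to TX queues through sysfs. Below is a dry-run sketch assuming the 8-queue sending NIC p115p4 from this thread; it only prints the commands (queue N mapped one-hot to CPU N) so they can be reviewed, then piped to "sudo sh" to apply.

```shell
# Dry-run sketch (assumption: 8 TX queues on the sending NIC p115p4).
# /sys/class/net/<if>/queues/tx-N/xps_cpus takes a hex CPU bitmask and
# controls which CPUs transmit through which TX queue.
IFACE=p115p4
for q in 0 1 2 3 4 5 6 7; do
    mask=$(printf '%x' $((1 << q)))    # one-hot CPU bitmask in hex: 1, 2, 4, ... 80
    echo "echo $mask > /sys/class/net/$IFACE/queues/tx-$q/xps_cpus"
done
```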
On Wed, Jul 16, 2014 at 12:40 PM, Mahnaz Talebi <mhnz.talebi at gmail.com>
wrote:
> Hi & Thanks for your replies.
> I found that all tx interrupts are handled by one cpu, and that
> smp_affinity only affects how received packets are distributed across cpus.
> I tried different cluster-type and runmode combinations, but the results
> were not much different.
> I use this command to run suricata:
> suricata -c /etc/suricata/suricata.yaml --af-packet
>
> I use the igb driver with RSS enabled, and I use
> /proc/irq/<irq#>/smp_affinity to set interrupt affinity.
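Pinning each TxRx interrupt to its own CPU via that smp_affinity interface can be scripted. Here is a dry-run sketch using the IRQ numbers 117-124 reported for p115p3 in this mail; it only prints the echo commands (one-hot hex bitmask per CPU), which can then be piped to "sudo sh" to actually apply.

```shell
# Dry-run sketch: pin TxRx IRQs 117-124 (p115p3, per the listing in
# this mail) each to its own CPU by writing a one-hot hex bitmask to
# /proc/irq/<n>/smp_affinity.  Prints the commands instead of applying.
cpu=0
for irq in 117 118 119 120 121 122 123 124; do
    mask=$(printf '%x' $((1 << cpu)))   # CPU0 -> 1, CPU1 -> 2, ... CPU7 -> 80
    echo "echo $mask > /proc/irq/$irq/smp_affinity"
    cpu=$((cpu + 1))
done
```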
>
> Each tx/rx queue pair shares the same interrupt number:
>
> cat /proc/interrupts | grep p115p3
> 116:      1      0      0      0      0      0      0      0  PCI-MSI-edge  p115p3
> 117: 430880      0      0      0      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-0
> 118:     52 400648      0      0      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-1
> 119:     52      0 395915      0      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-2
> 120:     63      0      0 376400      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-3
> 121:     58      0      0      0 384137      0      0      0  PCI-MSI-edge  p115p3-TxRx-4
> 122:     52      0      0      0      0 393522      0      0  PCI-MSI-edge  p115p3-TxRx-5
> 123:     52      0      0      0      0      0 394861      0  PCI-MSI-edge  p115p3-TxRx-6
> 124:     52      0      0      0      0      0      0 389069  PCI-MSI-edge  p115p3-TxRx-7
>
> I set smp_affinity for both peered devices. For the sending device
> (p115p4) I have (the receiving device is p115p3):
> cat /proc/interrupts | grep p115p4
> 125:    1    0    0    0    0       0    0    0  PCI-MSI-edge  p115p4
> 126: 1165    0    0    0    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-0
> 127:   51 1099    0    0    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-1
> 128:   51    0 1099    0    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-2
> 129:   53    0    0 1099    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-3
> 130:   57    0    0    0 1099       0    0    0  PCI-MSI-edge  p115p4-TxRx-4
> 131:   51    0    0    0    0 1220633    0    0  PCI-MSI-edge  p115p4-TxRx-5
> 132:   58    0    0    0    0       0 1099    0  PCI-MSI-edge  p115p4-TxRx-6
> 133:   51    0    0    0    0       0    0 1099  PCI-MSI-edge  p115p4-TxRx-7
>
>
> > Hello Mahnaz,
> >
> > have you tried changing the cluster_type to cluster_flow?
> > I'm not sure this will help, but maybe it is worth checking.
> >
> > best regards
> > vito
>
> >
> > On 07/15/2014 12:04 PM, Mahnaz Talebi wrote:
> >> Is there anyone who can help me to solve this problem?
>
> >What does your af-packet section in suricata.yaml look like?
> >Do you have affinity set up for both sniffing interfaces?
> >How do you start Suricata (command line)?
>
>
> >>
> >>
> >> On Tue, Jul 8, 2014 at 5:17 PM, Mahnaz Talebi <mhnz.talebi at gmail.com> wrote:
> >>
> >> Hi all,
> >>
> >>
> >> I am trying to evaluate suricata's behavior when sending traffic
> >> to two interfaces that are peered together, in af-packet mode. I use
> >> tcpreplay to send traffic to these interfaces at a rate of 950 Mbps.
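A fixed-rate replay like the one described can be expressed as a single tcpreplay invocation. This sketch prints the command so it can be reviewed first; the pcap file name is illustrative, and --loop=0 makes tcpreplay loop the capture indefinitely.

```shell
# Hypothetical invocation (sample.pcap is illustrative): replay a
# capture onto one peered interface at a fixed 950 Mbit/s, looping
# forever.  Printed for review; run the command directly on the box.
CMD="tcpreplay --intf1=p115p3 --mbps=950 --loop=0 sample.pcap"
echo "$CMD"
```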
> >> I use RSS & smp_affinity to distribute flows between cpus, and I use
> >> the workers runmode with cluster-cpu as the cluster-type in af-packet mode.
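For reference, an af-packet section matching that description might look roughly like the following. This is a sketch only, not the poster's actual suricata.yaml; the cluster-id values are arbitrary, and cluster_cpu / cluster_flow are the cluster-type spellings Suricata accepts in this section.

```yaml
# Illustrative sketch -- not the poster's actual configuration.
af-packet:
  - interface: p115p3
    threads: 8
    cluster-id: 98
    cluster-type: cluster_cpu   # cluster_flow is the alternative discussed in this thread
    defrag: yes
  - interface: p115p4
    threads: 8
    cluster-id: 99
    cluster-type: cluster_cpu
    defrag: yes
```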
> >> When I send traffic to one of the peered interfaces (p115p3), the drop
> >> rate is 0%, and the top -H report is:
> >>
> >> Cpu0 :  0.0%us, 20.1%sy, 12.2%ni, 54.6%id, 0.0%wa, 1.3%hi, 11.8%si, 0.0%st
> >> Cpu1 : 11.9%us, 18.0%sy,  0.0%ni, 52.2%id, 0.0%wa, 2.9%hi, 15.1%si, 0.0%st
> >> Cpu2 :  6.2%us, 16.7%sy,  0.0%ni, 20.7%id, 0.0%wa, 3.3%hi, 53.3%si, 0.0%st
> >> Cpu3 : 12.7%us, 18.0%sy,  0.0%ni, 57.6%id, 0.0%wa, 2.5%hi,  9.2%si, 0.0%st
> >> Cpu4 : 13.0%us, 20.6%sy,  0.0%ni, 51.3%id, 0.0%wa, 3.2%hi, 11.9%si, 0.0%st
> >> Cpu5 : 11.8%us, 19.3%sy,  0.0%ni, 51.4%id, 0.0%wa, 2.5%hi, 15.0%si, 0.0%st
> >> Cpu6 : 10.0%us, 15.3%sy,  0.0%ni, 57.7%id, 0.0%wa, 2.1%hi, 14.9%si, 0.0%st
> >> Cpu7 : 15.3%us, 27.8%sy,  0.0%ni, 40.9%id, 0.0%wa, 2.5%hi, 13.5%si, 0.0%st
> >> Mem:  20775960k total,  1003940k used, 19772020k free,  97688k buffers
> >> Swap:  5177340k total,        0k used,  5177340k free, 540524k cached
> >>
> >>  PID USER  PR  NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
> >> 6783 root  18  -2 337m 61m 3376 R 43.2  0.3 0:09.74 AFPacketp115p38
> >> 6780 root  18  -2 337m 61m 3376 R 35.6  0.3 0:07.61 AFPacketp115p35
> >> 6779 root  18  -2 337m 61m 3376 R 32.2  0.3 0:06.96 AFPacketp115p34
> >> 6781 root  18  -2 337m 61m 3376 R 32.2  0.3 0:06.95 AFPacketp115p36
> >> 6777 root  20   0 337m 61m 3376 R 31.3  0.3 0:06.72 AFPacketp115p32
> >> 6776 root  22   2 337m 61m 3376 R 29.9  0.3 0:08.12 AFPacketp115p31
> >> 6782 root  18  -2 337m 61m 3376 R 26.6  0.3 0:05.67 AFPacketp115p37
> >> 6778 root  20   0 337m 61m 3376 R 24.3  0.3 0:05.21 AFPacketp115p33
> >> 6767 root  20   0 337m 61m 3376 S  0.7  0.3 0:00.07 Suricata-Main
> >> 6784 root  22   2 337m 61m 3376 S  0.7  0.3 0:00.12 FlowManagerThre
> >>
> >> But when I send traffic to both interfaces, the drop rate for each
> >> interface is almost 55%! Each interface has 8 threads.
> >> The top -H report is:
> >>
> >> Cpu0 :  1.0%us, 24.7%sy, 49.8%ni, 6.7%id, 0.0%wa, 2.0%hi, 15.7%si, 0.0%st
> >> Cpu1 : 50.7%us, 24.2%sy,  0.3%ni, 7.4%id, 0.0%wa, 2.0%hi, 15.4%si, 0.0%st
> >> Cpu2 : 43.0%us, 19.5%sy,  0.0%ni, 1.0%id, 0.0%wa, 1.7%hi, 34.9%si, 0.0%st
> >> Cpu3 : 59.4%us, 21.8%sy,  0.0%ni, 8.1%id, 0.0%wa, 1.7%hi,  9.1%si, 0.0%st
> >> Cpu4 : 56.3%us, 23.0%sy,  0.0%ni, 7.7%id, 0.0%wa, 1.7%hi, 11.3%si, 0.0%st
> >> Cpu5 : 53.7%us, 23.8%sy,  0.0%ni, 6.4%id, 0.0%wa, 1.7%hi, 14.4%si, 0.0%st
> >> Cpu6 : 52.3%us, 23.2%sy,  0.0%ni, 8.1%id, 0.0%wa, 2.0%hi, 14.4%si, 0.0%st
> >> Cpu7 : 54.5%us, 23.6%sy,  0.0%ni, 7.1%id, 0.0%wa, 2.0%hi, 12.8%si, 0.0%st
> >> Mem:  20775960k total,  1014884k used, 19761076k free,  97844k buffers
> >> Swap:  5177340k total,        0k used,  5177340k free, 541212k cached
> >>
> >>  PID USER  PR  NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
> >> 6780 root  18  -2 337m 70m 3376 R 39.5  0.3 1:47.93 AFPacketp115p35
> >> 6771 root  18  -2 337m 70m 3376 R 39.2  0.3 0:09.97 AFPacketp115p44
> >> 6772 root  18  -2 337m 70m 3376 R 38.9  0.3 0:10.02 AFPacketp115p45
> >> 6783 root  18  -2 337m 70m 3376 R 38.9  0.3 2:11.36 AFPacketp115p38
> >> 6779 root  18  -2 337m 70m 3376 R 38.6  0.3 1:40.22 AFPacketp115p34
> >> 6773 root  18  -2 337m 70m 3376 R 38.2  0.3 0:09.65 AFPacketp115p46
> >> 6775 root  18  -2 337m 70m 3376 R 38.2  0.3 0:10.48 AFPacketp115p48
> >> 6781 root  18  -2 337m 70m 3376 R 38.2  0.3 1:39.22 AFPacketp115p36
> >> 6774 root  18  -2 337m 70m 3376 R 37.6  0.3 0:09.20 AFPacketp115p47
> >> 6782 root  18  -2 337m 70m 3376 R 37.2  0.3 1:22.00 AFPacketp115p37
> >> 6768 root  22   2 337m 70m 3376 R 36.2  0.3 0:09.99 AFPacketp115p41
> >> 6776 root  22   2 337m 70m 3376 R 36.2  0.3 1:33.93 AFPacketp115p31
> >> 6769 root  20   0 337m 70m 3376 R 35.9  0.3 0:09.28 AFPacketp115p42
> >> 6777 root  20   0 337m 70m 3376 R 35.9  0.3 1:36.01 AFPacketp115p32
> >> 6770 root  20   0 337m 70m 3376 R 30.9  0.3 0:07.85 AFPacketp115p43
> >> 6778 root  20   0 337m 70m 3376 R 30.6  0.3 1:17.34 AFPacketp115p33
> >>
> >> What is the problem?!
> >>
> >>
>