On 16 Jul 2014, at 11:10, Mahnaz Talebi <mhnz.talebi@gmail.com> wrote:

> Hi, and thanks for your replies.
> I have found that all TX interrupts are handled by a single CPU;
> smp_affinity only influences how received packets are distributed
> across the CPUs.
> I have tried different cluster-type and runmode combinations, but the
> results were not much different.
> I start Suricata with this command:
>
>   suricata -c /etc/suricata/suricata.yaml --af-packet

So what does the af-packet section in your suricata.yaml look like?
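
For comparison, a minimal af-packet section for a two-interface,
workers/cluster-cpu setup would look roughly like the sketch below (the
cluster-id values are only illustrative; each interface needs its own):

    af-packet:
      - interface: p115p3
        threads: 8
        cluster-id: 98
        cluster-type: cluster_cpu
        defrag: yes
        use-mmap: yes
      - interface: p115p4
        threads: 8
        cluster-id: 97
        cluster-type: cluster_cpu
        defrag: yes
        use-mmap: yes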

> I use the igb driver with RSS enabled, and I set the interrupt
> affinity through /proc/irq/irq#/smp_affinity.
>
> Each TX/RX queue pair shares a single interrupt number:
>
>   cat /proc/interrupts | grep p115p3
>   116:      1      0      0      0      0      0      0      0  PCI-MSI-edge  p115p3
>   117: 430880      0      0      0      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-0
>   118:     52 400648      0      0      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-1
>   119:     52      0 395915      0      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-2
>   120:     63      0      0 376400      0      0      0      0  PCI-MSI-edge  p115p3-TxRx-3
>   121:     58      0      0      0 384137      0      0      0  PCI-MSI-edge  p115p3-TxRx-4
>   122:     52      0      0      0      0 393522      0      0  PCI-MSI-edge  p115p3-TxRx-5
>   123:     52      0      0      0      0      0 394861      0  PCI-MSI-edge  p115p3-TxRx-6
>   124:     52      0      0      0      0      0      0 389069  PCI-MSI-edge  p115p3-TxRx-7
>
> I set smp_affinity on both of the peered devices, but on the sending
> device (p115p4; the receiver is p115p3) I see the following (note
> IRQ 131, where nearly all TX interrupts land on a single CPU):
>
>   cat /proc/interrupts | grep p115p4
>   125:    1    0    0    0    0       0    0    0  PCI-MSI-edge  p115p4
>   126: 1165    0    0    0    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-0
>   127:   51 1099    0    0    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-1
>   128:   51    0 1099    0    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-2
>   129:   53    0    0 1099    0       0    0    0  PCI-MSI-edge  p115p4-TxRx-3
>   130:   57    0    0    0 1099       0    0    0  PCI-MSI-edge  p115p4-TxRx-4
>   131:   51    0    0    0    0 1220633    0    0  PCI-MSI-edge  p115p4-TxRx-5  <--
>   132:   58    0    0    0    0       0 1099    0  PCI-MSI-edge  p115p4-TxRx-6
>   133:   51    0    0    0    0       0    0 1099  PCI-MSI-edge  p115p4-TxRx-7
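
For reference, the usual way to spread those IRQs is to pin each queue
to its own core, along the lines of the sketch below (the IRQ numbers
are taken from your p115p3 output above; irqbalance has to be stopped
first or it will rewrite the masks):

    # Pin p115p3 TxRx-0..7 (IRQs 117-124) to CPUs 0..7, one queue per core.
    # smp_affinity expects a hexadecimal CPU bitmask: CPU n -> (1 << n).
    service irqbalance stop
    for i in $(seq 0 7); do
        irq=$((117 + i))
        printf '%x' $((1 << i)) > /proc/irq/$irq/smp_affinity
    done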

>> Hello Mahnaz,
>>
>> have you tried changing the cluster_type to cluster_flow?
>> I'm not sure it will help, but it may be worth checking.
>>
>> best regards
>> vito
>>
>> On 07/15/2014 12:04 PM, Mahnaz Talebi wrote:
>>> Is there anyone who can help me to solve this problem?
>
>> What does your af-packet section in suricata.yaml look like?
>> Do you have affinity set up for both sniffing interfaces?
>> How do you start Suricata (command line)?
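
On the affinity question: besides the NIC IRQs, Suricata's own worker
threads can also be pinned, via the threading section of suricata.yaml.
A rough sketch (the CPU numbers here are only an example):

    threading:
      set-cpu-affinity: yes
      cpu-affinity:
        - management-cpu-set:
            cpu: [ 0 ]
        - worker-cpu-set:
            cpu: [ "1-7" ]
            mode: "exclusive"
            prio:
              default: "high"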

>>> On Tue, Jul 8, 2014 at 5:17 PM, Mahnaz Talebi <mhnz.talebi@gmail.com> wrote:
>>>
>>> Hi all,
>>>
>>> I am trying to evaluate Suricata's behavior when sending traffic to
>>> two interfaces that are peered together, in af-packet mode. I use
>>> tcpreplay to send traffic to these interfaces at a rate of 950 Mbps.
>>> I use RSS and smp_affinity to distribute the flows across the CPUs,
>>> with the workers runmode and cluster-cpu as the cluster-type in
>>> af-packet mode.
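
For reference, generating that kind of load with tcpreplay would look
roughly like the sketch below (the pcap file name is a placeholder):

    # replay a capture onto one peered interface at a fixed 950 Mbps
    tcpreplay --intf1=p115p3 --mbps=950 --loop=0 traffic.pcap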

>>> When I send traffic to just one of the peered interfaces (p115p3),
>>> the drop rate is 0% and the top -H report is:
>>>
>>> Cpu0  :  0.0%us, 20.1%sy, 12.2%ni, 54.6%id,  0.0%wa,  1.3%hi, 11.8%si,  0.0%st
>>> Cpu1  : 11.9%us, 18.0%sy,  0.0%ni, 52.2%id,  0.0%wa,  2.9%hi, 15.1%si,  0.0%st
>>> Cpu2  :  6.2%us, 16.7%sy,  0.0%ni, 20.7%id,  0.0%wa,  3.3%hi, 53.3%si,  0.0%st
>>> Cpu3  : 12.7%us, 18.0%sy,  0.0%ni, 57.6%id,  0.0%wa,  2.5%hi,  9.2%si,  0.0%st
>>> Cpu4  : 13.0%us, 20.6%sy,  0.0%ni, 51.3%id,  0.0%wa,  3.2%hi, 11.9%si,  0.0%st
>>> Cpu5  : 11.8%us, 19.3%sy,  0.0%ni, 51.4%id,  0.0%wa,  2.5%hi, 15.0%si,  0.0%st
>>> Cpu6  : 10.0%us, 15.3%sy,  0.0%ni, 57.7%id,  0.0%wa,  2.1%hi, 14.9%si,  0.0%st
>>> Cpu7  : 15.3%us, 27.8%sy,  0.0%ni, 40.9%id,  0.0%wa,  2.5%hi, 13.5%si,  0.0%st
>>> Mem:  20775960k total,  1003940k used, 19772020k free,    97688k buffers
>>> Swap:  5177340k total,        0k used,  5177340k free,   540524k cached
>>>
>>>   PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
>>>  6783 root   18  -2  337m  61m 3376 R 43.2  0.3  0:09.74 AFPacketp115p38
>>>  6780 root   18  -2  337m  61m 3376 R 35.6  0.3  0:07.61 AFPacketp115p35
>>>  6779 root   18  -2  337m  61m 3376 R 32.2  0.3  0:06.96 AFPacketp115p34
>>>  6781 root   18  -2  337m  61m 3376 R 32.2  0.3  0:06.95 AFPacketp115p36
>>>  6777 root   20   0  337m  61m 3376 R 31.3  0.3  0:06.72 AFPacketp115p32
>>>  6776 root   22   2  337m  61m 3376 R 29.9  0.3  0:08.12 AFPacketp115p31
>>>  6782 root   18  -2  337m  61m 3376 R 26.6  0.3  0:05.67 AFPacketp115p37
>>>  6778 root   20   0  337m  61m 3376 R 24.3  0.3  0:05.21 AFPacketp115p33
>>>  6767 root   20   0  337m  61m 3376 S  0.7  0.3  0:00.07 Suricata-Main
>>>  6784 root   22   2  337m  61m 3376 S  0.7  0.3  0:00.12 FlowManagerThre
>>>
>>> But when I send traffic to both interfaces, the drop rate on each
>>> interface is almost 55%! Each interface has 8 threads.
>>> The top -H report is:
>>>
>>> Cpu0  :  1.0%us, 24.7%sy, 49.8%ni,  6.7%id,  0.0%wa,  2.0%hi, 15.7%si,  0.0%st
>>> Cpu1  : 50.7%us, 24.2%sy,  0.3%ni,  7.4%id,  0.0%wa,  2.0%hi, 15.4%si,  0.0%st
>>> Cpu2  : 43.0%us, 19.5%sy,  0.0%ni,  1.0%id,  0.0%wa,  1.7%hi, 34.9%si,  0.0%st
>>> Cpu3  : 59.4%us, 21.8%sy,  0.0%ni,  8.1%id,  0.0%wa,  1.7%hi,  9.1%si,  0.0%st
>>> Cpu4  : 56.3%us, 23.0%sy,  0.0%ni,  7.7%id,  0.0%wa,  1.7%hi, 11.3%si,  0.0%st
>>> Cpu5  : 53.7%us, 23.8%sy,  0.0%ni,  6.4%id,  0.0%wa,  1.7%hi, 14.4%si,  0.0%st
>>> Cpu6  : 52.3%us, 23.2%sy,  0.0%ni,  8.1%id,  0.0%wa,  2.0%hi, 14.4%si,  0.0%st
>>> Cpu7  : 54.5%us, 23.6%sy,  0.0%ni,  7.1%id,  0.0%wa,  2.0%hi, 12.8%si,  0.0%st
>>> Mem:  20775960k total,  1014884k used, 19761076k free,    97844k buffers
>>> Swap:  5177340k total,        0k used,  5177340k free,   541212k cached
>>>
>>>   PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
>>>  6780 root   18  -2  337m  70m 3376 R 39.5  0.3  1:47.93 AFPacketp115p35
>>>  6771 root   18  -2  337m  70m 3376 R 39.2  0.3  0:09.97 AFPacketp115p44
>>>  6772 root   18  -2  337m  70m 3376 R 38.9  0.3  0:10.02 AFPacketp115p45
>>>  6783 root   18  -2  337m  70m 3376 R 38.9  0.3  2:11.36 AFPacketp115p38
>>>  6779 root   18  -2  337m  70m 3376 R 38.6  0.3  1:40.22 AFPacketp115p34
>>>  6773 root   18  -2  337m  70m 3376 R 38.2  0.3  0:09.65 AFPacketp115p46
>>>  6775 root   18  -2  337m  70m 3376 R 38.2  0.3  0:10.48 AFPacketp115p48
>>>  6781 root   18  -2  337m  70m 3376 R 38.2  0.3  1:39.22 AFPacketp115p36
>>>  6774 root   18  -2  337m  70m 3376 R 37.6  0.3  0:09.20 AFPacketp115p47
>>>  6782 root   18  -2  337m  70m 3376 R 37.2  0.3  1:22.00 AFPacketp115p37
>>>  6768 root   22   2  337m  70m 3376 R 36.2  0.3  0:09.99 AFPacketp115p41
>>>  6776 root   22   2  337m  70m 3376 R 36.2  0.3  1:33.93 AFPacketp115p31
>>>  6769 root   20   0  337m  70m 3376 R 35.9  0.3  0:09.28 AFPacketp115p42
>>>  6777 root   20   0  337m  70m 3376 R 35.9  0.3  1:36.01 AFPacketp115p32
>>>  6770 root   20   0  337m  70m 3376 R 30.9  0.3  0:07.85 AFPacketp115p43
>>>  6778 root   20   0  337m  70m 3376 R 30.6  0.3  1:17.34 AFPacketp115p33
>>>
>>> What is the problem?!
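
A quick way to see where the packets are being lost is to watch the
per-thread kernel drop counters that the af-packet capture method
writes to stats.log, e.g.:

    # kernel_drops rising while kernel_packets grows means the ring
    # buffer overflows before Suricata can read the packets
    tail -f /var/log/suricata/stats.log | grep -E 'capture\.kernel_(packets|drops)'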

_______________________________________________
Suricata IDS Devel mailing list: oisf-devel@openinfosecfoundation.org
Site: http://suricata-ids.org | Participate: http://suricata-ids.org/participate/
List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-devel
Redmine: https://redmine.openinfosecfoundation.org/