<div dir="ltr"><div>Off course. Attached is one record of stats.log.<br>It looks like there is only one thread per NIC doing (almost) all the job:<br><br>top - 15:36:56 up 16 min,  2 users,  load average: 2.22, 2.71, 2.63<br>
Tasks: 277 total,   3 running, 274 sleeping,   0 stopped,   0 zombie<br>Cpu(s):  3.4%us,  0.0%sy,  1.3%ni, 93.7%id,  0.0%wa,  0.0%hi,  1.6%si,  0.0%st<br>Mem:  198002932k total, 59683604k used, 138319328k free,    30632k buffers<br>
Swap: 15624188k total,        0k used, 15624188k free,   296972k cached<br><br>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                            <br>
<b> 2731 root      18  -2 55.8g  53g  51g R 99.9 28.6   8:05.85 AFPacketeth71                                                                      <br> 2715 root      22   2 55.8g  53g  51g R 84.6 28.6   6:49.80 AFPacketeth51   </b>                                                                   <br>
 2747 root      20   0 55.8g  53g  51g S 15.9 28.6   1:25.74 FlowManagerThre                                                                    <br> 2558 root      20   0  102m 6728 1232 S  0.5  0.0   0:01.76 barnyard2                                                                          <br>
 2740 root      18  -2 55.8g  53g  51g S  0.5 28.6   0:00.90 AFPacketeth710                                                                     <br>    1 root      20   0 24460 2340 1352 S  0.0  0.0   0:03.91 init                                                                               <br>
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd                                                                           <br>    3 root      20   0     0    0    0 S  0.0  0.0   0:03.22 ksoftirqd/0                                                                        <br>
    6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0                                                                        <br>    7 root      RT   0     0    0    0 S  0.0  0.0   0:00.07 watchdog/0           <br>
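
One quick way to check whether the NIC itself is spreading flows across
the 16 RSS queues (assuming the ixgbe driver exposes per-queue counters,
as recent versions do):

  # per-queue RX packet counters for eth7
  sudo ethtool -S eth7 | grep "rx_queue_.*_packets"

  # per-queue interrupt counts
  grep eth7 /proc/interrupts

If one rx_queue counter dwarfs the rest, the imbalance is already there
at the RSS hash, before Suricata ever sees the packets.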

Thanks a lot!

2013/6/3 Victor Julien <lists@inliniac.net>:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 06/03/2013 07:52 PM, Fernando Sclavo wrote:<br>
> We set "cluster-type: cluster_cpu" as suggested and CPU load lowered<br>
> from 30% (average) to 5%!! But, the unbalance is still there. Also the<br>
> UDP traffic is balanced now (sudo ethtool -N eth7 rx-flow-hash udp4 sdfn).<br>
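>
> For reference, that setting lives in the af-packet section of
> suricata.yaml; a minimal sketch (the cluster-id values and thread
> counts here are illustrative, not our exact config):
>
>   af-packet:
>     - interface: eth5
>       threads: 16
>       cluster-id: 98
>       cluster-type: cluster_cpu
>       defrag: yes
>     - interface: eth7
>       threads: 16
>       cluster-id: 99
>       cluster-type: cluster_cpu
>       defrag: yes
>
> Roughly speaking, with cluster_cpu the kernel hands each packet to the
> fanout socket of the CPU that received it, so the balance mirrors the
> NIC's RSS/IRQ distribution.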
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  2299 root      22   2 55.8g  53g  51g R 80.1 28.6   5:12.04 AFPacketeth51
>  2331 root      20   0 55.8g  53g  51g R 19.9 28.6   1:19.97 FlowManagerThre
>  2324 root      18  -2 55.8g  53g  51g S 16.4 28.6   1:13.06 AFPacketeth710
>  2315 root      18  -2 55.8g  53g  51g S 11.9 28.6   0:49.75 AFPacketeth71
>  2328 root      18  -2 55.8g  53g  51g S 11.9 28.6   0:55.22 AFPacketeth714
>  2316 root      18  -2 55.8g  53g  51g S 10.9 28.6   0:54.53 AFPacketeth72
>  2326 root      18  -2 55.8g  53g  51g S 10.9 28.6   0:45.33 AFPacketeth712
>  2317 root      18  -2 55.8g  53g  51g S 10.4 28.6   0:38.21 AFPacketeth73
>  2323 root      18  -2 55.8g  53g  51g S  9.9 28.6   0:44.72 AFPacketeth79
>
> Dropped kernel packets:
>
> capture.kernel_drops      | AFPacketeth51             | 449774742
> capture.kernel_drops      | AFPacketeth52             | 48573
> capture.kernel_drops      | AFPacketeth53             | 104763
> capture.kernel_drops      | AFPacketeth54             | 108080
> capture.kernel_drops      | AFPacketeth55             | 95763
> capture.kernel_drops      | AFPacketeth56             | 105133
> capture.kernel_drops      | AFPacketeth57             | 103984
> capture.kernel_drops      | AFPacketeth58             | 100208
> capture.kernel_drops      | AFPacketeth59             | 86704
> capture.kernel_drops      | AFPacketeth510            | 95995
> capture.kernel_drops      | AFPacketeth511            | 89633
> capture.kernel_drops      | AFPacketeth512            | 94029
> capture.kernel_drops      | AFPacketeth513            | 95192
> capture.kernel_drops      | AFPacketeth514            | 106460
> capture.kernel_drops      | AFPacketeth515            | 109770
> capture.kernel_drops      | AFPacketeth516            | 108373

Can you share a full record from the stats.log?
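
(Each record in stats.log is a complete counter dump, so something like
"tail -n 300 /var/log/suricata/stats.log" will usually capture the most
recent record intact.)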

Cheers,
Victor

>
> idsuser@suricata:/var/log/suricata$ cat /etc/rc.local
> #!/bin/sh -e
> #
> # rc.local
> #
> # This script is executed at the end of each multiuser runlevel.
> # Make sure that the script will "exit 0" on success or any other
> # value on error.
> #
> # In order to enable or disable this script just change the execution
> # bits.
> #
> # By default this script does nothing.
>
> sudo sysctl -w net.core.rmem_max=536870912
> sudo sysctl -w net.core.wmem_max=67108864
> sudo sysctl -w net.ipv4.tcp_window_scaling=1
> sudo sysctl -w net.core.netdev_max_backlog=1000000
>
> # Set the MMRBC (max memory read byte count) on the bus to 4K
> sudo setpci -d 8086:10fb e6.b=2e
>
> # sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
> # sudo sysctl -w net.ipv4.tcp_wmem="4096 87380 67108864"
>
> sleep 2
> sudo rmmod ixgbe
> sleep 2
> sudo insmod \
>     /lib/modules/3.2.0-45-generic/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko \
>     FdirPballoc=3,3,3,3 RSS=16,16,16,16 DCA=2,2,2,2
> sleep 2
>
> # Set the RX ring size
> # sudo ethtool -G eth4 rx 4096
> sudo ethtool -G eth5 rx 4096
> # sudo ethtool -G eth6 rx 4096
> sudo ethtool -G eth7 rx 4096
>
> # Load-balance UDP flows
> # sudo ethtool -N eth4 rx-flow-hash udp4 sdfn
> sudo ethtool -N eth5 rx-flow-hash udp4 sdfn
> # sudo ethtool -N eth6 rx-flow-hash udp4 sdfn
> sudo ethtool -N eth7 rx-flow-hash udp4 sdfn
>
> sleep 2
> sudo ksh /home/idsuser/ixgbe-3.14.5/scripts/set_irq_affinity eth4 eth5 \
>     eth6 eth7
> sleep 2
> # sudo ifconfig eth4 up && sleep 1
> sudo ifconfig eth5 up && sleep 1
> # sudo ifconfig eth6 up && sleep 1
> sudo ifconfig eth7 up && sleep 1
> sleep 5
> sudo suricata -D -c /etc/suricata/suricata.yaml --af-packet
> sleep 10
> sudo barnyard2 -c /etc/suricata/barnyard2.conf -d /var/log/suricata -f \
>     unified2.alert -w /var/log/suricata/suricata.waldo -D
> exit 0
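>
> (A sanity check after the reload, for what it's worth: "sudo ethtool -g
> eth5" prints the pre-set maximum vs. current RX ring size, and "grep -c
> eth5 /proc/interrupts" should count roughly 16 queue vectors, plus a
> link-status interrupt, if the RSS=16 module parameter was applied.)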
>
> 2013/6/3 Fernando Sclavo <fsclavo@gmail.com>:
>
>     Correction to previous email: runmode IS set to workers
>
>
>     2013/6/3 Fernando Sclavo <fsclavo@gmail.com>:
>
>         Hi Peter/Eric, I will try "flow per cpu" and mail the results.
>         The same goes for "workers", but if I'm not mistaken we already
>         tried that and CPU usage was very high.
>
>         Queues and IRQ affinity: each NIC has 16 queues, each queue's
>         IRQ pinned to one core (via the Intel driver script), and
>         Suricata has CPU affinity enabled; we confirmed that each
>         thread stays on its own core.
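>
>         (For the record, that pinning can be eyeballed through the
>         standard /proc interface; the vector number N varies per box:)
>
>         # list eth5's queue vectors and their per-CPU interrupt counts
>         grep eth5 /proc/interrupts
>
>         # show the CPU affinity mask of a given vector N
>         cat /proc/irq/N/smp_affinity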
>
>         2013/6/3 Eric Leblond <eric@regit.org>:
>
>             Hi,
>
>             On Monday, June 3, 2013 at 15:54 +0200, Peter Manev wrote:
>             >
>             > On Mon, Jun 3, 2013 at 3:34 PM, Fernando Sclavo
>             > <fsclavo@gmail.com> wrote:
>             >         Hi all!
>             >         We are running Suricata 1.4.2 with two Intel X520
>             >         cards, each connected to a core switch in our
>             >         datacenter network. The average traffic is about
>             >         1~2 Gbps per port.
>             >         As you can see in the following top output, some
>             >         threads are significantly more loaded than others
>             >         (AFPacketeth54, for example), and those threads
>             >         are continuously dropping kernel packets. We
>             >         raised kernel parameters (buffers, rmem, etc.)
>             >         and lowered Suricata's flow timeouts to just a
>             >         few seconds, but we can't keep the drop counter
>             >         static when the CPU hits 99.9% for a specific
>             >         thread.
>             >         What can we do to balance the load better across
>             >         all threads and prevent this issue?
>             >
>             >         The server is a Dell R715 with two 16-core AMD
>             >         Opteron 6284 processors and 192 GB of RAM.
>             >
>             >         idsuser@suricata:~$ top -d2
>             >
>             >         top - 10:24:05 up 1 min,  2 users,  load average: 4.49, 1.14, 0.38
>             >         Tasks: 287 total,  15 running, 272 sleeping,   0 stopped,   0 zombie
>             >         Cpu(s): 30.3%us,  1.3%sy,  0.0%ni, 65.3%id,  0.0%wa,  0.0%hi,  3.1%si,  0.0%st
>             >         Mem:  198002932k total, 59619020k used, 138383912k free,    25644k buffers
>             >         Swap: 15624188k total,        0k used, 15624188k free,   161068k cached
>             >
>             >           PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>             >          2309 root      18  -2 55.8g  54g  51g R 99.9 28.6   0:20.96 AFPacketeth54
>             >          2314 root      18  -2 55.8g  54g  51g R 99.9 28.6   0:18.29 AFPacketeth59
>             >          2318 root      18  -2 55.8g  54g  51g R 99.9 28.6   0:12.90 AFPacketeth513
>             >          2319 root      18  -2 55.8g  54g  51g R 77.6 28.6   0:12.78 AFPacketeth514
>             >          2307 root      20   0 55.8g  54g  51g S 66.6 28.6   0:21.25 AFPacketeth52
>             >          2338 root      20   0 55.8g  54g  51g R 58.2 28.6   0:09.94 FlowManagerThre
>             >          2310 root      18  -2 55.8g  54g  51g S 51.2 28.6   0:15.35 AFPacketeth55
>             >          2320 root      18  -2 55.8g  54g  51g R 50.2 28.6   0:07.83 AFPacketeth515
>             >          2313 root      18  -2 55.8g  54g  51g S 48.7 28.6   0:11.66 AFPacketeth58
>             >          2321 root      18  -2 55.8g  54g  51g S 47.7 28.6   0:07.75 AFPacketeth516
>             >          2315 root      18  -2 55.8g  54g  51g R 45.2 28.6   0:12.18 AFPacketeth510
>             >          2306 root      22   2 55.8g  54g  51g R 37.3 28.6   0:12.32 AFPacketeth51
>             >          2312 root      18  -2 55.8g  54g  51g S 35.8 28.6   0:11.90 AFPacketeth57
>             >          2308 root      20   0 55.8g  54g  51g R 34.8 28.6   0:16.69 AFPacketeth53
>             >          2317 root      18  -2 55.8g  54g  51g R 33.3 28.6   0:07.93 AFPacketeth512
>             >          2316 root      18  -2 55.8g  54g  51g S 28.8 28.6   0:08.03 AFPacketeth511
>             >          2311 root      18  -2 55.8g  54g  51g S 24.9 28.6   0:10.51 AFPacketeth56
>             >          2331 root      18  -2 55.8g  54g  51g R 19.9 28.6   0:02.41 AFPacketeth710
>             >          2323 root      18  -2 55.8g  54g  51g S 17.9 28.6   0:03.60 AFPacketeth72
>             >          2336 root      18  -2 55.8g  54g  51g S 16.9 28.6   0:01.50 AFPacketeth715
>             >          2333 root      18  -2 55.8g  54g  51g S 14.9 28.6   0:02.14 AFPacketeth712
>             >          2330 root      18  -2 55.8g  54g  51g S 13.9 28.6   0:02.12 AFPacketeth79
>             >          2324 root      18  -2 55.8g  54g  51g R 11.9 28.6   0:02.96 AFPacketeth73
>             >          2329 root      18  -2 55.8g  54g  51g S 11.9 28.6   0:01.90 AFPacketeth78
>             >          2335 root      18  -2 55.8g  54g  51g S 11.9 28.6   0:01.44 AFPacketeth714
>             >          2334 root      18  -2 55.8g  54g  51g R 10.9 28.6   0:01.68 AFPacketeth713
>             >          2325 root      18  -2 55.8g  54g  51g S  9.4 28.6   0:02.38 AFPacketeth74
>             >          2326 root      18  -2 55.8g  54g  51g S  8.9 28.6   0:02.71 AFPacketeth75
>             >          2327 root      18  -2 55.8g  54g  51g S  7.5 28.6   0:01.98 AFPacketeth76
>             >          2332 root      18  -2 55.8g  54g  51g S  7.5 28.6   0:01.53 AFPacketeth711
>             >          2337 root      18  -2 55.8g  54g  51g S  7.0 28.6   0:01.09 AFPacketeth716
>             >          2328 root      18  -2 55.8g  54g  51g S  6.0 28.6   0:02.11 AFPacketeth77
>             >          2322 root      18  -2 55.8g  54g  51g R  5.5 28.6   0:03.78 AFPacketeth71
>             >             3 root      20   0     0    0    0 S  4.5  0.0   0:01.25 ksoftirqd/0
>             >            11 root      20   0     0    0    0 S  0.5  0.0   0:00.14 kworker/0:1
>             >
>             >         Regards
>             >
>             > Hi,
>             >
>             > You could try "runmode: workers".
>
>             From the thread names it seems that is already the case.
>
>             >
>             > What is your flow balance method?
>             >
>             > Can you try "flow per cpu" in the af-packet section of
>             > the yaml? ("cluster-type: cluster_cpu")
>
>             It could indeed help.
>
>             A few questions:
>
>             Is your IRQ affinity setting correct? (i.e. multiqueue in
>             use on the NICs and well balanced across CPUs?)
>
>             If you have a lot of UDP on your network, use ethtool to
>             load-balance it, as that is not done by default.
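>
>             For example, mirroring the command already in the rc.local
>             above (sdfn makes the hash include src/dst IP plus the UDP
>             ports):
>
>             sudo ethtool -N eth5 rx-flow-hash udp4 sdfn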
>
>             BR,
>             >
>             > Thank you
>             >
>             > --
>             > Regards,
>             > Peter Manev
>
</div><span class="HOEnZb"><font color="#888888">--<br>
---------------------------------------------<br>
Victor Julien<br>
<a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
---------------------------------------------<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
_______________________________________________
Suricata IDS Users mailing list: oisf-users@openinfosecfoundation.org
Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
OISF: http://www.openinfosecfoundation.org/