On Mon, Jun 3, 2013 at 3:34 PM, Fernando Sclavo <fsclavo@gmail.com> wrote:
> Hi all!
> We are running Suricata 1.4.2 with two Intel X520 cards, each connected
> to one of the core switches in our datacenter network. The average
> traffic is about 1-2 Gbps per port.
> As you can see in the top output below, some threads are significantly
> more loaded than others (AFPacketeth54, for example), and those threads
> are continuously dropping packets in the kernel. We raised the kernel
> parameters (buffers, rmem, etc.) and lowered Suricata's flow timeouts
> to just a few seconds, but we cannot stop the drop counter from growing
> while the CPU sits at 99.9% for a specific thread.
> How can we balance the load better across all threads to prevent this?
>
> The server is a Dell R715 with 2x 16-core AMD Opteron 6284 CPUs and
> 192 GB of RAM.
>
> idsuser@suricata:~$ top -d2
>
> top - 10:24:05 up 1 min,  2 users,  load average: 4.49, 1.14, 0.38
> Tasks: 287 total,  15 running, 272 sleeping,   0 stopped,   0 zombie
> Cpu(s): 30.3%us,  1.3%sy,  0.0%ni, 65.3%id,  0.0%wa,  0.0%hi,  3.1%si,  0.0%st
> Mem:  198002932k total,  59619020k used, 138383912k free,     25644k buffers
> Swap:  15624188k total,         0k used,  15624188k free,    161068k cached
>
>   PID USER PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
>  2309 root 18  -2 55.8g  54g  51g R 99.9 28.6  0:20.96 AFPacketeth54
>  2314 root 18  -2 55.8g  54g  51g R 99.9 28.6  0:18.29 AFPacketeth59
>  2318 root 18  -2 55.8g  54g  51g R 99.9 28.6  0:12.90 AFPacketeth513
>  2319 root 18  -2 55.8g  54g  51g R 77.6 28.6  0:12.78 AFPacketeth514
>  2307 root 20   0 55.8g  54g  51g S 66.6 28.6  0:21.25 AFPacketeth52
>  2338 root 20   0 55.8g  54g  51g R 58.2 28.6  0:09.94 FlowManagerThre
>  2310 root 18  -2 55.8g  54g  51g S 51.2 28.6  0:15.35 AFPacketeth55
>  2320 root 18  -2 55.8g  54g  51g R 50.2 28.6  0:07.83 AFPacketeth515
>  2313 root 18  -2 55.8g  54g  51g S 48.7 28.6  0:11.66 AFPacketeth58
>  2321 root 18  -2 55.8g  54g  51g S 47.7 28.6  0:07.75 AFPacketeth516
>  2315 root 18  -2 55.8g  54g  51g R 45.2 28.6  0:12.18 AFPacketeth510
>  2306 root 22   2 55.8g  54g  51g R 37.3 28.6  0:12.32 AFPacketeth51
>  2312 root 18  -2 55.8g  54g  51g S 35.8 28.6  0:11.90 AFPacketeth57
>  2308 root 20   0 55.8g  54g  51g R 34.8 28.6  0:16.69 AFPacketeth53
>  2317 root 18  -2 55.8g  54g  51g R 33.3 28.6  0:07.93 AFPacketeth512
>  2316 root 18  -2 55.8g  54g  51g S 28.8 28.6  0:08.03 AFPacketeth511
>  2311 root 18  -2 55.8g  54g  51g S 24.9 28.6  0:10.51 AFPacketeth56
>  2331 root 18  -2 55.8g  54g  51g R 19.9 28.6  0:02.41 AFPacketeth710
>  2323 root 18  -2 55.8g  54g  51g S 17.9 28.6  0:03.60 AFPacketeth72
>  2336 root 18  -2 55.8g  54g  51g S 16.9 28.6  0:01.50 AFPacketeth715
>  2333 root 18  -2 55.8g  54g  51g S 14.9 28.6  0:02.14 AFPacketeth712
>  2330 root 18  -2 55.8g  54g  51g S 13.9 28.6  0:02.12 AFPacketeth79
>  2324 root 18  -2 55.8g  54g  51g R 11.9 28.6  0:02.96 AFPacketeth73
>  2329 root 18  -2 55.8g  54g  51g S 11.9 28.6  0:01.90 AFPacketeth78
>  2335 root 18  -2 55.8g  54g  51g S 11.9 28.6  0:01.44 AFPacketeth714
>  2334 root 18  -2 55.8g  54g  51g R 10.9 28.6  0:01.68 AFPacketeth713
>  2325 root 18  -2 55.8g  54g  51g S  9.4 28.6  0:02.38 AFPacketeth74
>  2326 root 18  -2 55.8g  54g  51g S  8.9 28.6  0:02.71 AFPacketeth75
>  2327 root 18  -2 55.8g  54g  51g S  7.5 28.6  0:01.98 AFPacketeth76
>  2332 root 18  -2 55.8g  54g  51g S  7.5 28.6  0:01.53 AFPacketeth711
>  2337 root 18  -2 55.8g  54g  51g S  7.0 28.6  0:01.09 AFPacketeth716
>  2328 root 18  -2 55.8g  54g  51g S  6.0 28.6  0:02.11 AFPacketeth77
>  2322 root 18  -2 55.8g  54g  51g R  5.5 28.6  0:03.78 AFPacketeth71
>     3 root 20   0     0    0    0 S  4.5  0.0  0:01.25 ksoftirqd/0
>    11 root 20   0     0    0    0 S  0.5  0.0  0:00.14 kworker/0:1
>
> Regards

Hi,

You could try "runmode: workers".
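In workers mode each capture thread runs the full pipeline (capture,
decode, stream, detect, output) on its own CPU, instead of handing
packets off between thread stages. Roughly, in suricata.yaml:

  # run the whole packet pipeline inside each capture thread
  runmode: workers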

What is your flow balance method?
Can you try "flow per cpu" balancing in the af-packet section of the
yaml ("cluster-type: cluster_cpu")? A sketch follows below.
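Going by your thread names, the capture interfaces look like eth5 and
eth7 with 16 threads each, so the af-packet section would be something
along these lines (the cluster-id values are just examples; they only
need to differ per interface):

  af-packet:
    - interface: eth5
      cluster-id: 99
      # cluster_cpu: packets are handled on the CPU that received
      # them; cluster_flow hashes per flow instead
      cluster-type: cluster_cpu
      defrag: yes
      threads: 16
    - interface: eth7
      cluster-id: 98
      cluster-type: cluster_cpu
      defrag: yes
      threads: 16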

Thank you

--
Regards,
Peter Manev