<div dir="ltr">Yes, I have read the article <span style="font-family:arial,sans-serif;font-size:14px">Christophe</span> recommended. All my threads are in use and every<span style="color:rgb(51,51,51);font-family:'Helvetica Neue Light',HelveticaNeue-Light,'Helvetica Neue',Helvetica,Arial,sans-serif;font-size:14px;line-height:19.600000381469727px;text-align:justify"> CPU core is receiving interrupts from the network card, yet a lot of packets are still dropped, so it seems my problem is not caused by </span><span style="font-family:arial,sans-serif;font-size:14px">a lack of NIC queues. In any case, thank you very much, Christophe.</span><div>
<span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:14px">what can be inferred from this record?</span></div><div><span style="font-family:arial,sans-serif;font-size:14px">11/6/2014 -- 16:58:29 - <Info> - Flow emergency mode over, back to</span><br style="font-family:arial,sans-serif;font-size:14px">
<span style="font-family:arial,sans-serif;font-size:14px">normal... unsetting FLOW_EMERGENCY bit (ts.tv_sec: 1402477082,</span><br style="font-family:arial,sans-serif;font-size:14px"><span style="font-family:arial,sans-serif;font-size:14px">ts.tv_usec:696562) flow_spare_q status(): 70% flows at the queue</span><span style="font-family:arial,sans-serif;font-size:14px"><br>
</span></div><div><span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:14px">I did not disable irqbalance before. I have now disabled it and ran Suricata for around 50 minutes this morning. Here are </span><span style="font-family:arial,sans-serif;font-size:14px">the latest stats.log and suricata.log.</span></div>
<div><span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div><font face="arial, sans-serif"><span style="font-size:14px"><a href="https://drive.google.com/file/d/0B6V3lnZlrEKPM3JSYXpFZU5sTkE/edit?usp=sharing">https://drive.google.com/file/d/0B6V3lnZlrEKPM3JSYXpFZU5sTkE/edit?usp=sharing</a></span></font><br>
</div><div><font face="arial, sans-serif"><span style="font-size:14px"><a href="https://drive.google.com/file/d/0B6V3lnZlrEKPVDBRclBrZHB4VkU/edit?usp=sharing">https://drive.google.com/file/d/0B6V3lnZlrEKPVDBRclBrZHB4VkU/edit?usp=sharing</a></span><br>
</font></div><div><span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:14px">Thanks again.</span></div><div><span style="font-family:arial,sans-serif;font-size:14px">Best wishes.</span></div>
<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-06-12 2:25 GMT+08:00 Peter Manev <span dir="ltr"><<a href="mailto:petermanev@gmail.com" target="_blank">petermanev@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On Wed, Jun 11, 2014 at 3:32 PM, X.qing <<a href="mailto:xqing.summer@gmail.com">xqing.summer@gmail.com</a>> wrote:<br>
><br>
> Is there anything wrong with my suricata.log?<br>
> I forgot to mention that this suricata.log was captured after I changed cluster_flow to cluster_cpu and went from 20 threads to 16, because I found the extra threads could not improve performance even though more hardware resources were used; this differs from the yaml file I sent before.<br>
> The rate of my drops is between 50% and 60% under 2-4 Gbps of traffic.<br>
><br>
> I have been really confused these days. I would greatly appreciate any suggestions you could offer.<br>
><br>
> thank you all. (ಥ_ಥ)<br>
<br>
</div>Did you explore any of the suggestions from Christophe?<br>
Have you disabled irqbalance ?<br>
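For completeness, one way to verify the irqbalance change took effect is to look at the NIC's IRQ lines in /proc/interrupts; a small sketch (the interface name eth0 and the file path are assumptions, adjust to your box):

```shell
#!/bin/sh
# With irqbalance stopped, each NIC queue keeps a fixed IRQ line; counting
# the matching lines in /proc/interrupts shows how many queues the card
# exposes, each of which can then be pinned via /proc/irq/<N>/smp_affinity.
# (Interface name and file path are examples.)
nic_queues() {
  # $1 = interface name, $2 = interrupts file (defaults to /proc/interrupts)
  grep -c "$1" "${2:-/proc/interrupts}"
}
```

For example, `nic_queues eth0` prints the number of IRQ lines registered for eth0.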
<br>
Besides the drops, I also see this in your suricata.log -<br>
11/6/2014 -- 16:58:29 - <Info> - Flow emergency mode over, back to<br>
normal... unsetting FLOW_EMERGENCY bit (ts.tv_sec: 1402477082,<br>
ts.tv_usec:696562) flow_spare_q status(): 70% flows at the queue<br>
<br>
Can you share your stats.log? Just the last entries (the last update/write in<br>
the log) - I assume it will be very long, so please use pastebin or<br>
something similar.<br>
<br>
thanks<br>
<div class="im HOEnZb"><br>
><br>
><br>
><br>
><br>
> 2014-06-11 17:23 GMT+08:00 X.qing <<a href="mailto:xqing.summer@gmail.com">xqing.summer@gmail.com</a>>:<br>
><br>
>><br>
>><br>
>> Of course~ suricata.log is attached.<br>
>><br>
</div><div class="HOEnZb"><div class="h5">>> (It is very nice of you.( ͡° ͜ʖ ͡°) )<br>
>><br>
>><br>
>> 2014-06-11 16:33 GMT+08:00 Peter Manev <<a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a>>:<br>
>><br>
>>> On Sun, Jun 8, 2014 at 11:01 AM, Christophe Vandeplas<br>
>>> <<a href="mailto:christophe@vandeplas.com">christophe@vandeplas.com</a>> wrote:<br>
>>> > Hi,<br>
>>> ><br>
>>> ><br>
>>> > What kind of drop do you have?<br>
>>> > - capture.kernel_drops<br>
>>> > - tcp.segment_memcap_drop<br>
>>> > - tcp.ssn_memcap_drop<br>
>>> ><br>
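(For reference, a quick shell sketch to pull the most recent reading of each of those counters out of stats.log; the default log path is an example:)

```shell
#!/bin/sh
# Print the most recent line for each of the drop counters listed above.
# stats.log appends a full counter dump periodically, so the last match
# per counter name is the current value. (Log path is an example.)
last_drops() {
  for c in capture.kernel_drops tcp.segment_memcap_drop tcp.ssn_memcap_drop; do
    grep "$c" "${1:-/var/log/suricata/stats.log}" | tail -n 1
  done
}
```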
>>> > Lower the number of threads in the af-packet section to the number of<br>
>>> > cores your system has. (cat /proc/cpuinfo | fgrep processor | wc -l )<br>
>>> ><br>
>>> > Run suricata with no rules, and tweak the configuration, you should<br>
>>> > have (almost) no packet drop before you activate rules.<br>
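A hedged sketch of such a no-rules baseline run, using Suricata's -S option to load only the given rule file (the config path and interface name are assumptions):

```shell
#!/bin/sh
# Start Suricata with an empty, exclusive rule file to measure the
# no-rules packet-drop baseline. Paths and the interface are examples.
: > /tmp/empty.rules
if command -v suricata >/dev/null 2>&1; then
  suricata -c /etc/suricata/suricata.yaml -S /tmp/empty.rules --af-packet=eth0
else
  echo "suricata binary not found; adjust paths for your install"
fi
```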
>>> ><br>
>>> > After having made changes in the yaml configuration file I usually:<br>
>>> > - stop suricata<br>
>>> > - empty the logfiles<br>
>>> > - start suricata<br>
>>> > This way there's no risk of looking at older logs and misinterpreting<br>
>>> > configuration changes.<br>
>>> ><br>
>>> ><br>
>>> > If possible, link your stats.log to a monitoring tool to create<br>
>>> > graphs. This way you can correlate packet drops by suricata with other<br>
>>> > events on the system. I've written an article about this :<br>
>>> > <a href="http://christophe.vandeplas.com/2013/11/suricata-monitoring-with-zabbix-or-other.html" target="_blank">http://christophe.vandeplas.com/2013/11/suricata-monitoring-with-zabbix-or-other.html</a><br>
>>> > But also other scripts exist.<br>
>>> > Make sure you edit the suricata_stats.py script with the number of<br>
>>> > threads configured in suricata.yaml<br>
>>> ><br>
>>> ><br>
>>> > If your drops are capture.kernel_drops, then :<br>
>>> > Have you read this article?<br>
>>> > <a href="http://christophe.vandeplas.com/2013/11/suricata-capturekerneldrops-caused-by.html" target="_blank">http://christophe.vandeplas.com/2013/11/suricata-capturekerneldrops-caused-by.html</a><br>
>>> > Please do the first part "Confirmation of the problem" and see if you<br>
>>> > also have the problem caused by the lack of NIC queues.<br>
>>> > In a few words:<br>
>>> > - start suricata<br>
>>> > - as root, run "top -H" and check how many AFPacketethXX threads are<br>
>>> > generating load.<br>
>>> > - if it's only one thread, then the problem has been pinpointed.<br>
>>> > However working with cluster_flow should solve this problem. Make sure<br>
>>> > you read the rest of the article then.<br>
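The `top -H` check can also be scripted; this sketch counts the AFPacket threads above an (arbitrary) 5% CPU threshold in `top -b -H -n 1` output, where %CPU is the 9th column in top's default layout:

```shell
#!/bin/sh
# Count AFPacket capture threads that are actually generating load.
# Reads `top -b -H -n 1` output on stdin; the 5% threshold is arbitrary.
busy_afpacket() {
  awk '/AFPacket/ && $9 + 0 > 5 { n++ } END { print n + 0 }'
}
```

Usage: `top -b -H -n 1 | busy_afpacket` - if it prints 1, you have hit the single-busy-thread situation the article describes.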
>>> ><br>
>>> ><br>
>>> > Kind regards<br>
>>> > Christophe<br>
>>> ><br>
>>> ><br>
>>><br>
>>><br>
>>> Can you share your suricata.log as well please?<br>
>>> What is the output of<br>
>>> ethtool -k your_interface<br>
>>> ?<br>
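On that note, a small sketch for spotting enabled offloads in `ethtool -k` output (lines look like `generic-receive-offload: on`); GRO and LRO in particular merge packets before the capture thread sees them, which is a common source of capture trouble:

```shell
#!/bin/sh
# List the offload features reported as "on" in `ethtool -k <iface>`
# output (read on stdin). GRO/LRO are the usual suspects.
offloads_on() {
  awk -F': ' '/offload:/ && $2 ~ /^on/ { print $1 }'
}
```

Usage: `ethtool -k eth0 | offloads_on`; anything listed can then be turned off with `ethtool -K eth0 gro off lro off` (interface name is an example).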
>><br>
>><br>
>><br>
><br>
<br>
<br>
<br>
</div></div><span class="HOEnZb"><font color="#888888">--<br>
Regards,<br>
Peter Manev<br>
</font></span></blockquote></div><br></div>