<div dir="ltr"><div>The last section of stats.log after i ran suricata for about half an hours.(<span style="font-family:arial,sans-serif;font-size:14px">cluster_flow and 22 threads</span>)</div><a href="http://pastebin.com/RkT4UD6j">http://pastebin.com/RkT4UD6j</a><div>
<br></div><div>:)</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-06-13 16:33 GMT+08:00 Peter Manev <span dir="ltr"><<a href="mailto:petermanev@gmail.com" target="_blank">petermanev@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Fri, Jun 13, 2014 at 10:24 AM, Peter Manev <<a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a>> wrote:<br>
> On Thu, Jun 12, 2014 at 6:56 PM, Peter Manev <<a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a>> wrote:<br>
>> On Thu, Jun 12, 2014 at 11:41 AM, X.qing <<a href="mailto:xqing.summer@gmail.com">xqing.summer@gmail.com</a>> wrote:<br>
>>> OK, I get it.
>>> The latest stats.log http://pastebin.com/P81PKgFf after I disabled
>>> vlan tracking.
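>>> (That is, with the vlan tracking setting in suricata.yaml turned off - a
>>> minimal sketch of the change:
>>>
>>> vlan:
>>>   # do not use the vlan id when hashing packets into flows
>>>   use-for-tracking: false
>>> )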
>>
>>
>> What is the output of:
>> ethtool -n eth3 rx-flow-hash udp6
>> ethtool -n eth3 rx-flow-hash udp4
>>
>> Disable those:
>> midstream: true
>> async-oneside: true
>>
>> to
>>
>> midstream: false
>> async-oneside: false
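>>
>> For reference, in suricata.yaml those two options live under the stream
>> section - a minimal sketch:
>>
>> stream:
>>   midstream: false
>>   async-oneside: false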
>>
>> What is the output of the first 5 lines of:
>> tcpstat -i eth3 -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\n" 1
>>
>> Try these settings for flow in suricata.yaml:
>> flow:
>>   memcap: 4gb
>>   hash-size: 15728640
>>   prealloc: 8000000
>>   emergency-recovery: 30
>>
>>
>> What is the output of:
>> ethtool -g eth3
>>
>> Make sure you use 16 threads in af-packet
>> and that you have cluster-type: cluster_cpu.
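>>
>> A minimal af-packet sketch for that (cluster-id, defrag and use-mmap here
>> are just the usual example values, not something specific to your setup):
>>
>> af-packet:
>>   - interface: eth3
>>     threads: 16
>>     cluster-id: 99
>>     cluster-type: cluster_cpu
>>     defrag: yes
>>     use-mmap: yes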
>>
>> Change to:
>> http:
>>   enabled: yes
>>   memcap: 4gb
>>
>> also
>>
>> dns:
>>   # memcaps. Globally and per flow/state.
>>   global-memcap: 4gb
>>   state-memcap: 512kb
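>>
>> In the 2.x suricata.yaml both of those sit under app-layer > protocols -
>> roughly:
>>
>> app-layer:
>>   protocols:
>>     dns:
>>       global-memcap: 4gb
>>       state-memcap: 512kb
>>     http:
>>       enabled: yes
>>       memcap: 4gb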
>>
>>
>>
>> I see that the majority of the packets are 240-250 bytes in size... Just
>> curious - what would be the reason for that?
>>
>> Thanks
>>
>>
>> --
>> Regards,
>> Peter Manev
>
>
>
> X.qing ->
> ------------------------------------------------------------
> ethtool -n eth3 rx-flow-hash udp6
> UDP over IPV6 flows use these fields for computing Hash flow key:
> IP SA
> IP DA
> L4 bytes 0 & 1 [TCP/UDP src port]
> L4 bytes 2 & 3 [TCP/UDP dst port]
>
> ethtool -n eth3 rx-flow-hash udp4
> UDP over IPV4 flows use these fields for computing Hash flow key:
> IP SA
> IP DA
> L4 bytes 0 & 1 [TCP/UDP src port]
> L4 bytes 2 & 3 [TCP/UDP dst port]
>
> tcpstat -i eth3 -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\n" 1
> Time:1402638168 n=1233147 avg=243.74 stddev=389.33 bps=2404526776.00
> Time:1402638169 n=1338878 avg=242.22 stddev=385.85 bps=2594470896.00
> Time:1402638170 n=1337129 avg=241.71 stddev=386.80 bps=2585554264.00
> Time:1402638171 n=1343252 avg=234.47 stddev=374.11 bps=2519645368.00
> Time:1402638172 n=1404989 avg=237.95 stddev=378.84 bps=2674528040.00
> Time:1402638173 n=1183470 avg=238.35 stddev=379.70 bps=2256653072.00
>
> ethtool -g eth3
> Ring parameters for eth3:
> Pre-set maximums:
> RX: 4096
> RX Mini: 0
> RX Jumbo: 0
> TX: 4096
> Current hardware settings:
> RX: 4096
> RX Mini: 0
> RX Jumbo: 0
> TX: 512
>
> The system's performance showed no improvement, judging by the drop
> rate, after changing the yaml file.
>
> The majority of the packets being 240-250 bytes in size is a feature of
> the service the internet equipment offers.
>
>
> Thanks
> Best wishes :)
> X.qing <-
>
>
> --
> Regards,
> Peter Manev


Ok.
So this is a case where you have a lot of small packets - about 1.4 million
pps at ~240 bytes each, i.e. roughly 2.7 Gbps (1.4M x 240 x 8 bits).
Just for comparison: if the avg packet size were 850 bytes, the same packet
rate would put the traffic at about 9.5 Gbps (1.4M x 850 x 8 bits).
Then we have 2 options (I think):
1 - You need better CPU speed (>2.0 GHz, preferably >= 2.7 GHz)
2 - Try with cluster_flow and 22 threads (with the current yaml)
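
For option 2 that would mean, in the af-packet section of suricata.yaml,
roughly:

    threads: 22
    cluster-type: cluster_flow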

Then after it runs for a while - please send a pastebin output of your
stats.log (the last section).
<div class="HOEnZb"><div class="h5"><br>
<br>
Thanks<br>
<br>
<br>
<br>
--<br>
Regards,<br>
Peter Manev<br>