[Oisf-users] tuning
Peter Manev
petermanev at gmail.com
Fri Jun 13 08:33:33 UTC 2014
On Fri, Jun 13, 2014 at 10:24 AM, Peter Manev <petermanev at gmail.com> wrote:
> On Thu, Jun 12, 2014 at 6:56 PM, Peter Manev <petermanev at gmail.com> wrote:
>> On Thu, Jun 12, 2014 at 11:41 AM, X.qing <xqing.summer at gmail.com> wrote:
>>> OK, I get it.
>>> The latest stats.log is at http://pastebin.com/P81PKgFf, after I
>>> disabled vlan tracking.
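>>>
>>> For reference, vlan tracking is controlled by the top-level vlan
>>> section of suricata.yaml - a minimal sketch of the change (value
>>> shown is the one used here):
>>>
>>> vlan:
>>>   use-for-tracking: false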
>>
>>
>> What is the output of
>> ethtool -n eth3 rx-flow-hash udp6
>> ethtool -n eth3 rx-flow-hash udp4
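>>
>> If the hash fields ever need adjusting, ethtool can also set them -
>> for example (illustrative; driver support varies):
>>
>> ethtool -N eth3 rx-flow-hash udp4 sdfn
>>
>> where s/d/f/n select source IP, destination IP and the two L4 port
>> fields for the hash.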
>>
>> Disable these - change from:
>> midstream: true
>> async-oneside: true
>>
>> to
>>
>> midstream: false
>> async-oneside: false
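>>
>> Both of these live under the stream section of suricata.yaml - a
>> minimal sketch (other keys omitted):
>>
>> stream:
>>   midstream: false
>>   async-oneside: false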
>>
>> What is the output of the first 5 lines of:
>> tcpstat -i eth3 -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\n" 1
>>
>> Try these settings for flow in suricata.yaml:
>> flow:
>>   memcap: 4gb
>>   hash-size: 15728640
>>   prealloc: 8000000
>>   emergency-recovery: 30
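>>
>> After a restart you can verify the values were picked up with
>> something like:
>>
>> suricata --dump-config | grep '^flow\.'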
>>
>>
>> What is the output of:
>> ethtool -g eth3
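>>
>> If the current RX ring is below the pre-set maximum, it can usually
>> be raised with something like:
>>
>> ethtool -G eth3 rx 4096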
>>
>> Make sure you use 16 threads in af-packet
>> and that you have cluster-type: cluster_cpu
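>>
>> Something along these lines in the af-packet section (cluster-id is
>> just an illustrative value):
>>
>> af-packet:
>>   - interface: eth3
>>     threads: 16
>>     cluster-id: 99
>>     cluster-type: cluster_cpu
>>     defrag: yes
>>     use-mmap: yes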
>>
>> Change to:
>> http:
>>   enabled: yes
>>   memcap: 4gb
>>
>> also
>>
>> dns:
>>   # memcaps. Globally and per flow/state.
>>   global-memcap: 4gb
>>   state-memcap: 512kb
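>>
>> In the 2.0-era suricata.yaml both of these sit under
>> app-layer/protocols - a sketch assuming that default layout (other
>> keys omitted):
>>
>> app-layer:
>>   protocols:
>>     http:
>>       enabled: yes
>>       memcap: 4gb
>>     dns:
>>       global-memcap: 4gb
>>       state-memcap: 512kb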
>>
>>
>>
>> I see that the majority of the packets are 240-250 bytes in size...
>> just curious - what would be the reason for that?
>>
>> Thanks
>>
>>
>> --
>> Regards,
>> Peter Manev
>
>
>
> X.qing ->
> ------------------------------------------------------------
> ethtool -n eth3 rx-flow-hash udp6
> UDP over IPV6 flows use these fields for computing Hash flow key:
> IP SA
> IP DA
> L4 bytes 0 & 1 [TCP/UDP src port]
> L4 bytes 2 & 3 [TCP/UDP dst port]
>
> ethtool -n eth3 rx-flow-hash udp4
> UDP over IPV4 flows use these fields for computing Hash flow key:
> IP SA
> IP DA
> L4 bytes 0 & 1 [TCP/UDP src port]
> L4 bytes 2 & 3 [TCP/UDP dst port]
>
> tcpstat -i eth3 -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\n" 1
> Time:1402638168 n=1233147 avg=243.74 stddev=389.33 bps=2404526776.00
> Time:1402638169 n=1338878 avg=242.22 stddev=385.85 bps=2594470896.00
> Time:1402638170 n=1337129 avg=241.71 stddev=386.80 bps=2585554264.00
> Time:1402638171 n=1343252 avg=234.47 stddev=374.11 bps=2519645368.00
> Time:1402638172 n=1404989 avg=237.95 stddev=378.84 bps=2674528040.00
> Time:1402638173 n=1183470 avg=238.35 stddev=379.70 bps=2256653072.00
>
> ethtool -g eth3
> Ring parameters for eth3:
> Pre-set maximums:
> RX: 4096
> RX Mini: 0
> RX Jumbo: 0
> TX: 4096
> Current hardware settings:
> RX: 4096
> RX Mini: 0
> RX Jumbo: 0
> TX: 512
>
> The system's performance showed no improvement, judging by the drop
> rate, after changing the yaml file.
>
> The majority of the packets being 240-250 bytes in size is a
> characteristic of the service the internet equipment offers.
>
>
> thanks
> best wishes :)
> X.qing <-
>
>
> --
> Regards,
> Peter Manev
Ok.
So this is a case where you have a lot of small packets - about 1.4
million pps at ~240 bytes each, which works out to roughly 2.7 Gbps
(for comparison, at an average packet size of 850 bytes the same
packet rate would be about 9.5 Gbps).
Then we have 2 options (I think):
1 - You need better CPU speed (>2.0 GHz, preferably >= 2.7 GHz)
2 - Try with cluster_flow and 22 threads (with the current yaml) - see
the sketch below.
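
That second option is only a change to the af-packet section - a
minimal sketch of the keys involved:

af-packet:
  - interface: eth3
    threads: 22
    cluster-type: cluster_flow
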
Then after it runs for a while - please send a pastebin output of your
stats.log (the last section).
Thanks
--
Regards,
Peter Manev