[Oisf-devel] suricata testing
Will Metcalf
william.metcalf at gmail.com
Mon May 17 12:03:24 UTC 2010
Ya, one thing I suggest off the top of my head is to increase
max-pending-packets: 50
in suricata.yaml to something like
max-pending-packets: 5000
The default value is way too low, especially for testing at gig speeds.
This will increase your mem usage, btw.
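If it helps, here's a rough sketch of making that change from the shell
(assuming a stock suricata.yaml sitting in the current directory):

  # bump max-pending-packets from the default 50 to 5000 (costs memory):
  sed -i 's/^max-pending-packets: 50$/max-pending-packets: 5000/' suricata.yaml
  grep '^max-pending-packets' suricata.yaml
  # should now print: max-pending-packets: 5000

Or just edit the file by hand, of course.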
Regards,
Will
On Mon, May 17, 2010 at 1:56 AM, rmkml <rmkml at free.fr> wrote:
> ok, next test results with suricata (same today's git / same sp][rent
> network traffic), but compiled with:
> ./configure --with-libyaml-includes=/home/suricata/yaml-0.1.3/include
> --with-libyaml-libraries=/home/suricata/yaml-0.1.3/src/.libs
> --enable-gccprofile
> make
> Attached: gprof output results (gzipped).
> ...
> Each sample counts as 0.01 seconds.
>   %   cumulative    self               self    total
>  time   seconds    seconds      calls  s/call  s/call  name
> 90.38   3359.34    3359.34     605708    0.01    0.01  B2gSearchBNDMq
>  4.25   3517.28     157.94 1676293603    0.00    0.00  MpmVerifyMatch
>  2.60   3614.10      96.82  931669013    0.00    0.00  B2gBloomHash
> ...
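>
> For anyone wanting to reproduce the profile, the workflow is roughly this
> (a sketch; the interface name is an assumption, not my exact setup):
>
>   # build with gcc profiling; -pg instrumentation writes gmon.out on exit
>   ./configure --enable-gccprofile   # plus the libyaml paths shown above
>   make
>   ./src/suricata -c suricata.yaml -i eth1
>   # ...send traffic, stop suricata cleanly, then post-process:
>   gprof ./src/suricata gmon.out > gprof-output.txt
>   gzip gprof-output.txt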
>
> Maybe an option already exists in the config for better performance?
> Regards
> Rmkml
>
>
>
> On Mon, 17 May 2010, rmkml wrote:
>
>> Hi,
>> 1) ok, first I replayed my last performance test, with the same result
>> (~1597% cpu).
>> 2) ok, now tested with 'CFLAGS="-O3 -march=i686 -mtune=i686" ./configure...':
>> same performance result (~1597% cpu).
>> 3) ok, downloaded and compiled today's suricata git
>> (548a3b2c93aed79e39a34ee9dd4c68f43a27f363); build commands are sketched
>> after item 4 below.
>> Slightly better (cpu) result: (~1597% cpu -> ~1458% cpu)
>> stats.log output:
>> decoder.pkts          | Decode1 | 6771890
>> decoder.pkts_per_sec  | Decode1 | 15203.400000
>> decoder.bytes         | Decode1 | 9859871840
>> decoder.bytes_per_sec | Decode1 | 22136150.400000
>> decoder.mbit_per_sec  | Decode1 | 177.089203
>> decoder.ipv4          | Decode1 | 6771890
>> decoder.ethernet      | Decode1 | 6771890
>> decoder.udp           | Decode1 | 6771890
>> decoder.avg_pkt_size  | Decode1 | 1456.000000
>> decoder.max_pkt_size  | Decode1 | 1456
>> ...
>> top output:
>> top - 10:22:16 up 4 days, 17:45, 3 users, load average: 15.25, 14.12, 9.60
>> Tasks: 237 total, 1 running, 236 sleeping, 0 stopped, 0 zombie
>> Cpu0  : 12.0%us, 20.0%sy, 66.0%ni,  2.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu1  : 92.0%us,  0.0%sy,  0.0%ni,  8.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu2  : 87.1%us,  1.0%sy,  0.0%ni, 11.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu3  : 92.9%us,  0.0%sy,  0.0%ni,  7.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu4  : 90.0%us,  0.0%sy,  0.0%ni, 10.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu5  : 92.1%us,  1.0%sy,  0.0%ni,  6.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu6  : 89.0%us,  1.0%sy,  0.0%ni, 10.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu7  : 92.0%us,  0.0%sy,  0.0%ni,  8.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu8  : 88.1%us,  2.0%sy,  0.0%ni,  9.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu9  : 91.0%us,  1.0%sy,  0.0%ni,  8.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu10 : 88.0%us,  0.0%sy,  0.0%ni, 12.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu11 : 93.0%us,  1.0%sy,  0.0%ni,  6.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu12 : 89.0%us,  1.0%sy,  0.0%ni, 10.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu13 : 90.1%us,  1.0%sy,  0.0%ni,  8.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu14 : 90.9%us,  0.0%sy,  0.0%ni,  9.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu15 : 86.1%us,  1.0%sy,  0.0%ni,  6.9%id, 0.0%wa, 0.0%hi, 5.9%si, 0.0%st
>> Mem: 12464792k total, 12040092k used, 424700k free, 136468k buffers
>> Swap: 10482404k total, 2232k used, 10480172k free, 7602164k cached
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 24508 root 15 0 495m 244m 1360 S 1458.9 2.0 177:09.86 suricata
>>
>> 4) next test: enable gcc profiling...
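>>
>> For reference, fetching and building that exact revision looks roughly like
>> this (the repository URL is an assumption; use your usual OISF git mirror):
>>
>>   git clone https://github.com/OISF/suricata.git && cd suricata
>>   git checkout 548a3b2c93aed79e39a34ee9dd4c68f43a27f363
>>   ./autogen.sh
>>   ./configure && make   # add the libyaml include/lib paths here if needed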
>> Regards
>> Rmkml
>>
>>
>> On Fri, 14 May 2010, rmkml wrote:
>>
>>> Hi Gurvinder,
>>> thx for the comments,
>>> sp$rent is not sending ICMP, only big UDP packets (in this case).
>>> Sorry, TCP is not possible at this time.
>>> Regards
>>> Rmkml
>>>
>>>
>>> On Fri, 14 May 2010, Gurvinder Singh wrote:
>>>
>>>> Thanks rmkml for the interesting numbers. I just wonder whether there were
>>>> any ICMP packets mixed in with the UDP traffic, as there is a known issue
>>>> on the todo list to fix a slowdown when ICMP and UDP appear together in
>>>> the traffic. If possible, can you also test the engine with TCP traffic or
>>>> with UDP-only traffic? ICMP mixed with TCP is fine, since ICMP handling
>>>> alongside TCP traffic works correctly.
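>>>>
>>>> If the traffic generator can't produce TCP, one alternative (not part of
>>>> your current setup; the interface and pcap names are placeholders) would
>>>> be to replay a captured TCP session with tcpreplay:
>>>>
>>>>   # loop a pcap indefinitely at roughly 1 Gbit/s out of eth1:
>>>>   tcpreplay --intf1=eth1 --mbps=1000 --loop=0 tcp-sample.pcap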
>>>> Cheers,
>>>> Gurvinder
>>>>
>>>> rmkml wrote:
>>>>>
>>>>> Thx Victor and Will for the replies.
>>>>> To answer Victor's question: no, test 1) is ~150Mbit UDP, while test 3)
>>>>> is ~1Gbit UDP...
>>>>> And for Will's question, I created a new test 4) with the emerging-threats
>>>>> rules (thx all and Matt) at 12.668 / 18.496.000 octets / 148Mbit
>>>>> (15% sending rate):
>>>>> {downloaded http://www.emergingthreats.net/rules/emerging-all.rules.zip
>>>>> today, unzipped it, and used it in the suricata engine without
>>>>> modification}
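>>>>> Roughly, the steps were (exact command details are approximate; the -s
>>>>> flag and interface name are from my notes, not a verified transcript):
>>>>>
>>>>>   wget http://www.emergingthreats.net/rules/emerging-all.rules.zip
>>>>>   unzip emerging-all.rules.zip
>>>>>   # load the unzipped file in addition to the yaml-configured rules:
>>>>>   ./src/suricata -c suricata.yaml -i eth1 -s emerging-all.rules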
>>>>> stats.log output:
>>>>> decoder.pkts          | Decode1 | 4946191
>>>>> decoder.pkts_per_sec  | Decode1 | 19003.333333
>>>>> decoder.bytes         | Decode1 | 7201654096
>>>>> decoder.bytes_per_sec | Decode1 | 27668853.333333
>>>>> decoder.mbit_per_sec  | Decode1 | 221.350827
>>>>> decoder.ipv4          | Decode1 | 4946191
>>>>> decoder.ethernet      | Decode1 | 4946191
>>>>> decoder.udp           | Decode1 | 4946191
>>>>> decoder.avg_pkt_size  | Decode1 | 1456.000000
>>>>> decoder.max_pkt_size  | Decode1 | 1456
>>>>> ...
>>>>> and top output:
>>>>> top - 16:21:44 up 1 day, 23:45, 5 users, load average: 9.90, 8.33, 6.15
>>>>> Tasks: 241 total, 1 running, 240 sleeping, 0 stopped, 0 zombie
>>>>> Cpu0  : 29.7%us, 29.7%sy, 23.8%ni, 16.8%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu1  : 48.5%us,  4.0%sy,  0.0%ni, 47.5%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu2  : 49.0%us,  2.9%sy,  0.0%ni, 48.0%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu3  : 49.0%us,  3.9%sy,  0.0%ni, 47.1%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu4  : 49.5%us,  3.0%sy,  0.0%ni, 47.5%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu5  : 48.0%us,  2.9%sy,  0.0%ni, 49.0%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu6  : 52.0%us,  2.0%sy,  0.0%ni, 46.0%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu7  : 52.5%us,  4.0%sy,  0.0%ni, 43.6%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu8  : 57.8%us,  2.0%sy,  0.0%ni, 40.2%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu9  : 47.5%us,  3.0%sy,  0.0%ni, 49.5%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu10 : 49.5%us,  4.0%sy,  0.0%ni, 46.5%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu11 : 49.5%us,  3.0%sy,  0.0%ni, 47.5%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu12 : 47.5%us,  3.0%sy,  0.0%ni, 49.5%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu13 : 49.5%us,  2.0%sy,  0.0%ni, 48.5%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu14 : 49.0%us,  3.0%sy,  0.0%ni, 48.0%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
>>>>> Cpu15 : 37.6%us,  2.0%sy,  0.0%ni, 33.7%id, 0.0%wa, 0.0%hi, 26.7%si, 0.0%st
>>>>> Mem:  12464792k total, 12001688k used,   463104k free,   174304k buffers
>>>>> Swap: 10482404k total, 2144k used, 10480260k free, 9287904k cached
>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>> 28321 root 15 0 380m 130m 1348 S 876.4 1.1 68:03.01 suricata
>>>>> Regards
>>>>> Rmkml
>>>>>
>>>>>
>>>>> On Fri, 14 May 2010, Victor Julien wrote:
>>>>>
>>>>>> Interesting numbers rmkml, thanks. The pkts_per_sec, bytes_per_sec and
>>>>>> mbit_per_sec counters are completely unreliable at this point; fixing
>>>>>> them is still on our todo list.
>>>>>>
>>>>>> Did both test runs send the same number of packets? I see that the run
>>>>>> with sigs did 4M packets, the bare run 27M.
>>>>>>
>>>>>> Cheers,
>>>>>> Victor
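>>>>>>
>>>>>> (That said, the per-second counters are at least internally consistent:
>>>>>> mbit_per_sec is just bytes_per_sec * 8 / 10^6. A quick check against
>>>>>> the Decode1 numbers above:
>>>>>>
>>>>>>   $ echo '22136150.4 * 8 / 1000000' | bc -l
>>>>>>   177.08920320000000000000
>>>>>>
>>>>>> which matches decoder.mbit_per_sec = 177.089203.)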
>