[Oisf-users] Suricata, 10k rules, 10Gbit/sec and lots of RAM
Jason Holmes
jholmes at psu.edu
Mon Jan 11 17:32:12 UTC 2016
Hi Peter,
To the best of my knowledge, the binary/paths/configs/etc. are in the
correct place. Suricata starts up successfully as long as I don't try to
use custom values.

It occurs to me that I might not have been 100% clear about that:
Suricata only segfaults on startup if I try to use a custom profile.

This works:

detect:
  profile: high
  custom-values:
    toclient-groups: 1000
    toserver-groups: 1000

This doesn't:

detect:
  profile: custom
  custom-values:
    toclient-groups: 1000
    toserver-groups: 1000

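(As an aside, one quick way to catch config problems before a full
start, assuming the -T test mode works in this branch, is to run
"suricata -T -c /etc/suricata/suricata.yaml -v", which loads the config
and rules and then exits without capturing traffic.)
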
I'll look at things here some more and see if I can't figure out why
it's segfaulting when I try to use custom values.
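
For anyone following along: the second backtrace quoted below dies at
detect-engine.c:1025 inside TAILQ_FOREACH, which usually means the list
head being walked is NULL. My guess (unverified) is that the lookup for
the custom-values config node returns nothing and the loop macro then
dereferences that NULL. A minimal sketch of the failure mode, using
hypothetical stand-in names rather than Suricata's actual code:

  #include <sys/queue.h>
  #include <stdio.h>

  /* Hypothetical stand-in for a config tree node. */
  struct opt_node {
      const char *name;
      TAILQ_ENTRY(opt_node) next;
  };
  TAILQ_HEAD(opt_list, opt_node);

  static void walk_options(struct opt_list *head)
  {
      struct opt_node *opt;
      if (head == NULL)   /* without this guard, TAILQ_FOREACH reads */
          return;         /* head->tqh_first from NULL and segfaults */
      TAILQ_FOREACH(opt, head, next)
          printf("option: %s\n", opt->name);
  }

  int main(void)
  {
      walk_options(NULL);  /* safe with the guard; crashes without it */
      return 0;
  }

If that's what's happening, a NULL check on the lookup result before the
loop would turn the segfault into a clean config error.
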
Thanks,
--
Jason Holmes
On 1/11/16 12:22 PM, Peter Manev wrote:
> On Mon, Jan 11, 2016 at 6:02 PM, Jason Holmes <jholmes at psu.edu> wrote:
>> Hi Peter,
>>
>> I had used "detect:" instead of "detect-engine:" because that's the
>> syntax in the suricata.yaml included in the dev-detect-v173 branch.
>
> You are quite right indeed (I had it reversed from having too many
> flavors running) -
> https://github.com/inliniac/suricata/blob/dev-detect-grouping-v173/suricata.yaml.in#L593
>
> Do you have the binary/paths/configs/etc. in the correct place and
> all that sort of thing?
> It is working for me when i try it -
>
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3798) <Info>
> (SigAddressPrepareStage1) -- 17734 signatures processed. 1030 are
> IP-only rules, 5645 are inspecting packet payload, 13206 inspect
> application layer, 99 are decoder event only
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3801) <Info>
> (SigAddressPrepareStage1) -- building signature grouping structure,
> stage 1: preprocessing rules... complete
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3673) <Info>
> (RulesGroupByPorts) -- TCP toserver: 41 port groups, 41 unique SGH's,
> 0 copies
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3673) <Info>
> (RulesGroupByPorts) -- TCP toclient: 21 port groups, 21 unique SGH's,
> 0 copies
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3673) <Info>
> (RulesGroupByPorts) -- UDP toserver: 41 port groups, 31 unique SGH's,
> 10 copies
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3673) <Info>
> (RulesGroupByPorts) -- UDP toclient: 21 port groups, 15 unique SGH's,
> 6 copies
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3421) <Info>
> (RulesGroupByProto) -- OTHER toserver: 254 proto groups, 3 unique
> SGH's, 251 copies
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:3457) <Info>
> (RulesGroupByProto) -- OTHER toclient: 254 proto groups, 0 unique
> SGH's, 254 copies
> [29092] 11/1/2016 -- 18:20:48 - (detect.c:4188) <Info>
> (SigAddressPrepareStage4) -- Unique rule groups: 111
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:822) <Info>
> (MpmStoreReportStats) -- Builtin MPM "toserver TCP packet": 30
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:822) <Info>
> (MpmStoreReportStats) -- Builtin MPM "toclient TCP packet": 19
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:822) <Info>
> (MpmStoreReportStats) -- Builtin MPM "toserver TCP stream": 33
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:822) <Info>
> (MpmStoreReportStats) -- Builtin MPM "toclient TCP stream": 21
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:822) <Info>
> (MpmStoreReportStats) -- Builtin MPM "toserver UDP packet": 30
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:822) <Info>
> (MpmStoreReportStats) -- Builtin MPM "toclient UDP packet": 14
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:822) <Info>
> (MpmStoreReportStats) -- Builtin MPM "other IP packet": 2
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_uri": 9
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_raw_uri": 2
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_header": 9
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toclient http_header": 4
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_user_agent": 3
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_raw_header": 1
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_method": 4
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver file_data": 1
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toclient file_data": 5
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toclient http_stat_code": 1
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_client_body": 5
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toserver http_cookie": 2
> [29092] 11/1/2016 -- 18:20:48 - (detect-engine-mpm.c:829) <Info>
> (MpmStoreReportStats) -- AppLayer MPM "toclient http_cookie": 3
>
>
>
>
>>
>> Per your suggestion, I tried:
>>
>> detect-engine:
>>   profile: custom
>>   custom-values:
>>     toclient-groups: 1000
>>     toserver-groups: 1000
>>
>> and it still crashed. I ran it inside of gdb and got this:
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp-sse42.S:164
>> 164 movdqu (%rdi), %xmm1
>> (gdb) bt
>> #0 __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp-sse42.S:164
>> #1 0x00000000005a84ce in SetupDelayedDetect (suri=0x7fffffffe2d0) at
>> suricata.c:1944
>> #2 0x00000000005a9b0c in main (argc=6, argv=0x7fffffffe4d8) at
>> suricata.c:2299
>>
>>
>> If I try:
>>
>> detect:
>>   profile: custom
>>   custom-values:
>>     toclient-groups: 1000
>>     toserver-groups: 1000
>>
>> I get:
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> 0x000000000049da75 in DetectEngineCtxLoadConf (de_ctx=0x19e862e0) at
>> detect-engine.c:1025
>> 1025 TAILQ_FOREACH(opt, &de_ctx_custom->head, next) {
>> (gdb) bt
>> #0 0x000000000049da75 in DetectEngineCtxLoadConf (de_ctx=0x19e862e0) at
>> detect-engine.c:1025
>> #1 0x000000000049d42f in DetectEngineCtxInitReal (minimal=0, prefix=0x0) at
>> detect-engine.c:784
>> #2 0x000000000049d4ff in DetectEngineCtxInit () at detect-engine.c:825
>> #3 0x00000000005a9cc0 in main (argc=6, argv=0x7fffffffe4d8) at
>> suricata.c:2313
>>
>> Thanks,
>>
>> --
>> Jason Holmes
>>
>>
>> On 1/11/16 11:49 AM, Peter Manev wrote:
>>>
>>> On Mon, Jan 11, 2016 at 5:16 PM, Jason Holmes <jholmes at psu.edu> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I just wanted to give some feedback on the grouping code branch
>>>> (dev-detect-grouping-v173). I was running 3.0rc3 with:
>>>>
>>>> detect-engine:
>>>>   - profile: custom
>>>>   - custom-values:
>>>>       toclient-src-groups: 200
>>>>       toclient-dst-groups: 200
>>>>       toclient-sp-groups: 200
>>>>       toclient-dp-groups: 300
>>>>       toserver-src-groups: 200
>>>>       toserver-dst-groups: 400
>>>>       toserver-sp-groups: 200
>>>>       toserver-dp-groups: 250
>>>>
>>>> I tested dev-detect-grouping-v173 with:
>>>>
>>>> detect:
>>>>   profile: custom
>>>>   custom-values:
>>>>     toclient-groups: 1000
>>>>     toserver-groups: 1000
>>>>
>>>> (Actually, I had to hardcode this into src/detect-engine.c because the
>>>> above syntax caused Suricata to crash when starting up. I didn't dig
>>>> into it enough to figure out why.)
>>>
>>>
>>> I do not like that "hardcoding" part at all!
>>>
>>> Please note there can be a problem because of spelling and indentation.
>>> Your config part should look like this:
>>>
>>> detect-engine:
>>>   - profile: custom
>>>   - custom-values:
>>>       toclient-groups: 1000
>>>       toserver-groups: 1000
>>>
>>> not like this:
>>>
>>> detect:
>>>   profile: custom
>>>   custom-values:
>>>     toclient-groups: 1000
>>>     toserver-groups: 1000
>>>
>>>
>>>
>>> Can you please give it a try again and see if that was the problem?
>>>
>>> Thanks
>>>
>>>>
>>>> The impetus for trying this was that adding rules to 3.0rc3 caused
>>>> packet loss to jump from <1% (with around 20,000 rules) to ~25% (with
>>>> around 30,000 rules).
>>>>
>>>> My observations (using 30,000 rules):
>>>>
>>>> 1. Startup time is greatly reduced. With the above settings,
>>>> dev-detect-v173 starts up in about 2.5 minutes. 3.0rc3 took
>>>> about 5.5 minutes.
>>>>
>>>> 2. Performance is significantly improved. Packet loss dropped from ~25%
>>>> with 3.0rc3 to <1% with dev-detect-v173. I'm also able to push more
>>>> traffic through the box and maintain <1%. It's hard to quantify exactly
>>>> since this is production traffic and it spikes and dips, but I'd say 25%
>>>> more traffic would be a conservative estimate of the increase in
>>>> throughput.
>>>>
>>>> I haven't had any stability issues that I wasn't already seeing in
>>>> 3.0rc3.
>>>> To me, the new grouping code branch seems like a fundamental improvement.
>>>>
>>>> Thanks,
>>>>
>>>> --
>>>> Jason Holmes
>>>>
>>>>
>>>> On 12/8/15 12:12 PM, Victor Julien wrote:
>>>>>
>>>>>
>>>>> On 04-12-15 18:03, Cooper F. Nelson wrote:
>>>>>>
>>>>>>
>>>>>> We are running the grouping code branch as well, ~7gbit traffic
>>>>>> and sampling port 80 flows. Using groups of 1000.
>>>>>>
>>>>>> Performance so far is very good, currently running 27,568 ETPRO
>>>>>> signatures.
>>>>>
>>>>>
>>>>>
>>>>> How does it compare to your normal performance? Are you seeing
>>>>> differences in memory use, drop rate, etc?
>>>>>
>>>>> Thanks,
>>>>> Victor
>>>>>
>>>>>
>>>>>> On 12/3/2015 4:56 PM, Michal Purzynski wrote:
>>>>>>>
>>>>>>>
>>>>>>> I kind of feel responsible here and should answer this question.
>>>>>>
>>>>>>
>>>>>>
>>>>>>> The grouping code branch will make it to Suricata post 3.0. Given
>>>>>>> the new release schedule, this should be quick.
>>>>>>
>>>>>>
>>>>>>
>>>>>>> I'm testing it on production traffic, more than 20gbit across two
>>>>>>> sensors (that's the peak, but peaks are frequent, long and crazy;
>>>>>>> the average is between 3 and 6gbit/sec).
>>>>>>
>>>>>>
>>>>>>
>>>>>>> In order to stress the code I run it with even more insane
>>>>>>> settings, like this:
>>>>>>
>>>>>>
>>>>>>
>>>>>>> detect-engine:
>>>>>>>   - profile: custom
>>>>>>>   - custom-values:
>>>>>>>       toclient-src-groups: 2000
>>>>>>>       toclient-dst-groups: 2000
>>>>>>>       toclient-sp-groups: 2000
>>>>>>>       toclient-dp-groups: 3000
>>>>>>>       toserver-src-groups: 2000
>>>>>>>       toserver-dst-groups: 4000
>>>>>>>       toserver-sp-groups: 2000
>>>>>>>       toserver-dp-groups: 2500
>>>>>>>   - sgh-mpm-context: full
>>>>>>>   - inspection-recursion-limit: 3000
>>>>>>>   - rule-reload: true
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Note - do not try this at home. Or work. It kills kittens on 2.x
>>>>>>
>>>>>>
>>>>>>
>>>>>>> And it just works on the new branch that's yet to be merged :)
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Note - I have over 16500 rules now.