[Oisf-devel] tcp.ssn_memcap_drop
Martin Holste
mcholste at gmail.com
Tue Sep 20 18:06:28 UTC 2011
Yes, I was running on the git HEAD as of the 18th. I'll give your rev a shot.
...
Much better performance! Seeing only about a 25-33% drop rate on
peak traffic (2-3k sessions/sec). Not perfect,
but far better than before. Get that code merged!
However, the segfault I've seen with all recent Suricatas is still present:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffedc02700 (LWP 29336)]
*__GI___libc_free (mem=0x4000000087394a0) at malloc.c:3709
3709    malloc.c: No such file or directory.
        in malloc.c
(gdb) bt
#0  *__GI___libc_free (mem=0x4000000087394a0) at malloc.c:3709
#1  0x00007ffff7bd786e in htp_tx_destroy (tx=0x1aac82c0) at htp_transaction.c:115
#2  0x00007ffff7bd4eb2 in htp_conn_destroy (conn=0x19894020) at htp_connection.c:65
#3  0x00007ffff7bd00f2 in htp_connp_destroy_all (connp=0x19893ec0) at htp_connection_parser.c:197
#4  0x000000000062647a in HTPStateFree (state=<value optimized out>) at app-layer-htp.c:210
#5  0x0000000000619ffe in AppLayerParserCleanupState (f=0x7fff87bed5c0) at app-layer-parser.c:1240
#6  0x0000000000437e45 in FlowL7DataPtrFree (f=0x4000000087394a0) at flow.c:119
#7  0x0000000000437eb2 in FlowClearMemory (f=0x7fff87bed5c0, proto_map=0 '\000') at flow.c:1406
#8  0x0000000000438093 in FlowPrune (q=0x94b190, ts=0x7fffedc01630) at flow.c:336
#9  0x0000000000439d17 in FlowPruneFlowQueue (td=<value optimized out>) at flow.c:355
#10 FlowManagerThread (td=<value optimized out>) at flow.c:1060
#11 0x00007ffff71399ca in start_thread (arg=<value optimized out>) at pthread_create.c:300
#12 0x00007ffff6a4270d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#13 0x0000000000000000 in ?? ()
On Tue, Sep 20, 2011 at 12:52 AM, Anoop Saldanha <poonaatsoc at gmail.com> wrote:
> Are you running the latest master?
>
> Can you reset your git HEAD to this commit and check how the engine
> behaves with the same ruleset?
>
> cc4e89fbe1477d47e50fd720127e7c28d0d512ba
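>
> I.e. something like this (assuming no local changes you need to
> keep; otherwise stash them first), followed by a rebuild:
>
>   git fetch origin
>   git reset --hard cc4e89fbe1477d47e50fd720127e7c28d0d512ba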
>
> On Mon, Sep 19, 2011 at 10:43 PM, Martin Holste <mcholste at gmail.com> wrote:
>> Now I've reduced the ruleset to a single rule (my heartbeat sig), and
>> it's still missing almost all the time. It appears that there's a
>> major bottleneck in the flow distributor somewhere if the system can't
>> grep for a single string on just 600 Mb/sec. Anyone else running
>> heartbeat sigs and seeing the same thing?
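>>
>> (For reference, the heartbeat sig is nothing fancy: just a rule
>> matching a string that appears in our traffic at a known, steady
>> rate. A hypothetical equivalent:
>>
>>   alert tcp any any -> any any (msg:"heartbeat"; content:"HEARTBEAT"; sid:9000001; rev:1;)
>>
>> If it stops firing while the drop counters stay at zero, the flows
>> were never inspected.)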
>>
>> On Mon, Sep 19, 2011 at 11:06 AM, Martin Holste <mcholste at gmail.com> wrote:
>>> Ok, I'm giving that a shot, but so far that doesn't seem to have
>>> improved things. Right now, it looks like the system is missing a ton
>>> of heartbeats, so it's definitely not detecting everything even though
>>> all the drop counters are zero. I'm running just 3k signatures on
>>> about 600 Mb/sec of HTTP on an 8-CPU/16 GB system.
>>>
>>> On Mon, Sep 19, 2011 at 10:48 AM, Victor Julien <victor at inliniac.net> wrote:
>>>> On 09/19/2011 05:43 PM, Martin Holste wrote:
>>>>> I've got memcap at 4GB and max_sessions is 256k by default. I'm
>>>>
>>>> You may want to try setting it a bit lower than the 4GB max, like 3.5GB
>>>> or so. I think I've seen at least one occasion where it didn't behave
>>>> properly with the max setting. Something we still need to look into.
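>>>>
>>>> E.g., where you currently have the 32-bit max (the exact value is
>>>> just a sketch; anything comfortably below 4294967295 should do):
>>>>
>>>>   flow:
>>>>     memcap: 3758096384   # 3.5GB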
>>>>
>>>> Cheers,
>>>> Victor
>>>>
>>>>> having better luck now with more drastic emergency flow pruning:
>>>>>
>>>>> flow:
>>>>>   #memcap: 33554432
>>>>>   memcap: 4294967295
>>>>>   #hash_size: 65536
>>>>>   hash_size: 268435456
>>>>>   prealloc: 10000
>>>>>   emergency_recovery: 40  #30
>>>>>   prune_flows: 500        #5
>>>>>
>>>>> flow-timeouts:
>>>>>   default:
>>>>>     new: 1                    #30
>>>>>     established: 10           #300
>>>>>     closed: 0
>>>>>     emergency_new: 1          #10
>>>>>     emergency_established: 1  #100
>>>>>     emergency_closed: 0
>>>>>   tcp:
>>>>>     new: 1                    #60
>>>>>     established: 10           #3600
>>>>>     closed: 120
>>>>>     emergency_new: 1          #10
>>>>>     emergency_established: 1  #300
>>>>>     emergency_closed: 20
>>>>>   udp:
>>>>>     new: 1                    #30
>>>>>     established: 1            #300
>>>>>     emergency_new: 1          #10
>>>>>     emergency_established: 1  #100
>>>>>   icmp:
>>>>>     new: 1                    #30
>>>>>     established: 1            #300
>>>>>     emergency_new: 1          #10
>>>>>     emergency_established: 1  #100
>>>>>
>>>>> I'm not yet sure how this will affect detection, but prior to this,
>>>>> most new flows were being discarded. This policy should favor new
>>>>> flows at the expense of old flows, which is desirable for malware
>>>>> detection.
>>>>>
>>>>> On Mon, Sep 19, 2011 at 10:30 AM, Anoop Saldanha <poonaatsoc at gmail.com> wrote:
>>>>>> It's the stream memcap, which defaults to:
>>>>>>
>>>>>>   stream:
>>>>>>     memcap: 33554432  # 32mb
>>>>>>
>>>>>> At the same time, you might also want to set max_sessions to something
>>>>>> bigger. We default to 256k. You can try a bigger number and see how
>>>>>> that works out.
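>>>>>>
>>>>>> E.g. (the value is just an illustration):
>>>>>>
>>>>>>   stream:
>>>>>>     max_sessions: 1048576  # 1M instead of the 256k default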
>>>>>>
>>>>>> On Mon, Sep 19, 2011 at 8:07 PM, Martin Holste <mcholste at gmail.com> wrote:
>>>>>>> I'm seeing a ton of tcp.ssn_memcap_drop in my stats.log. Which memcap
>>>>>>> do I need to tweak to decrease these drops? I've already set them all
>>>>>>> to 4GB.
>>>>>>
>>>>>> --
>>>>>> Anoop Saldanha
>>>>>>
>>>>
>>>> --
>>>> ---------------------------------------------
>>>> Victor Julien
>>>> http://www.inliniac.net/
>>>> PGP: http://www.inliniac.net/victorjulien.asc
>>>> ---------------------------------------------
>>>>
>>>
>
> --
> Anoop Saldanha
>