[Oisf-users] suricata memory leak?

Michael Stone mstone at mathom.us
Tue Feb 5 12:20:01 UTC 2019


On Fri, Feb 01, 2019 at 09:55:41AM +0100, Peter Manev wrote:
>On Wed, Jan 23, 2019 at 3:30 PM Michael Stone <mstone at mathom.us> wrote:
>> I'm ready to declare success on this. I'll keep talking about the one
>> sensor because the effect is so pronounced, but the general pattern is
>> holding true across all of them. There's a nightly SMB traffic spike,
>> during which suricata's memory consumption increases significantly.
>> (But, the memory numbers in stats.log do not reflect this!) With the
>
>The "memory" stats do not reflect it at all?

Not at all.

>What about total packet counts or other app layer counts - is that
>reflected there? (example - app_layer.flow.smb , app_layer.tx.smb )

Yes.

>> older version of rust, the memory consumption did not drop all the way
>> back again after the nightly traffic spike, and eventually the machine
>> would OOM. With rust 1.31, when the traffic spike ends the memory
>> consumption returns to its original level, and the machine keeps
>> running. So I'm going to speculate that in this case the memory is being
>> consumed in the rust SMB parser. Nothing particularly jumps out at me in
>> the rust changelog between 1.24 and 1.31 but there have been ongoing
>> improvements over time. (In fact, the notes for 1.32 include "The
>> default allocator has changed from jemalloc to the default allocator on
>> your system." so that's another presumably big change to watch for.)
>
>You have observed this with SMB related traffic only or other type too ?

Well, since nothing in the stats actually shows where the memory is 
going, SMB is just a guess based on the traffic volumes.
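For anyone wanting to do the same comparison, here's a minimal sketch (not a 
Suricata tool, just an illustration) that pulls memory- and SMB-related 
counters out of a stats.log snapshot so the two can be watched side by side 
over successive dumps. The three-column layout and the tcp.memuse counter 
name are assumptions based on typical Suricata stats.log output; 
app_layer.flow.smb / app_layer.tx.smb are the counters mentioned above.

```python
def parse_stats(lines, wanted=("memuse", "smb")):
    """Return {counter: value} for stats.log counters whose name
    contains any of the `wanted` substrings."""
    out = {}
    for line in lines:
        # Typical stats.log row: "counter_name | thread-or-Total | value"
        parts = [p.strip() for p in line.split("|")]
        if len(parts) != 3:
            continue  # skip separator lines
        name, _tm, value = parts
        if any(w in name for w in wanted):
            try:
                out[name] = int(value)
            except ValueError:
                pass  # skip the header row ("Counter | TM Name | Value")
    return out

# Hypothetical stats.log excerpt for demonstration only:
sample = """\
------------------------------------------------------------------
Counter                   | TM Name | Value
------------------------------------------------------------------
capture.kernel_packets    | Total   | 994136
tcp.memuse                | Total   | 2359296
app_layer.flow.smb        | Total   | 312
app_layer.tx.smb          | Total   | 1894
"""
counters = parse_stats(sample.splitlines())
```

Diffing the resulting dict across the nightly spike would show the app layer 
SMB counters climbing while the memuse counters stay flat, which matches 
what's described above.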

Mike Stone
