[Oisf-users] suricata memory leak?

Michael Stone mstone at mathom.us
Mon Jan 21 19:01:25 UTC 2019


On Mon, Jan 21, 2019 at 08:34:04AM +0100, Peter Manev wrote:
>On Fri, Jan 18, 2019 at 2:04 PM Michael Stone <mstone at mathom.us> wrote:
>>
>> On Fri, Jan 18, 2019 at 09:13:35AM +0100, you wrote:
>> >On Thu, Jan 17, 2019 at 3:36 PM Michael Stone <mstone at mathom.us> wrote:
>> >>
>> >> On Thu, Jan 17, 2019 at 02:15:38PM +0100, you wrote:
>> >> >On Thu, Jan 17, 2019 at 2:04 PM Michael Stone <mstone at mathom.us> wrote:
>> >> >>
>> >> >> Are other people noticing much higher memory consumption in suricata
>> >> >> 4.1? As an example, one instance of 4.1.2 I started yesterday grew from
>> >> >> 7G memory used to 21G overnight. There's nothing in the memory stats
>> >> >> output to account for the change, so I'm not sure where the memory is
>> >> >> going.
>> >> >
>> >
>> >Is it the case that it stays at 21G all the time, or did it peak at 21G
>> >and then return to a lower value?
>>
>> It typically grows unbounded unless the Linux OOM killer starts killing
>> things off. That particular system has 32G RAM so it hits the wall a lot
>> sooner, but the ones with 100+G RAM just have a longer runway to grow.
>> I've got dozens of sensors deployed, and they grow at different rates.
>> Some are much, much smaller and aren't growing much at all (but they see
>> almost no traffic). OTOH, I've got two 10gbps sensors that I restarted
>> suricata on yesterday, and one is at 15GB RAM and the other is at 27GB
>> RAM. Same initial config, so I guess it's somehow related to traffic,
>> but I don't know exactly how. The sensor I talked about initially, the
>> one that went from 7 to 21G, was restarted yesterday and today is still
>> at 7 after the same amount of runtime on the same network. The
>> inconsistency is part of why it's taken
>
>So it appears that some sensors have the issue and some don't, but the ones
>that have it display the behavior inconsistently, correct?

Most of the sensors were showing much larger memory footprints with 4.1
than with 4.0. The difference is just that when consumption doubles or
triples on a machine with more than 4x the RAM, the impact is a lot lower.
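
If anyone wants to watch this on their own sensors, something like the
following is enough (a rough sketch; the pidfile path matches the start
command quoted further down, and the log location is arbitrary):

  # sample Suricata's resident set size (kB) once a minute
  PID=$(cat /var/run/suricata.pid)
  while true; do
      echo "$(date -Is) $(ps -o rss= -p "$PID") kB" >> /var/tmp/suricata-rss.log
      sleep 60
  done

Nothing fancy, but it makes the growth curve obvious.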

>> >> >How much traffic are you inspecting and what is the output (to begin with) of
>> >> >suricata --dump-config |grep mem
>> >> >?
>> >>
>> >> Not particularly high traffic at this sensor (a branch office): about
>> >> 25Mbps daytime average and 200Mbps during overnight backups. I'm seeing
>> >> the same problem on both high- and low-volume links.
>> >>
>> >> defrag.memcap = 256mb
>> >> flow.memcap = 512mb
>> >> stream.memcap = 128mb
>> >> stream.reassembly.memcap = 128mb
>> >> host.memcap = 16777216
>> >>
>> >
>
>That sensor (config) is the one that sometimes goes up to 21G consumption?

Yes. On a positive note, I rebuilt with Rust 1.31 and pushed that out over
the weekend, and the RSS on that sensor has remained constant since then.
I'll see whether that holds going forward.
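
As an aside on the memcaps quoted above: even added together they only come
to about a gigabyte, so the subsystems they cover clearly aren't where a 21G
footprint is coming from. Back of the envelope:

  # defrag + flow + stream + stream.reassembly memcaps in MB, plus
  # host.memcap = 16777216 bytes = 16 MB
  echo $((256 + 512 + 128 + 128 + 16))   # -> 1040, i.e. roughly 1 GB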

>> >How do you start Suricata and how do you reload rules?
>>
>> /usr/bin/suricata -D --af-packet -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid
>>
>> suricatasc -c reload-rules
>>
>> It's not the case that the memory consumption jumps immediately after a
>> reload.
>
>
>So the sensors that exhibit this behavior have the ruleset reloaded,
>not Suricata restarted - right?

True for all of them.
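
For anyone else chasing this, a simple before/after check around a reload is
how I'd rule the reload in or out (a rough sketch; pidfile path as in the
start command above, settle time arbitrary):

  # compare RSS immediately before and a couple of minutes after a reload
  PID=$(cat /var/run/suricata.pid)
  echo "before reload: $(ps -o rss= -p "$PID") kB"
  suricatasc -c reload-rules
  sleep 120    # arbitrary settle time after the rule swap
  echo "after reload:  $(ps -o rss= -p "$PID") kB"

That matches what I said above: the consumption doesn't jump right after a
reload here.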

