[Oisf-users] memcap_drop in stats.log

Gene Albin gene.albin at gmail.com
Mon Aug 1 22:29:29 UTC 2011


An interesting problem just occurred while I was trying to determine the max
memory utilization.

I received a segmentation fault after ~9 minutes of runtime on live traffic.
Since I was watching the memory utilization, I can report that Suricata used
about 1GB during startup; once it started reading packets, memory
utilization continued to increase by about 1.5MB/sec until it reached ~2GB,
then the process segfaulted.  I thought it might have been a fluke, so I ran
Suricata again with the same result: a segfault at ~2GB of memory
utilization.  According to top, the Suricata process at its peak was using
about 11% of the system memory.  The VM has 16GB of RAM and 4 processors,
and CPU utilization was humming along at about 200% (50% on each core).

For comparison, I ran Suricata against a 6GB pcap file I had on hand and
watched memory increase by about 1GB as well, with top reporting 11.5%
memory utilization; however, on the pcap file it did NOT segfault.

Also of note: at around 8 and a half minutes into the live run (not the pcap
run), my segment_memcap_drop counter started to increase by about 200-300
packets every 8 minutes.
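
As far as I can tell, that counter is tied to the stream reassembly memcap.
The relevant section of suricata.yaml looks roughly like this (a sketch only,
using the default values mentioned further down in this thread; exact keys
and defaults may differ between Suricata versions):

  stream:
    memcap: 33554432              # stream engine memcap, 32MB
    reassembly:
      memcap: 67108864            # reassembly memcap, 64MB
      depth: 1048576              # bytes of each stream to reassemble
                                  # (the tcp.stream_depth_reached counter
                                  # relates to this limit)

When the reassembly memcap is exhausted, new segments can no longer be
stored and tcp.segment_memcap_drop increments.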

Any suggestions on what may have caused the segmentation fault?  Attached is
the suricata.yaml file I used during this run.

Thanks,
Gene

On Mon, Aug 1, 2011 at 2:10 PM, Fernando Ortiz
<fernando.ortiz.f at gmail.com> wrote:

> I once asked something similar:
>
> http://lists.openinfosecfoundation.org/pipermail/oisf-users/2011-June/000658.html
>
> Just out of curiosity: what is your maximum RAM consumption while running
> Suricata?
>
>
> 2011/8/1 Gene Albin <gene.albin at gmail.com>
>
>> So it looks like increasing the stream and flow memcap values to 1GB and
>> 2GB has fixed the segment_memcap_drop numbers:
>>
>> tcp.sessions              | Decode & Stream           | 62179
>> tcp.ssn_memcap_drop       | Decode & Stream           | 0
>> tcp.pseudo                | Decode & Stream           | 10873
>> tcp.segment_memcap_drop   | Decode & Stream           | 0
>> tcp.stream_depth_reached  | Decode & Stream           | 347
>> detect.alert              | Detect                    | 715
>>
>> But according to the ReceivePcapThreadExitStats output, I'm still losing
>> about 20% of my packets.  Any ideas on why this may be?  Below is an
>> excerpt from the suricata.log file showing the packet drops after I
>> increased the memcap values (the corresponding suricata.yaml changes are
>> sketched after the log excerpts).
>>
>> Increased Flow memcap from 32MB to 1GB
>> No change:
>>
>> [11736] 1/8/2011 -- 13:27:07 - (source-pcap.c:561) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Packets 1784959, bytes
>> 1318154313
>> [11736] 1/8/2011 -- 13:27:07 - (source-pcap.c:569) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Pcap Total:3865595
>> Recv:2825319 Drop:1040276 (26.9%).
>>
>> Increased Stream memcap from 32MB to 1GB
>> Increased Stream reassembly memcap from 64MB to 2GB
>> No change:
>>
>> [11955] 1/8/2011 -- 13:34:38 - (source-pcap.c:561) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Packets 2906643, bytes
>> 1977212962
>> [11955] 1/8/2011 -- 13:34:38 - (source-pcap.c:569) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Pcap Total:5634300
>> Recv:4270513 Drop:1363787 (24.2%).
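>>
>> For reference, the three changes above amount to roughly the following in
>> suricata.yaml (a sketch only; values are shown in bytes, and the exact
>> syntax may vary between Suricata versions):
>>
>>   flow:
>>     memcap: 1073741824          # was 33554432 (32MB), now 1GB
>>
>>   stream:
>>     memcap: 1073741824          # was 33554432 (32MB), now 1GB
>>     reassembly:
>>       memcap: 2147483648        # was 67108864 (64MB), now 2GB
>>
>> As far as I understand it, the Drop figure in ReceivePcapThreadExitStats
>> is reported by the pcap capture layer, so those packets are being lost
>> before the stream engine's memcaps ever come into play.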
>>
>> Gene
>>
>>
>> On Fri, Jul 29, 2011 at 8:17 PM, Gene Albin <gene.albin at gmail.com> wrote:
>>
>>> What causes the tcp.segment_memcap_drop and tcp.ssn_memcap_drop
>>> counters to increment in the stats.log file?  I haven't found much of a
>>> description of them, or suggestions on what I can do to reduce the
>>> numbers.  Here is an excerpt from my stats.log file:
>>>
>>> tcp.sessions              | Decode & Stream           | 569818
>>> tcp.ssn_memcap_drop       | Decode & Stream           | 0
>>> tcp.pseudo                | Decode & Stream           | 94588
>>> tcp.segment_memcap_drop   | Decode & Stream           | 11204200
>>> tcp.stream_depth_reached  | Decode & Stream           | 14
>>> detect.alert              | Detect                    | 13239
>>>
>>> Thanks for any suggestions.
>>>
>>> Gene
>>>
>>> --
>>> Gene Albin
>>> gene.albin at gmail.com
>>>
>>>
>>
>>
>> --
>> Gene Albin
>> gene.albin at gmail.com
>>
>>
>> _______________________________________________
>> Oisf-users mailing list
>> Oisf-users at openinfosecfoundation.org
>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>
>>
>
>
>


-- 
Gene Albin
gene.albin at gmail.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: suricata.yaml
Type: application/x-yaml
Size: 28177 bytes
Desc: not available
URL: <http://lists.openinfosecfoundation.org/pipermail/oisf-users/attachments/20110801/bd6bb606/attachment.bin>

