[Oisf-users] memcap_drop in stats.log

Gene Albin gene.albin at gmail.com
Tue Aug 9 20:44:45 UTC 2011


Rmkml,
  The original suricata.yaml file (original memcap settings) works fine.  In
fact, I was able to double them (64MB, etc.) and run for several hours
without crashing.  Watching memory during these runs, utilization appeared
to increase throughout the entire run.  That leads me to believe I'm pushing
the 4GB limit imposed by the 32-bit OS.  Since I don't have the time left to
restart with a 64-bit OS, I think I'll just get by with lower memcap values
for now.
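
For reference, the knobs in question live in suricata.yaml roughly like
this; a minimal sketch using the default values discussed in this thread
(byte values, and the exact layout may differ by version):

  flow:
    memcap: 33554432          # 32MB default
  stream:
    memcap: 33554432          # 32MB default
    reassembly:
      memcap: 67108864        # 64MB default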

  I haven't tried disabling all sigs, but I'd imagine that would help too,
since Suricata loads them all into memory.  I might try that if I get some
more time.  Thanks for the suggestions.
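
If you do test with all sigs disabled, the quick way is to comment out
every entry under rule-files in suricata.yaml; a minimal sketch (the rule
path and file names here are illustrative, not from my config):

  default-rule-path: /etc/suricata/rules
  rule-files:
  #  - emerging-all.rules
  #  - local.rules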

Gene

On Mon, Aug 1, 2011 at 11:29 PM, rmkml <rmkml at yahoo.fr> wrote:

> Hi Gene,
> Sorry that didn't help, but I'm curious whether you see the same problem
> with:
> - reverting to the original suricata.yaml conf (principally reverting
> stream/memcap and stream/reassembly/memcap to their default values)
> - commenting out/disabling all sigs
> Regards
> Rmkml
>
>
>
> On Mon, 1 Aug 2011, Gene Albin wrote:
>
>> Interesting problem just occurred while trying to determine the max memory
>> utilization.
>> I received a segmentation fault after ~9 minutes of runtime on live
>> traffic.  Since I was watching the memory utilization I can report that it
>> used about 1GB during startup, and once it started reading packets the
>> memory utilization continued to increase by about 1.5MB/sec until it
>> reached ~2GB, then a segmentation fault.  I thought it might have been a
>> fluke, so I ran Suricata again with the same results: seg fault at ~2GB of
>> memory utilization.  According to top, the Suricata process at its peak
>> was using about 11% of the system memory.  I have 16GB allocated to the VM
>> with 4 processors.  CPU utilization was humming along at about 200% (50%
>> on each core).
>>
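>> (Back-of-the-envelope: ~1.5MB/sec over roughly 9 minutes is about
>> 1.5 x 540 = 810MB of growth on top of the ~1GB startup footprint, which
>> lands right at the ~2GB where the process died; 11% of 16GB is about
>> 1.8GB, so top's figure agrees.)
>>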
>> For comparison I ran Suricata against a 6GB pcap file I had on hand and
>> watched the memory increase by about 1GB as well, with top reporting 11.5%
>> memory utilization; however, on the pcap file it did NOT seg fault.
>>
>> Also of note, at around 8 and a half minutes into the live run (not the
>> pcap one) my segment_memcap_drop counter started to increase by about
>> 200-300 packets every 8 minutes.
>>
>> Any suggestions on what may have caused the segmentation fault?  Attached
>> is the suricata.yaml file I used during this run.
>>
>> Thanks,
>> Gene
>>
>> On Mon, Aug 1, 2011 at 2:10 PM, Fernando Ortiz <
>> fernando.ortiz.f at gmail.com> wrote:
>>      I once asked something similar:
>> http://lists.openinfosecfoundation.org/pipermail/oisf-users/2011-June/000658.html
>> Just out of curiosity, what is your maximum RAM consumption while running
>> Suricata?
>>
>>
>> 2011/8/1 Gene Albin <gene.albin at gmail.com>
>> So it looks like increasing the stream and flow memcap variables to 1 and
>> 2 GB fixed the segment_memcap_drop numbers:
>> tcp.sessions              | Decode & Stream           | 62179
>> tcp.ssn_memcap_drop       | Decode & Stream           | 0
>> tcp.pseudo                | Decode & Stream           | 10873
>> tcp.segment_memcap_drop   | Decode & Stream           | 0
>> tcp.stream_depth_reached  | Decode & Stream           | 347
>> detect.alert              | Detect                    | 715
>>
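>> For reference, those changes amount to something like this in
>> suricata.yaml; a sketch using byte values (exact layout may differ by
>> version):
>>
>>   flow:
>>     memcap: 1073741824      # 1GB, up from the 32MB default
>>   stream:
>>     memcap: 1073741824      # 1GB, up from the 32MB default
>>     reassembly:
>>       memcap: 2147483648    # 2GB, up from the 64MB default
>>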
>> But according to ReceivePcapThreadExitStats I'm still losing about 25% of
>> my packets.  Any ideas why that may be?  Below is a cut from the
>> suricata.log file showing the packet drops after I increased the memcap
>> values.
>>
>> Increased Flow memcap from 32MB to 1GB
>> No change:
>>
>> [11736] 1/8/2011 -- 13:27:07 - (source-pcap.c:561) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Packets 1784959, bytes
>> 1318154313
>> [11736] 1/8/2011 -- 13:27:07 - (source-pcap.c:569) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Pcap Total:3865595
>> Recv:2825319 Drop:1040276 (26.9%).
>>
>> Increased Stream memcap from 32MB to 1GB
>> Increased Stream reassembly memcap from 64MB to 2GB
>> No change:
>>
>> [11955] 1/8/2011 -- 13:34:38 - (source-pcap.c:561) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Packets 2906643, bytes
>> 1977212962
>> [11955] 1/8/2011 -- 13:34:38 - (source-pcap.c:569) <Info>
>> (ReceivePcapThreadExitStats) -- (ReceivePcap) Pcap Total:5634300
>> Recv:4270513 Drop:1363787 (24.2%).
>>
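>> Worth noting: the Drop count in ReceivePcapThreadExitStats comes from the
>> capture layer (libpcap asking the kernel), not from the stream engine, so
>> the memcap settings wouldn't be expected to move it.  Later Suricata
>> releases expose a per-interface kernel buffer in the pcap section, along
>> these lines (the buffer-size option is an assumption for a 2011-era
>> build; check the suricata.yaml shipped with your version):
>>
>>   pcap:
>>     - interface: eth0        # illustrative interface name
>>       buffer-size: 16777216  # bytes; a larger buffer absorbs traffic bursts
>>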
>> Gene
>>
>>
>> On Fri, Jul 29, 2011 at 8:17 PM, Gene Albin <gene.albin at gmail.com> wrote:
>>      What causes the tcp.segment_memcap_drop and tcp.ssn_memcap_drop
>> counters to increment in the stats.log file?  I haven't found much of a
>> description, or suggestions on what I can do to reduce these numbers.
>>      Here is a cut from my stats.log file:
>>
>>      tcp.sessions              | Decode & Stream           | 569818
>>      tcp.ssn_memcap_drop       | Decode & Stream           | 0
>>      tcp.pseudo                | Decode & Stream           | 94588
>>      tcp.segment_memcap_drop   | Decode & Stream           | 11204200
>>      tcp.stream_depth_reached  | Decode & Stream           | 14
>>      detect.alert              | Detect                    | 13239
>>
>>      Thanks for any suggestions.
>>
>>      Gene
>>
>>      --
>>      Gene Albin
>>      gene.albin at gmail.com
>>
>>
>>
>>
>> --
>> Gene Albin
>> gene.albin at gmail.com
>>
>>
>>
>> --
>> Gene Albin
>> gene.albin at gmail.com
>>
>>
>>


-- 
Gene Albin
gene.albin at gmail.com