[Oisf-users] Suricata v2.1beta2 with geoip and high ram consumption
Jay M.
jskier at gmail.com
Mon Jan 5 15:13:42 UTC 2015
Whoops, not sure why I had NFQueue compiled in; I disabled that as well.
With the two changes, I'm still at 6-8 GB of allocated RAM right out of
the gate. I turned the updates back to every two hours and set the timer
unit to reload instead of restart, to see if I can reproduce the problem
some more.
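
For context, the timer/service pair driving the updates looks roughly
like this (unit names and paths are from my setup; the update script
just fetches the ETPro ruleset):

  # update-rules.timer
  [Timer]
  OnCalendar=0/2:00:00

  # update-rules.service
  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/update-rules.sh
  # reload instead of restart:
  ExecStartPost=/usr/bin/systemctl reload suricata.service

  # suricata.service - "reload" maps to Suricata's live rule swap signal
  [Service]
  ExecReload=/usr/bin/kill -USR2 $MAINPID
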
--
Jay
jskier at gmail.com
On Mon, Jan 5, 2015 at 7:50 AM, Jay M. <jskier at gmail.com> wrote:
> On Sun, Jan 4, 2015 at 4:10 AM, Peter Manev <petermanev at gmail.com> wrote:
>>
>> On Fri, Jan 2, 2015 at 2:48 PM, Jay M. <jskier at gmail.com> wrote:
>> > On Thu, Jan 1, 2015 at 10:15 AM, Peter Manev <petermanev at gmail.com>
>> > wrote:
>> >> On Wed, Dec 31, 2014 at 4:13 PM, Jay M. <jskier at gmail.com> wrote:
>> >>> I've been playing around a little with a geoip rule and noticed that
>> >>> when that single rule is enabled, RAM is gobbled up quickly (within
>> >>> about an hour) and Suricata eats into swap, even with 16 GB of RAM.
>> >>>
>> >>
>> >> What is the sum total of all your mem settings in suricata.yaml?
>> >
>> > About 16.3 GB, if the host memcap is in kilobytes. Everything else is
>> > commented out / default. I am hashing all files and do store some,
>> > usually a handful a day.
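>> >
>> > For what it's worth, the file handling bits of my suricata.yaml look
>> > roughly like this (trimmed, from memory):
>> >
>> >   - file-store:
>> >       enabled: yes
>> >       log-dir: files
>> >       force-md5: yes   # hash every file, not only the stored ones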
>> >
>>
>> Ok - so you are using the default yaml, correct? You have not changed
>> anything else except maybe the HOME_NET values?
>> (Just so that I can get a better idea of the setup.)
>
>
> Mostly default: I upped the memcaps a little to enable hashing and file
> store, am outputting everything to eve.log, and have rule alert debugging
> and stats turned on. I'm also running Suricata as its own user with a
> specific PID file; perhaps this could impact memory management somehow?
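>
> Concretely, the daemon gets started along these lines (flags are standard
> Suricata options; the interface name matches the stats further down):
>
>   suricata -D -c /etc/suricata/suricata.yaml --pcap=rspan01 \
>       --user suricata --group suricata --pidfile /run/suricata.pid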
>
>>
>>
>> > defrag memcap: 32mb
>> > flow memcap: 64mb
>> > stream memcap: 64mb
>> > stream reassembly memcap: 128mb
>> > host memcap: 16777216 (16 GB?)
>>
>> The value is in bytes if not otherwise specified - so 16777216 is 16mb,
>> not 16 GB.
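>>
>> To keep the units unambiguous you can spell them out in suricata.yaml,
>> for example:
>>
>>   host:
>>     memcap: 16mb   # a bare number such as 16777216 is read as bytes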
>>
>> >
>> > I have mitigated the eating-into-swap problem for now by changing my
>> > rule update script to run every 6 hours and restart the daemon rather
>> > than reloading it (see the other caveat below). I read in the wiki
>> > that rule reloading is still in a delicate state, so this makes
>> > sense.
>> >
>> >>
>> >>> So I've added more RAM to the VM, from 16 to 24 GB; I'll see what
>> >>> that does (up to 15 GB allocated, 40 minutes after starting).
>> >>>
>> >>> It does not appear to be dropping packets, and the rule is working,
>> >>> as is the ETPro set. I'm wondering if others using geoip rules are
>> >>> also seeing this behavior? I'm not ready to call it a memory leak
>> >>> just yet...
>> >>
>>
>> You are loading a full ETPro ruleset, correct?
>
>
> Correct, full ETPro ruleset.
>
>>
>>
>> >> What amount of traffic are you inspecting?
>> >> Is this reproducible only (and every time) when you enable geoip?
>> >
>> > I am inspecting a 100-megabit pipe using RSPAN, and am monitoring only.
>> > On my virtual host box in VMware 11, I pass through a poor man's
>> > receiver, so to speak: a 1-gig USB3 dongle. Not the most ideal setup,
>> > I know, but it actually works fairly well and should hold me over
>> > until ERSPAN support gets implemented in Suricata.
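>> >
>> > The capture side of the yaml is just the pcap runmode pointed at the
>> > dongle, something like:
>> >
>> >   pcap:
>> >     - interface: rspan01
>> >       checksum-checks: no   # SPAN/RSPAN copies often carry bad checksums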
>> >
>>
>> Is that 100Mb/s or 100MB/s?
>
>
> Megabits per second.
>>
>>
>> > RAM consumption is quickly reproducible with the one geoip rule
>> > (basically: if not US, alert), although there is another gotcha I'm
>> > looking into. I noticed that my script to reload the rules every four
>> > hours by invoking the kill command (as noted in the wiki) via a
>> > systemd unit also eats up a lot of RAM (usually 3-4 GB chunks per
>> > reload),
>>
>> Live rule reload needs twice the memory: during the reload procedure
>> the old and the new rulesets are both loaded at the same time.
>
>
> Good to know. But should it keep growing incrementally with each reload?
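>
> For what it's worth, a quick loop makes the trend easy to see; if the old
> ruleset is freed, the resident set size should settle back down after
> each reload instead of stair-stepping upward:
>
>   while true; do
>       ps -o rss= -p "$(pidof suricata)"
>       sleep 60
>   done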
>>
>>
>> > albeit noticeably less volume over time than with the geoip rule. I
>> > noticed that over a weekend, before the geoip rule was deployed, this
>> > basically killed Suricata because it ate up all the RAM and swap, when
>> > I was at 16 GB RAM / 8 GB swap.
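>> >
>> > For reference, the rule in question is essentially the following (sid
>> > is arbitrary, and the negation syntax is from memory - worth
>> > double-checking against the geoip keyword docs):
>> >
>> >   alert ip any any -> $HOME_NET any (msg:"Traffic from non-US source"; geoip:src,!US; sid:9000001; rev:1;)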
>>
>> Can you please share the output of:
>> suricata --build-info?
>
>
> This is at the bottom, second to last. Note this is after recompiling with
> your next suggestion.
>
>>
>> Since it is a virtual machine, you might want to try adding
>> "--disable-gccmarch-native" to the configure line when compiling
>> Suricata.
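>>
>> For example (configure flags matching the paths in your build info;
>> --enable-geoip pulls in the libgeoip support):
>>
>>   ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var \
>>       --enable-geoip --disable-gccmarch-native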
>
>
> Done.
>
>>
>> What are the last stats in stats.log when it goes into swap?
>
>
> You may find this at the very bottom.
>
>>
>>
>> Thanks
>>
>> >
>> >>>
>> >>> Additionally, running 64-bit, ArchLinux 3.17.6 kernel.
>> >>>
>> >>> --
>> >>> Jay
>> >>> jskier at gmail.com
>> >>> _______________________________________________
>> >>> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
>> >>> Site: http://suricata-ids.org | Support:
>> >>> http://suricata-ids.org/support/
>> >>> List:
>> >>> https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>> >>> Training now available: http://suricata-ids.org/training/
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Peter Manev
>> >
>> > --
>> > Jay
>> > jskier at gmail.com
>>
>>
>>
>> --
>> Regards,
>> Peter Manev
>
>
> *****************************************************************************
> Build info:
> This is Suricata version 2.1beta2 RELEASE
> Features: NFQ PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 AF_PACKET
> HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT
> HAVE_NSS HAVE_LIBJANSSON
> SIMD support: none
> Atomic intrisics: 1 2 4 8 byte(s)
> 64-bits, Little-endian architecture
> GCC version 4.9.2 20141224 (prerelease), C version 199901
> compiled with _FORTIFY_SOURCE=2
> L1 cache line size (CLS)=64
> compiled with LibHTP v0.5.15, linked against LibHTP v0.5.15
> Suricata Configuration:
> AF_PACKET support: yes
> PF_RING support: no
> NFQueue support: yes
> NFLOG support: no
> IPFW support: no
> DAG enabled: no
> Napatech enabled: no
> Unix socket enabled: yes
> Detection enabled: yes
>
> libnss support: yes
> libnspr support: yes
> libjansson support: yes
> Prelude support: no
> PCRE jit: yes
> LUA support: no
> libluajit: no
> libgeoip: yes
> Non-bundled htp: no
> Old barnyard2 support: no
> CUDA enabled: no
>
> Suricatasc install: no
>
> Unit tests enabled: no
> Debug output enabled: no
> Debug validation enabled: no
> Profiling enabled: no
> Profiling locks enabled: no
> Coccinelle / spatch: no
>
> Generic build parameters:
> Installation prefix (--prefix): /usr
> Configuration directory (--sysconfdir): /etc/suricata/
> Log directory (--localstatedir) : /var/log/suricata/
>
> Host: x86_64-unknown-linux-gnu
> GCC binary: gcc
> GCC Protect enabled: no
> GCC march native enabled: no
> GCC Profile enabled: no
>
> *****************************************************************************
> stats.log
>
> -------------------------------------------------------------------
> Date: 12/29/2014 -- 08:47:16 (uptime: 5d, 22h 11m 16s)
> -------------------------------------------------------------------
> Counter | TM Name | Value
> -------------------------------------------------------------------
> capture.kernel_packets | RxPcaprspan01 | 189319344
> capture.kernel_drops | RxPcaprspan01 | 34155
> capture.kernel_ifdrops | RxPcaprspan01 | 0
> dns.memuse | RxPcaprspan01 | 238516
> dns.memcap_state | RxPcaprspan01 | 0
> dns.memcap_global | RxPcaprspan01 | 0
> decoder.pkts | RxPcaprspan01 | 189284875
> decoder.bytes | RxPcaprspan01 | 67868253003
> decoder.invalid | RxPcaprspan01 | 8
> decoder.ipv4 | RxPcaprspan01 | 189290229
> decoder.ipv6 | RxPcaprspan01 | 2988
> decoder.ethernet | RxPcaprspan01 | 189284875
> decoder.raw | RxPcaprspan01 | 0
> decoder.sll | RxPcaprspan01 | 0
> decoder.tcp | RxPcaprspan01 | 57549996
> decoder.udp | RxPcaprspan01 | 124080607
> decoder.sctp | RxPcaprspan01 | 0
> decoder.icmpv4 | RxPcaprspan01 | 153021
> decoder.icmpv6 | RxPcaprspan01 | 36
> decoder.ppp | RxPcaprspan01 | 0
> decoder.pppoe | RxPcaprspan01 | 0
> decoder.gre | RxPcaprspan01 | 0
> decoder.vlan | RxPcaprspan01 | 0
> decoder.vlan_qinq | RxPcaprspan01 | 0
> decoder.teredo | RxPcaprspan01 | 832
> decoder.ipv4_in_ipv6 | RxPcaprspan01 | 0
> decoder.ipv6_in_ipv6 | RxPcaprspan01 | 0
> decoder.mpls | RxPcaprspan01 | 0
> decoder.avg_pkt_size | RxPcaprspan01 | 358
> decoder.max_pkt_size | RxPcaprspan01 | 1516
> defrag.ipv4.fragments | RxPcaprspan01 | 21739
> defrag.ipv4.reassembled | RxPcaprspan01 | 10857
> defrag.ipv4.timeouts | RxPcaprspan01 | 0
> defrag.ipv6.fragments | RxPcaprspan01 | 0
> defrag.ipv6.reassembled | RxPcaprspan01 | 0
> defrag.ipv6.timeouts | RxPcaprspan01 | 0
> defrag.max_frag_hits | RxPcaprspan01 | 0
> tcp.sessions | Detect | 544723
> tcp.ssn_memcap_drop | Detect | 0
> tcp.pseudo | Detect | 192120
> tcp.pseudo_failed | Detect | 0
> tcp.invalid_checksum | Detect | 0
> tcp.no_flow | Detect | 0
> tcp.reused_ssn | Detect | 124
> tcp.memuse | Detect | 379008
> tcp.syn | Detect | 566080
> tcp.synack | Detect | 510273
> tcp.rst | Detect | 210377
> dns.memuse | Detect | 303480
> dns.memcap_state | Detect | 0
> dns.memcap_global | Detect | 0
> tcp.segment_memcap_drop | Detect | 0
> tcp.stream_depth_reached | Detect | 0
> tcp.reassembly_memuse | Detect | 74263464
> tcp.reassembly_gap | Detect | 104
> http.memuse | Detect | 548522868
> http.memcap | Detect | 0
> detect.alert | Detect | 11032
> flow_mgr.closed_pruned | FlowManagerThread | 503125
> flow_mgr.new_pruned | FlowManagerThread | 53352
> flow_mgr.est_pruned | FlowManagerThread | 336649
> flow.memuse | FlowManagerThread | 12900272
> flow.spare | FlowManagerThread | 10000
> flow.emerg_mode_entered | FlowManagerThread | 0
> flow.emerg_mode_over | FlowManagerThread | 0
>
>
> --
> Jay
> jskier at gmail.com