<span style>Peter,</span><div style>It looks to be rev 9f7588a (it was the latest git at the time, about a week ago?)</div><div style><br></div><div style>Output from suricata --build-info:</div><div style><div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:503) <Info> (SCPrintBuildInfo) -- This is Suricata version 1.3dev (rev 9f7588a)</div>

<div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:576) <Info> (SCPrintBuildInfo) -- Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_HTP_TX_GET_RESPONSE_HEADERS_RAW PCRE_JIT HAVE_NSS</div>

<div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:590) <Info> (SCPrintBuildInfo) -- 64-bits, Little-endian architecture</div><div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:592) <Info> (SCPrintBuildInfo) -- GCC version 4.5.2, C version 199901</div>

<div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:598) <Info> (SCPrintBuildInfo) -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_1</div><div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:601) <Info> (SCPrintBuildInfo) -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_2</div>

<div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:604) <Info> (SCPrintBuildInfo) -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4</div><div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:607) <Info> (SCPrintBuildInfo) -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8</div>

<div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:610) <Info> (SCPrintBuildInfo) -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_16</div><div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:614) <Info> (SCPrintBuildInfo) -- compiled with -fstack-protector</div>

<div>[12882] 12/6/2012 -- 14:23:26 - (suricata.c:620) <Info> (SCPrintBuildInfo) -- compiled with _FORTIFY_SOURCE=2</div><div><br></div><div><br></div><div>Victor:</div><div>Honestly, it's hard to say. I'll try to correlate the drops with the lower-than-expected log counts.</div>
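As a quick way to eyeball that correlation, here is a small sketch that pulls the _drop counters out of stats.log-style text so they can be compared against the file-log volume. It only assumes the `counter | thread | value` column layout shown in the stats.log output quoted below; the sample values are taken from this thread.

```python
import re

def drop_counters(stats_text):
    """Collect counters whose name ends in _drop from stats.log-style lines."""
    counters = {}
    for line in stats_text.splitlines():
        # Expected layout: "tcp.ssn_memcap_drop       | Detect    | 6019"
        m = re.match(r"\s*(\S+_drop)\s*\|\s*\S+\s*\|\s*(\d+)", line)
        if m:
            counters[m.group(1)] = int(m.group(2))
    return counters

sample = """\
tcp.ssn_memcap_drop       | Detect                    | 6019
tcp.segment_memcap_drop   | Detect                    | 1281114
tcp.sessions              | Detect                    | 464890
"""
print(drop_counters(sample))
# → {'tcp.ssn_memcap_drop': 6019, 'tcp.segment_memcap_drop': 1281114}
```

Running something like this against each stats.log interval and plotting it next to the files-logged-per-minute graph should show whether the two move together.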

<div><br></div><div>I let it run over the weekend. The number of files logged seems to have an inverse relationship with the traffic I see: Saturday and Sunday log more consistently than weekdays. See the graph below.<img src="cid:ii_137e23915d73fa36" alt="Inline image 1"></div>

<br><div class="gmail_quote">Maybe the box can't handle the traffic? Thanks for all the help.</div></div><br><div class="gmail_quote">On Mon, Jun 11, 2012 at 11:55 AM, Victor Julien <span dir="ltr"><<a href="mailto:victor@inliniac.net" target="_blank">victor@inliniac.net</a>></span> wrote:<br>

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On 06/08/2012 09:54 PM, Brandon Ganem wrote:<br>
> Changed, seems to have made a huge difference. Thank you!<br>
><br>
> I'm not sure if this is related, but I've got Suricata configured to md5<br>
> all files coming across the wire. At start-up it hashes ~7-10k files a<br>
> minute for just a few minutes, then it tapers off until it reaches<br>
> almost zero files hashed per minute. Alerts do not seem to be affected.<br>
<br>
</div>Does there appear to be a correlation with the _drop counters in your<br>
stats.log when that happens?<br>
<br>
That's the only thing I can think of (other than bugs).<br>
<br>
Cheers,<br>
Victor<br>
<div class="im"><br>
<br>
> Sorry for bombarding the list with questions and thank you for the help<br>
> so far.<br>
><br>
> On Fri, Jun 8, 2012 at 2:14 PM, Victor Julien <<a href="mailto:victor@inliniac.net">victor@inliniac.net</a><br>
</div><div class="im">> <mailto:<a href="mailto:victor@inliniac.net">victor@inliniac.net</a>>> wrote:<br>
><br>
>     This may be caused by another option that is only mentioned in the<br>
>     comment block above the stream settings in your yaml:<br>
><br>
>     #   max-sessions: 262144        # 256k concurrent sessions<br>
>     #   prealloc-sessions: 32768    # 32k sessions prealloc'd<br>
><br>
>     Max-sessions puts a limit on the number of concurrent TCP sessions<br>
>     tracked.<br>
><br>
>     Try setting it to something like:<br>
><br>
>     stream:<br>
>      max-sessions: 1000000<br>
>      prealloc-sessions: 500000<br>
><br>
>     Or something :)<br>
><br>
>     On 06/08/2012 07:24 PM, Brandon Ganem wrote:<br>
>     > It looks like *tcp.ssn_memcap_drop       | Detect                    |<br>
>     > 6019 *is starting to add up now too.<br>
>     ><br>
>     > Thanks!<br>
>     ><br>
>     > On Fri, Jun 8, 2012 at 1:09 PM, Brandon Ganem<br>
>     > <<a href="mailto:brandonganem%2Boisf@gmail.com">brandonganem+oisf@gmail.com</a><br>
</div>>     <mailto:<a href="mailto:brandonganem%252Boisf@gmail.com">brandonganem%2Boisf@gmail.com</a>><br>
>     <mailto:<a href="mailto:brandonganem%2Boisf@gmail.com">brandonganem+oisf@gmail.com</a><br>
<div><div class="h5">>     <mailto:<a href="mailto:brandonganem%252Boisf@gmail.com">brandonganem%2Boisf@gmail.com</a>>>> wrote:<br>
>     ><br>
>     >     /Up your memcap settings to 4GB each and see if the numbers<br>
>     improve.<br>
>     >     Both memcap drop stats should be zero when everything's right. /<br>
>     >     Done<br>
>     ><br>
>     >     /This is odd. Your stream related memcap is 1GB, yet this<br>
>     shows 6GB in<br>
>     >     use? Which again doesn't seem to match the memory usage you<br>
>     seem to be<br>
>     >     seeing for the whole process. Smells like a bug to me... /<br>
>     >     /<br>
>     >     /<br>
>     >     Let me know if you want me to compile in some debugging<br>
>     features. If<br>
>     >     I can provide any additional information let me know.<br>
>     ><br>
>     >     CPU / MEM: ~50-125% (similar to before), ~2-2.6GB (similar as well)<br>
>     >     Suricata has only been running for a few minutes, but here is<br>
>     a new<br>
>     >     stats.log:<br>
>     ><br>
>     >     tcp.sessions              | Detect                    | 464890<br>
>     >     *tcp.ssn_memcap_drop       | Detect                    | 0 (maybe<br>
>     >     better, it may have to run for a while to start adding up<br>
>     though?)*<br>
>     >     tcp.pseudo                | Detect                    | 10567<br>
>     >     tcp.invalid_checksum      | Detect                    | 0<br>
>     >     tcp.no_flow               | Detect                    | 0<br>
>     >     tcp.reused_ssn            | Detect                    | 0<br>
>     >     tcp.memuse                | Detect                    | 141604560<br>
>     >     tcp.syn                   | Detect                    | 465555<br>
>     >     tcp.synack                | Detect                    | 233829<br>
>     >     tcp.rst                   | Detect                    | 46181<br>
>     >     *tcp.segment_memcap_drop   | Detect                    |<br>
>     1281114 (I<br>
>     >     don't think this is improving)*<br>
>     >     *tcp.stream_depth_reached  | Detect                    | 70<br>
>     >      (Looks like this is still going up)*<br>
>     >     tcp.reassembly_memuse     | Detect                    | 6442450806<br>
>     >          *(still 6GB not 4GB)*<br>
>     >     *tcp.reassembly_gap        | Detect                    | 44583<br>
>     >     (Still going up)*<br>
>     >     detect.alert              | Detect                    | 25<br>
>     >     flow_mgr.closed_pruned    | FlowManagerThread         | 150973<br>
>     >     flow_mgr.new_pruned       | FlowManagerThread         | 207334<br>
>     >     flow_mgr.est_pruned       | FlowManagerThread         | 0<br>
>     >     flow.memuse               | FlowManagerThread         | 41834880<br>
>     >     flow.spare                | FlowManagerThread         | 10742<br>
>     >     flow.emerg_mode_entered   | FlowManagerThread         | 0<br>
>     >     flow.emerg_mode_over      | FlowManagerThread         | 0<br>
>     >     decoder.pkts              | RxPFR1                    | 17310168<br>
>     >     decoder.bytes             | RxPFR1                    | 7387022602<br>
>     >     decoder.ipv4              | RxPFR1                    | 17309598<br>
>     >     decoder.ipv6              | RxPFR1                    | 0<br>
>     >     decoder.ethernet          | RxPFR1                    | 17310168<br>
>     >     decoder.raw               | RxPFR1                    | 0<br>
>     >     decoder.sll               | RxPFR1                    | 0<br>
>     >     decoder.tcp               | RxPFR1                    | 15519823<br>
>     >     decoder.udp               | RxPFR1                    | 210<br>
>     >     decoder.sctp              | RxPFR1                    | 0<br>
>     >     decoder.icmpv4            | RxPFR1                    | 1323<br>
>     >     decoder.icmpv6            | RxPFR1                    | 0<br>
>     >     decoder.ppp               | RxPFR1                    | 0<br>
>     >     decoder.pppoe             | RxPFR1                    | 0<br>
>     >     decoder.gre               | RxPFR1                    | 0<br>
>     >     decoder.vlan              | RxPFR1                    | 0<br>
>     >     decoder.avg_pkt_size      | RxPFR1                    | 427<br>
>     >     decoder.max_pkt_size      | RxPFR1                    | 1516<br>
>     >     defrag.ipv4.fragments     | RxPFR1                    | 15<br>
>     >     defrag.ipv4.reassembled   | RxPFR1                    | 5<br>
>     >     defrag.ipv4.timeouts      | RxPFR1                    | 0<br>
>     >     defrag.ipv6.fragments     | RxPFR1                    | 0<br>
>     >     defrag.ipv6.reassembled   | RxPFR1                    | 0<br>
>     >     defrag.ipv6.timeouts      | RxPFR1                    | 0<br>
>     ><br>
>     ><br>
>     >     Here's what has been changed in the cfg:<br>
>     ><br>
>     >     flow:<br>
>     >     *  memcap: 4gb*<br>
>     >       hash-size: 65536<br>
>     >       prealloc: 10000<br>
>     >       emergency-recovery: 30<br>
>     >       prune-flows: 5<br>
>     ><br>
>     >     stream:<br>
>     >     *  memcap: 4gb*<br>
>     ><br>
>     >     On Fri, Jun 8, 2012 at 12:31 PM, Victor Julien<br>
>     <<a href="mailto:victor@inliniac.net">victor@inliniac.net</a> <mailto:<a href="mailto:victor@inliniac.net">victor@inliniac.net</a>><br>
</div></div><div class="im">>     >     <mailto:<a href="mailto:victor@inliniac.net">victor@inliniac.net</a> <mailto:<a href="mailto:victor@inliniac.net">victor@inliniac.net</a>>>> wrote:<br>
>     ><br>
>     >         On 06/08/2012 05:59 PM, Brandon Ganem wrote:<br>
>     >         > tcp.reassembly_memuse     | Detect                    |<br>
>     6442450854<br>
>     ><br>
>     >         This is odd. Your stream related memcap is 1GB, yet this shows<br>
>     >         6GB in<br>
>     >         use? Which again doesn't seem to match the memory usage<br>
>     you seem<br>
>     >         to be<br>
>     >         seeing for the whole process. Smells like a bug to me...<br>
>     ><br>
>     >         --<br>
>     >         ---------------------------------------------<br>
>     >         Victor Julien<br>
>     >         <a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
>     >         PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
>     >         ---------------------------------------------<br>
>     ><br>
>     >         _______________________________________________<br>
>     >         Oisf-users mailing list<br>
>     >         <a href="mailto:Oisf-users@openinfosecfoundation.org">Oisf-users@openinfosecfoundation.org</a><br>
>     <mailto:<a href="mailto:Oisf-users@openinfosecfoundation.org">Oisf-users@openinfosecfoundation.org</a>><br>
</div>>     >         <mailto:<a href="mailto:Oisf-users@openinfosecfoundation.org">Oisf-users@openinfosecfoundation.org</a><br>
<div class="im HOEnZb">>     <mailto:<a href="mailto:Oisf-users@openinfosecfoundation.org">Oisf-users@openinfosecfoundation.org</a>>><br>
>     ><br>
>     <a href="http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
>     ><br>
>     ><br>
>     ><br>
><br>
><br>
>     --<br>
>     ---------------------------------------------<br>
>     Victor Julien<br>
>     <a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
>     PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
>     ---------------------------------------------<br>
><br>
><br>
<br>
<br>
</div><div class="HOEnZb"><div class="h5">--<br>
---------------------------------------------<br>
Victor Julien<br>
<a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
---------------------------------------------<br>
<br>
</div></div></blockquote></div><br>