Hi,

On Fri, Jun 8, 2012 at 9:56 PM, Brandon Ganem <brandonganem+oisf@gmail.com> wrote:
> Changed, seems to have made a huge difference. Thank you!
>
> I'm not sure if this is related, but I've got Suricata configured to md5
> all files coming across the wire.

How do you have that configured?

> At start-up it does ~7 to 10k a minute for just a few minutes, then it
> tapers off until it gets to almost zero files hashed every minute. Alerts
> do not seem to be affected.
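
For reference, the usual way to md5 every file on the wire is the file-log
output in suricata.yaml -- a rough sketch only (assuming a build with file
extraction and nss/md5 support; option names may differ between versions):

outputs:
  - file-log:
      enabled: yes
      filename: files-json.log
      append: yes
      force-magic: no
      force-md5: yes    # hash every file seen, not only files matched by rules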

Do you use md5 sigs?
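
That is, rules matching file checksums against a list via the filemd5
keyword -- a made-up example (the list file name and sid are placeholders):

alert http any any -> any any (msg:"Blacklisted file md5"; filemd5:md5-blacklist.txt; sid:9000001; rev:1;)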

> Sorry for bombarding the list with questions, and thank you for the help.
>
> On Fri, Jun 8, 2012 at 2:14 PM, Victor Julien <victor@inliniac.net> wrote:
>> This may be caused by another option that is only mentioned in the
>> comment block above the stream settings in your yaml:
>>
>> #   max-sessions: 262144        # 256k concurrent sessions
>> #   prealloc-sessions: 32768    # 32k sessions prealloc'd
>>
>> Max sessions puts a limit on the max number of concurrent tcp sessions
>> tracked.
>>
>> Try setting it to something like:
>>
>> stream:
>>   max-sessions: 1000000
>>   prealloc-sessions: 500000
>>
>> Or something :)
>>
>> On 06/08/2012 07:24 PM, Brandon Ganem wrote:
>> > It looks like *tcp.ssn_memcap_drop       | Detect                    |
>> > 6019* is starting to add up now too.
>> >
>> > Thanks!
>> >
>> > On Fri, Jun 8, 2012 at 1:09 PM, Brandon Ganem
>> > <brandonganem+oisf@gmail.com> wrote:
>> >
>> >     /Up your memcap settings to 4GB each and see if the numbers improve.
>> >     Both memcap drop stats should be zero when everything's right./
>> >     Done
>> >
>> >     /This is odd. Your stream related memcap is 1GB, yet this shows 6GB in
>> >     use? Which again doesn't seem to match the memory usage you seem to be
>> >     seeing for the whole process. Smells like a bug to me.../
>> >
>> >     Let me know if you want me to compile in some debugging features. If
>> >     I can provide any additional information, let me know.
>> >
>> >     CPU / MEM: ~50-125% (similar to before), ~2-2.6GB (similar as well).
>> >     Suricata has only been running for a few minutes, but here is a new
>> >     stats.log:
>> >     tcp.sessions              | Detect                    | 464890
>> >     *tcp.ssn_memcap_drop       | Detect                    | 0 (maybe
>> >     better, it may have to run for a while to start adding up though?)*
>> >     tcp.pseudo                | Detect                    | 10567
>> >     tcp.invalid_checksum      | Detect                    | 0
>> >     tcp.no_flow               | Detect                    | 0
>> >     tcp.reused_ssn            | Detect                    | 0
>> >     tcp.memuse                | Detect                    | 141604560
>> >     tcp.syn                   | Detect                    | 465555
>> >     tcp.synack                | Detect                    | 233829
>> >     tcp.rst                   | Detect                    | 46181
>> >     *tcp.segment_memcap_drop   | Detect                    | 1281114 (I
>> >     don't think this is improving)*
>> >     *tcp.stream_depth_reached  | Detect                    | 70
>> >     (Looks like this is still going up)*
>> >     tcp.reassembly_memuse     | Detect                    | 6442450806
>> >     *(still 6GB not 4GB)*
>> >     *tcp.reassembly_gap        | Detect                    | 44583
>> >     (Still going up)*
>> >     detect.alert              | Detect                    | 25
>> >     flow_mgr.closed_pruned    | FlowManagerThread         | 150973
>> >     flow_mgr.new_pruned       | FlowManagerThread         | 207334
>> >     flow_mgr.est_pruned       | FlowManagerThread         | 0
>> >     flow.memuse               | FlowManagerThread         | 41834880
>> >     flow.spare                | FlowManagerThread         | 10742
>> >     flow.emerg_mode_entered   | FlowManagerThread         | 0
>> >     flow.emerg_mode_over      | FlowManagerThread         | 0
>> >     decoder.pkts              | RxPFR1                    | 17310168
>> >     decoder.bytes             | RxPFR1                    | 7387022602
>> >     decoder.ipv4              | RxPFR1                    | 17309598
>> >     decoder.ipv6              | RxPFR1                    | 0
>> >     decoder.ethernet          | RxPFR1                    | 17310168
>> >     decoder.raw               | RxPFR1                    | 0
>> >     decoder.sll               | RxPFR1                    | 0
>> >     decoder.tcp               | RxPFR1                    | 15519823
>> >     decoder.udp               | RxPFR1                    | 210
>> >     decoder.sctp              | RxPFR1                    | 0
>> >     decoder.icmpv4            | RxPFR1                    | 1323
>> >     decoder.icmpv6            | RxPFR1                    | 0
>> >     decoder.ppp               | RxPFR1                    | 0
>> >     decoder.pppoe             | RxPFR1                    | 0
>> >     decoder.gre               | RxPFR1                    | 0
>> >     decoder.vlan              | RxPFR1                    | 0
>> >     decoder.avg_pkt_size      | RxPFR1                    | 427
>> >     decoder.max_pkt_size      | RxPFR1                    | 1516
>> >     defrag.ipv4.fragments     | RxPFR1                    | 15
>> >     defrag.ipv4.reassembled   | RxPFR1                    | 5
>> >     defrag.ipv4.timeouts      | RxPFR1                    | 0
>> >     defrag.ipv6.fragments     | RxPFR1                    | 0
>> >     defrag.ipv6.reassembled   | RxPFR1                    | 0
>> >     defrag.ipv6.timeouts      | RxPFR1                    | 0
>> >
>> >     Here's what has been changed in the cfg:
>> >
>> >     flow:
>> >     *  memcap: 4gb*
>> >       hash-size: 65536
>> >       prealloc: 10000
>> >       emergency-recovery: 30
>> >       prune-flows: 5
>> >
>> >     stream:
>> >     *  memcap: 4gb*
>> >
>> >     On Fri, Jun 8, 2012 at 12:31 PM, Victor Julien <victor@inliniac.net> wrote:
>> >
>> >         On 06/08/2012 05:59 PM, Brandon Ganem wrote:
>> >         > tcp.reassembly_memuse     | Detect                    | 6442450854
>> >
>> >         This is odd. Your stream related memcap is 1GB, yet this shows
>> >         6GB in use? Which again doesn't seem to match the memory usage
>> >         you seem to be seeing for the whole process. Smells like a bug
>> >         to me...
>> >
>> >         --
>> >         ---------------------------------------------
>> >         Victor Julien
>> >         http://www.inliniac.net/
>> >         PGP: http://www.inliniac.net/victorjulien.asc
>> >         ---------------------------------------------
>> >
>>
>> --
>> ---------------------------------------------
>> Victor Julien
>> http://www.inliniac.net/
>> PGP: http://www.inliniac.net/victorjulien.asc
>> ---------------------------------------------
>
> _______________________________________________
> Oisf-users mailing list
> Oisf-users@openinfosecfoundation.org
> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users

--
Regards,
Peter Manev