Hi Brandon - which Suricata rev/release are you using?<br><br><div class="gmail_quote">On Sun, Jun 10, 2012 at 5:07 PM, Brandon Ganem <span dir="ltr"><<a href="mailto:brandonganem+oisf@gmail.com" target="_blank">brandonganem+oisf@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Correct, although it will still log one or two files per minute sometimes. <div>At startup it logs 7k-10k files a minute for about 3-5 minutes, gradually reducing the number of files logged until it hits 1-2 a minute, sometimes none for a large span of time.</div>
<div><br></div><div>My apologies if I'm not describing this well.<div><div class="h5"><br><br><div class="gmail_quote">On Sun, Jun 10, 2012 at 4:31 AM, Peter Manev <span dir="ltr"><<a href="mailto:petermanev@gmail.com" target="_blank">petermanev@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Brandon,<br>
<br>
OK, I am not sure what the issue is.<br>
From what I understood, it logs MD5s for a period of time and then<br>
stops; do I understand correctly?<br>
<br>
thank you<br>
<div><br>
On 6/9/2012 6:05 PM, Brandon Ganem wrote:<br>
> Peter,<br>
><br>
> # output module to log files tracked in an easily parsable JSON format<br>
> - file-log:<br>
> enabled: yes<br>
> filename: files-json.log<br>
> append: yes<br>
> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'<br>
> force-magic: yes # force logging magic on all logged files<br>
> force-md5: yes # force logging of md5 checksums<br>
><br>
</div>> *do you use md5 sigs?*<br>
<div><div>> No. I pretty much followed the setup from here:<br>
> <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/MD5" target="_blank">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/MD5</a><br>
> The very bottom heading. I do have file store enabled in case there is a<br>
> situation where I want to start plucking specific files off the wire. Could<br>
> that be causing the issue?<br>
><br>
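(For reference, the wiki setup mentioned above typically pairs the file-log output quoted earlier with a file-store section along these lines; this is only a sketch of that era's suricata.yaml, not Brandon's actual config, and the log-dir value is illustrative:)<br>
<br>
  - file-store:<br>
      enabled: yes        # write extracted files to disk<br>
      log-dir: files      # directory, relative to the default log dir<br>
      force-magic: no     # compute magic only when a rule requests it<br>
      force-md5: no       # compute md5 only when a rule requests it<br>
<br>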
> On Sat, Jun 9, 2012 at 3:09 AM, Peter Manev <<a href="mailto:petermanev@gmail.com" target="_blank">petermanev@gmail.com</a>> wrote:<br>
><br>
>> Hi,<br>
>><br>
>> On Fri, Jun 8, 2012 at 9:56 PM, Brandon Ganem <<a href="mailto:brandonganem%2Boisf@gmail.com" target="_blank">brandonganem+oisf@gmail.com</a><br>
>>> wrote:<br>
>>> Changed, seems to have made a huge difference. Thank you!<br>
>>><br>
>>> I'm not sure if this is related, but I've got Suricata configured to MD5<br>
>>> all files coming across the wire.<br>
>>><br>
>> how do you have that configured?<br>
>><br>
>><br>
>>> At start-up it hashes ~7-10k files a minute for just a few minutes, then it<br>
>>> tapers off until it gets to almost zero files hashed every minute. Alerts<br>
>>> do not seem to be affected.<br>
>>><br>
>> do you use md5 sigs?<br>
>><br>
>><br>
>>> Sorry for bombarding the list with questions and thank you for the help.<br>
>>><br>
>>> On Fri, Jun 8, 2012 at 2:14 PM, Victor Julien <<a href="mailto:victor@inliniac.net" target="_blank">victor@inliniac.net</a>> wrote:<br>
>>><br>
>>>> This may be caused by another option that is only mentioned in the<br>
>>>> comment block above the stream settings in your yaml:<br>
>>>><br>
>>>> # max-sessions: 262144 # 256k concurrent sessions<br>
>>>> # prealloc-sessions: 32768 # 32k sessions prealloc'd<br>
>>>><br>
>>>> Max sessions puts a limit on the maximum number of concurrent TCP sessions<br>
>>>> tracked.<br>
>>>><br>
>>>> Try setting it to something like:<br>
>>>><br>
>>>> stream:<br>
>>>> max-sessions: 1000000<br>
>>>> prealloc-sessions: 500000<br>
>>>><br>
>>>> Or something :)<br>
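<br>
(For context, in the 1.x suricata.yaml these options live directly under the stream section; a minimal sketch using the values suggested above, with the memcap figure being purely illustrative:)<br>
<br>
stream:<br>
  memcap: 1gb<br>
  max-sessions: 1000000      # upper bound on concurrent TCP sessions tracked<br>
  prealloc-sessions: 500000  # sessions pre-allocated at startup<br>
<br>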
>>>><br>
>>>> On 06/08/2012 07:24 PM, Brandon Ganem wrote:<br>
>>>>> It looks like *tcp.ssn_memcap_drop | Detect |<br>
>>>>> 6019* is starting to add up now too.<br>
>>>>><br>
>>>>> Thanks!<br>
>>>>><br>
>>>>> On Fri, Jun 8, 2012 at 1:09 PM, Brandon Ganem<br>
>>>>> <<a href="mailto:brandonganem%2Boisf@gmail.com" target="_blank">brandonganem+oisf@gmail.com</a> <mailto:<a href="mailto:brandonganem%2Boisf@gmail.com" target="_blank">brandonganem+oisf@gmail.com</a>>><br>
>>>> wrote:<br>
>>>>> /Up your memcap settings to 4GB each and see if the numbers<br>
>>>> improve.<br>
>>>>> Both memcap drop stats should be zero when everything's right. /<br>
>>>>> Done<br>
>>>>><br>
>>>>> /This is odd. Your stream related memcap is 1GB, yet this shows<br>
>>>> 6GB in<br>
>>>>> use? Which again doesn't seem to match the memory usage you seem<br>
>>>> to be<br>
>>>>> seeing for the whole process. Smells like a bug to me... /<br>
>>>>> Let me know if you want me to compile in some debugging features.<br>
>>>> If<br>
>>>>> I can provide any additional information, let me know.<br>
>>>>><br>
>>>>> CPU / MEM: ~50-125% (similar to before), ~2-2.6GB (similar as well).<br>
>>>>> Suricata has only been running for a few minutes, but here is a new<br>
>>>>> stats.log:<br>
>>>>><br>
>>>>> tcp.sessions | Detect | 464890<br>
>>>>> *tcp.ssn_memcap_drop | Detect | 0 (maybe<br>
>>>>> better, it may have to run for a while to start adding up though?)*<br>
>>>>> tcp.pseudo | Detect | 10567<br>
>>>>> tcp.invalid_checksum | Detect | 0<br>
>>>>> tcp.no_flow | Detect | 0<br>
>>>>> tcp.reused_ssn | Detect | 0<br>
>>>>> tcp.memuse | Detect | 141604560<br>
>>>>> tcp.syn | Detect | 465555<br>
>>>>> tcp.synack | Detect | 233829<br>
>>>>> tcp.rst | Detect | 46181<br>
>>>>> *tcp.segment_memcap_drop | Detect | 1281114 (I<br>
>>>>> don't think this is improving)*<br>
>>>>> *tcp.stream_depth_reached | Detect | 70<br>
>>>>> (Looks like this is still going up)*<br>
>>>>> tcp.reassembly_memuse | Detect | 6442450806<br>
>>>>> *(still 6GB not 4GB)*<br>
>>>>> *tcp.reassembly_gap | Detect | 44583<br>
>>>>> (Still going up)*<br>
>>>>> detect.alert | Detect | 25<br>
>>>>> flow_mgr.closed_pruned | FlowManagerThread | 150973<br>
>>>>> flow_mgr.new_pruned | FlowManagerThread | 207334<br>
>>>>> flow_mgr.est_pruned | FlowManagerThread | 0<br>
>>>>> flow.memuse | FlowManagerThread | 41834880<br>
>>>>> flow.spare | FlowManagerThread | 10742<br>
>>>>> flow.emerg_mode_entered | FlowManagerThread | 0<br>
>>>>> flow.emerg_mode_over | FlowManagerThread | 0<br>
>>>>> decoder.pkts | RxPFR1 | 17310168<br>
>>>>> decoder.bytes | RxPFR1 | 7387022602<br>
>>>>> decoder.ipv4 | RxPFR1 | 17309598<br>
>>>>> decoder.ipv6 | RxPFR1 | 0<br>
>>>>> decoder.ethernet | RxPFR1 | 17310168<br>
>>>>> decoder.raw | RxPFR1 | 0<br>
>>>>> decoder.sll | RxPFR1 | 0<br>
>>>>> decoder.tcp | RxPFR1 | 15519823<br>
>>>>> decoder.udp | RxPFR1 | 210<br>
>>>>> decoder.sctp | RxPFR1 | 0<br>
>>>>> decoder.icmpv4 | RxPFR1 | 1323<br>
>>>>> decoder.icmpv6 | RxPFR1 | 0<br>
>>>>> decoder.ppp | RxPFR1 | 0<br>
>>>>> decoder.pppoe | RxPFR1 | 0<br>
>>>>> decoder.gre | RxPFR1 | 0<br>
>>>>> decoder.vlan | RxPFR1 | 0<br>
>>>>> decoder.avg_pkt_size | RxPFR1 | 427<br>
>>>>> decoder.max_pkt_size | RxPFR1 | 1516<br>
>>>>> defrag.ipv4.fragments | RxPFR1 | 15<br>
>>>>> defrag.ipv4.reassembled | RxPFR1 | 5<br>
>>>>> defrag.ipv4.timeouts | RxPFR1 | 0<br>
>>>>> defrag.ipv6.fragments | RxPFR1 | 0<br>
>>>>> defrag.ipv6.reassembled | RxPFR1 | 0<br>
>>>>> defrag.ipv6.timeouts | RxPFR1 | 0<br>
>>>>><br>
>>>>><br>
>>>>> Here's what has been changed in the cfg:<br>
>>>>><br>
>>>>> flow:<br>
>>>>> * memcap: 4gb*<br>
>>>>> hash-size: 65536<br>
>>>>> prealloc: 10000<br>
>>>>> emergency-recovery: 30<br>
>>>>> prune-flows: 5<br>
>>>>><br>
>>>>> stream:<br>
>>>>> * memcap: 4gb*<br>
>>>>><br>
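(Worth noting: tcp.reassembly_memuse and tcp.segment_memcap_drop are bounded by the reassembly memcap, which is a separate setting nested under stream, not by stream.memcap itself; a sketch with illustrative values:)<br>
<br>
stream:<br>
  memcap: 4gb<br>
  reassembly:<br>
    memcap: 4gb   # limits segment reassembly memory (tcp.reassembly_memuse)<br>
    depth: 1mb    # per-stream reassembly depth; also bounds file tracking/md5<br>
<br>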
>>>>> On Fri, Jun 8, 2012 at 12:31 PM, Victor Julien <<br>
>>>> <a href="mailto:victor@inliniac.net" target="_blank">victor@inliniac.net</a><br>
>>>>> <mailto:<a href="mailto:victor@inliniac.net" target="_blank">victor@inliniac.net</a>>> wrote:<br>
>>>>><br>
>>>>> On 06/08/2012 05:59 PM, Brandon Ganem wrote:<br>
>>>>> > tcp.reassembly_memuse | Detect |<br>
>>>> 6442450854<br>
>>>>> This is odd. Your stream related memcap is 1GB, yet this shows<br>
>>>>> 6GB in<br>
>>>>> use? Which again doesn't seem to match the memory usage you<br>
>>>> seem<br>
>>>>> to be<br>
>>>>> seeing for the whole process. Smells like a bug to me...<br>
>>>>><br>
>>>>> --<br>
>>>>> ---------------------------------------------<br>
>>>>> Victor Julien<br>
>>>>> <a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
>>>>> PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
>>>>> ---------------------------------------------<br>
>>>>><br>
>>>>> _______________________________________________<br>
>>>>> Oisf-users mailing list<br>
>>>>> <a href="mailto:Oisf-users@openinfosecfoundation.org" target="_blank">Oisf-users@openinfosecfoundation.org</a><br>
>>>>> <mailto:<a href="mailto:Oisf-users@openinfosecfoundation.org" target="_blank">Oisf-users@openinfosecfoundation.org</a>><br>
>>>>><br>
>>>> <a href="http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
>>>>><br>
>>>>><br>
>>>><br>
>>>> --<br>
>>>> ---------------------------------------------<br>
>>>> Victor Julien<br>
>>>> <a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
>>>> PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
>>>> ---------------------------------------------<br>
>>>><br>
>>>><br>
>>> _______________________________________________<br>
>>> Oisf-users mailing list<br>
>>> <a href="mailto:Oisf-users@openinfosecfoundation.org" target="_blank">Oisf-users@openinfosecfoundation.org</a><br>
>>> <a href="http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
>>><br>
>>><br>
>><br>
>> --<br>
>> Regards,<br>
>> Peter Manev<br>
>><br>
>><br>
<br>
<br>
</div></div><span><font color="#888888">--<br>
Regards,<br>
Peter Manev<br>
<br>
</font></span></blockquote></div><br></div></div></div>
</blockquote></div><br><br clear="all"><br>-- <br><div>Regards,</div>
<div>Peter Manev</div><br>