[Oisf-users] Couple of questions regarding stats.log
Brandon Ganem
brandonganem+oisf at gmail.com
Sat Jun 9 16:13:34 UTC 2012
Peter,
#output module to log files tracked in an easily parsable json format
  - file-log:
      enabled: yes
      filename: files-json.log
      append: yes
      #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
      force-magic: yes # force logging magic on all logged files
      force-md5: yes # force logging of md5 checksums
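(With force-magic and force-md5 set, every file Suricata tracks gets magic'd
and hashed whether or not any rule matches it, so the hash rate should follow
the traffic volume rather than the ruleset.)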
*do you use md5 sigs?*
No. I pretty much followed the setup from here:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/MD5
The very bottom heading. I do have file-store enabled in case there is ever a
situation where I want to start plucking specific files off the wire. Could
that be causing the issue?
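For reference, the file-store side of my config is essentially the stock
suricata.yaml section (values here are illustrative, not copied from my box):

  - file-store:
      enabled: yes
      log-dir: files     # directory extracted files are written to
      force-magic: no    # file-log above already forces magic
      force-md5: no      # file-log above already forces md5

and I don't have any rules loaded that use the filemd5 keyword.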
On Sat, Jun 9, 2012 at 3:09 AM, Peter Manev <petermanev at gmail.com> wrote:
> Hi,
>
> On Fri, Jun 8, 2012 at 9:56 PM, Brandon Ganem
> <brandonganem+oisf at gmail.com> wrote:
>
>> Changed, seems to have made a huge difference. Thank you!
>>
>> I'm not sure if this is related, but I've got Suricata configured to md5
>> all files coming across the wire.
>>
> how do you have that configured?
>
>
>> At start-up it hashes roughly 7-10k files a minute for the first few
>> minutes, then it tapers off until it gets to almost zero files hashed per
>> minute. Alerts do not seem to be affected.
>>
> do you use md5 sigs?
>
>
>>
>> Sorry for bombarding the list with questions and thank you for the help.
>>
>> On Fri, Jun 8, 2012 at 2:14 PM, Victor Julien <victor at inliniac.net> wrote:
>>
>>> This may be caused by another option that is only mentioned in the
>>> comment block above the stream settings in your yaml:
>>>
>>> # max-sessions: 262144 # 256k concurrent sessions
>>> # prealloc-sessions: 32768 # 32k sessions prealloc'd
>>>
>>> Max-sessions puts a limit on the number of concurrent tcp sessions
>>> tracked.
>>>
>>> Try setting it to something like:
>>>
>>> stream:
>>>   max-sessions: 1000000
>>>   prealloc-sessions: 500000
>>>
>>> Or something :)
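>>>
>>> With those bumped, the two memcap drop counters in stats.log
>>> (tcp.ssn_memcap_drop and tcp.segment_memcap_drop) should stay at zero;
>>> if they keep climbing, the limits are still too low for your traffic.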
>>>
>>> On 06/08/2012 07:24 PM, Brandon Ganem wrote:
>>> > It looks like *tcp.ssn_memcap_drop | Detect | 6019*
>>> > is starting to add up now too.
>>> >
>>> > Thanks!
>>> >
>>> > On Fri, Jun 8, 2012 at 1:09 PM, Brandon Ganem
>>> > <brandonganem+oisf at gmail.com> wrote:
>>> >
>>> > /Up your memcap settings to 4GB each and see if the numbers improve.
>>> > Both memcap drop stats should be zero when everything's right./
>>> > Done
>>> >
>>> > /This is odd. Your stream related memcap is 1GB, yet this shows 6GB in
>>> > use? Which again doesn't seem to match the memory usage you seem to be
>>> > seeing for the whole process. Smells like a bug to me.../
>>> > Let me know if you want me to compile in some debugging features, or
>>> > if I can provide any additional information.
>>> >
>>> > CPU / MEM: ~50-125% (similar to before), ~2-2.6GB (similar as well).
>>> > Suricata has only been running for a few minutes, but here is a new
>>> > stats.log:
>>> >
>>> > tcp.sessions | Detect | 464890
>>> > *tcp.ssn_memcap_drop | Detect | 0 (maybe
>>> > better; it may have to run for a while to start adding up though?)*
>>> > tcp.pseudo | Detect | 10567
>>> > tcp.invalid_checksum | Detect | 0
>>> > tcp.no_flow | Detect | 0
>>> > tcp.reused_ssn | Detect | 0
>>> > tcp.memuse | Detect | 141604560
>>> > tcp.syn | Detect | 465555
>>> > tcp.synack | Detect | 233829
>>> > tcp.rst | Detect | 46181
>>> > *tcp.segment_memcap_drop | Detect | 1281114 (I
>>> > don't think this is improving)*
>>> > *tcp.stream_depth_reached | Detect | 70
>>> > (looks like this is still going up)*
>>> > tcp.reassembly_memuse | Detect | 6442450806
>>> > *(still 6GB not 4GB)*
>>> > *tcp.reassembly_gap | Detect | 44583
>>> > (Still going up)*
>>> > detect.alert | Detect | 25
>>> > flow_mgr.closed_pruned | FlowManagerThread | 150973
>>> > flow_mgr.new_pruned | FlowManagerThread | 207334
>>> > flow_mgr.est_pruned | FlowManagerThread | 0
>>> > flow.memuse | FlowManagerThread | 41834880
>>> > flow.spare | FlowManagerThread | 10742
>>> > flow.emerg_mode_entered | FlowManagerThread | 0
>>> > flow.emerg_mode_over | FlowManagerThread | 0
>>> > decoder.pkts | RxPFR1 | 17310168
>>> > decoder.bytes | RxPFR1 | 7387022602
>>> > decoder.ipv4 | RxPFR1 | 17309598
>>> > decoder.ipv6 | RxPFR1 | 0
>>> > decoder.ethernet | RxPFR1 | 17310168
>>> > decoder.raw | RxPFR1 | 0
>>> > decoder.sll | RxPFR1 | 0
>>> > decoder.tcp | RxPFR1 | 15519823
>>> > decoder.udp | RxPFR1 | 210
>>> > decoder.sctp | RxPFR1 | 0
>>> > decoder.icmpv4 | RxPFR1 | 1323
>>> > decoder.icmpv6 | RxPFR1 | 0
>>> > decoder.ppp | RxPFR1 | 0
>>> > decoder.pppoe | RxPFR1 | 0
>>> > decoder.gre | RxPFR1 | 0
>>> > decoder.vlan | RxPFR1 | 0
>>> > decoder.avg_pkt_size | RxPFR1 | 427
>>> > decoder.max_pkt_size | RxPFR1 | 1516
>>> > defrag.ipv4.fragments | RxPFR1 | 15
>>> > defrag.ipv4.reassembled | RxPFR1 | 5
>>> > defrag.ipv4.timeouts | RxPFR1 | 0
>>> > defrag.ipv6.fragments | RxPFR1 | 0
>>> > defrag.ipv6.reassembled | RxPFR1 | 0
>>> > defrag.ipv6.timeouts | RxPFR1 | 0
>>> >
>>> >
>>> > Here's what has been changed in the cfg:
>>> >
>>> > flow:
>>> >   *memcap: 4gb*
>>> >   hash-size: 65536
>>> >   prealloc: 10000
>>> >   emergency-recovery: 30
>>> >   prune-flows: 5
>>> >
>>> > stream:
>>> >   *memcap: 4gb*
>>> >
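>>> > (Note: tcp.reassembly_memuse is capped by its own setting, nested
>>> > under stream and separate from the stream memcap above - if only the
>>> > two memcaps shown were raised, reassembly is still governed by a
>>> > section like the following; values here are illustrative, not from
>>> > my config:)
>>> >
>>> > stream:
>>> >   memcap: 4gb
>>> >   reassembly:
>>> >     memcap: 6gb   # tcp.reassembly_memuse is checked against this
>>> >     depth: 1mb    # reassemble up to 1mb per session
>>> >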
>>> > On Fri, Jun 8, 2012 at 12:31 PM, Victor Julien
>>> > <victor at inliniac.net> wrote:
>>> >
>>> > On 06/08/2012 05:59 PM, Brandon Ganem wrote:
>>> > > tcp.reassembly_memuse | Detect | 6442450854
>>> >
>>> > This is odd. Your stream related memcap is 1GB, yet this shows 6GB in
>>> > use? Which again doesn't seem to match the memory usage you seem to be
>>> > seeing for the whole process. Smells like a bug to me...
>>> >
>>> > --
>>> > ---------------------------------------------
>>> > Victor Julien
>>> > http://www.inliniac.net/
>>> > PGP: http://www.inliniac.net/victorjulien.asc
>>> > ---------------------------------------------
>>> >
>>>
>>>
>>> --
>>> ---------------------------------------------
>>> Victor Julien
>>> http://www.inliniac.net/
>>> PGP: http://www.inliniac.net/victorjulien.asc
>>> ---------------------------------------------
>>>
>>>
>>
>
>
> --
> Regards,
> Peter Manev
>
>