[Oisf-users] Couple of questions regarding stats.log
Brandon Ganem
brandonganem+oisf at gmail.com
Tue Jun 12 19:48:12 UTC 2012
Peter,
It looks to be rev 9f7588a (it was the latest git at the time, about a week ago).
Output from suricata --build-info:
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:503) <Info> (SCPrintBuildInfo)
-- This is Suricata version 1.3dev (rev 9f7588a)
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:576) <Info> (SCPrintBuildInfo)
-- Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET
HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK
HAVE_HTP_TX_GET_RESPONSE_HEADERS_RAW PCRE_JIT HAVE_NSS
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:590) <Info> (SCPrintBuildInfo)
-- 64-bits, Little-endian architecture
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:592) <Info> (SCPrintBuildInfo)
-- GCC version 4.5.2, C version 199901
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:598) <Info> (SCPrintBuildInfo)
-- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_1
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:601) <Info> (SCPrintBuildInfo)
-- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_2
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:604) <Info> (SCPrintBuildInfo)
-- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:607) <Info> (SCPrintBuildInfo)
-- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:610) <Info> (SCPrintBuildInfo)
-- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_16
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:614) <Info> (SCPrintBuildInfo)
-- compiled with -fstack-protector
[12882] 12/6/2012 -- 14:23:26 - (suricata.c:620) <Info> (SCPrintBuildInfo)
-- compiled with _FORTIFY_SOURCE=2
Victor:
Honestly, it's hard to say. I'll try to correlate the drops with the
lower-than-expected log volume.
I let it run over the weekend. The number of files logged seems to have an
inverse relationship with the traffic I see: Saturday and Sunday log more
consistently than weekdays. See graph below. [image: graph of files logged
over time, attached as image.png]
Maybe the box can't handle the traffic? Thanks for all the help.
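In case it helps to show what I mean by correlating: a quick Python sketch
that pulls the drop-related counters out of a stats.log dump. The
"counter | thread | value" layout is taken from the stats.log output quoted
below; the sample lines and filtering are just illustrative.

```python
# Sketch: parse Suricata stats.log lines of the form
#   counter_name | thread_name | value
# and keep the most recent value per (counter, thread), so drop
# counters can be lined up against file-extraction volume over time.

def parse_counters(lines):
    """Return {(counter, thread): value}; later dumps overwrite earlier ones."""
    counters = {}
    for line in lines:
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and parts[2].isdigit():
            counters[(parts[0], parts[1])] = int(parts[2])
    return counters

# Illustrative sample, values copied from the dumps in this thread.
sample = [
    "tcp.ssn_memcap_drop | Detect | 6019",
    "tcp.segment_memcap_drop | Detect | 1281114",
    "tcp.sessions | Detect | 464890",
]

# Only the drop counters are interesting for this correlation.
drops = {k: v for k, v in parse_counters(sample).items()
         if k[0].endswith("_drop")}
for (name, thread), value in sorted(drops.items()):
    print(f"{name:30} {thread:10} {value}")
```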
On Mon, Jun 11, 2012 at 11:55 AM, Victor Julien <victor at inliniac.net> wrote:
> On 06/08/2012 09:54 PM, Brandon Ganem wrote:
> > Changed, seems to have made a huge difference. Thank you!
> >
> > I'm not sure if this is related, but I've got Suricata configured to MD5
> > all files coming across the wire. At start-up it hashes ~7 to 10k files a
> > minute for just a few minutes, then it tapers off until it gets to almost
> > zero files hashed per minute. Alerts do not seem to be affected.
>
> Does there appear to be a correlation with the _drop counters in your
> stats.log when that happens?
>
> That's the only thing I can think of (other than bugs).
>
> Cheers,
> Victor
>
>
> > Sorry for bombarding the list with questions and thank you for the help
> > so far.
> >
> > On Fri, Jun 8, 2012 at 2:14 PM, Victor Julien <victor at inliniac.net> wrote:
> >
> > This may be caused by another option that is only mentioned in the
> > comment block above the stream settings in your yaml:
> >
> > # max-sessions: 262144 # 256k concurrent sessions
> > # prealloc-sessions: 32768 # 32k sessions prealloc'd
> >
> > max-sessions limits the maximum number of concurrent TCP sessions
> > tracked.
> >
> > Try setting it to something like:
> >
> > stream:
> > max-sessions: 1000000
> > prealloc-sessions: 500000
> >
> > Or something :)
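> >
> > Spelled out together with the memcap changes discussed earlier in this
> > thread (illustrative values, not tuned recommendations), that part of
> > suricata.yaml might look like:
> >
> > ```yaml
> > stream:
> >   memcap: 4gb
> >   max-sessions: 1000000
> >   prealloc-sessions: 500000
> >   reassembly:
> >     memcap: 4gb
> > ```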
> >
> > On 06/08/2012 07:24 PM, Brandon Ganem wrote:
> > > It looks like *tcp.ssn_memcap_drop | Detect | 6019* is starting
> > > to add up now too.
> > >
> > > Thanks!
> > >
> > > On Fri, Jun 8, 2012 at 1:09 PM, Brandon Ganem
> > > <brandonganem+oisf at gmail.com> wrote:
> > >
> > > /Up your memcap settings to 4GB each and see if the numbers improve.
> > > Both memcap drop stats should be zero when everything's right./
> > > Done
> > >
> > > /This is odd. Your stream related memcap is 1GB, yet this shows 6GB
> > > in use? Which again doesn't seem to match the memory usage you seem
> > > to be seeing for the whole process. Smells like a bug to me.../
> > >
> > > Let me know if you want me to compile in some debugging features. If
> > > I can provide any additional information, let me know.
> > >
> > > CPU / MEM: ~50-125% (similar to before), ~2-2.6GB (similar as well).
> > > Suricata has only been running for a few minutes, but here is a new
> > > stats.log:
> > > tcp.sessions              | Detect            | 464890
> > > *tcp.ssn_memcap_drop      | Detect            | 0* (maybe better; it
> > > may have to run for a while to start adding up though?)
> > > tcp.pseudo                | Detect            | 10567
> > > tcp.invalid_checksum      | Detect            | 0
> > > tcp.no_flow               | Detect            | 0
> > > tcp.reused_ssn            | Detect            | 0
> > > tcp.memuse                | Detect            | 141604560
> > > tcp.syn                   | Detect            | 465555
> > > tcp.synack                | Detect            | 233829
> > > tcp.rst                   | Detect            | 46181
> > > *tcp.segment_memcap_drop  | Detect            | 1281114* (I don't
> > > think this is improving)
> > > *tcp.stream_depth_reached | Detect            | 70* (looks like this
> > > is still going up)
> > > tcp.reassembly_memuse     | Detect            | 6442450806 *(still
> > > 6GB, not 4GB)*
> > > *tcp.reassembly_gap       | Detect            | 44583* (still going up)
> > > detect.alert              | Detect            | 25
> > > flow_mgr.closed_pruned    | FlowManagerThread | 150973
> > > flow_mgr.new_pruned       | FlowManagerThread | 207334
> > > flow_mgr.est_pruned       | FlowManagerThread | 0
> > > flow.memuse               | FlowManagerThread | 41834880
> > > flow.spare                | FlowManagerThread | 10742
> > > flow.emerg_mode_entered   | FlowManagerThread | 0
> > > flow.emerg_mode_over      | FlowManagerThread | 0
> > > decoder.pkts              | RxPFR1            | 17310168
> > > decoder.bytes             | RxPFR1            | 7387022602
> > > decoder.ipv4              | RxPFR1            | 17309598
> > > decoder.ipv6              | RxPFR1            | 0
> > > decoder.ethernet          | RxPFR1            | 17310168
> > > decoder.raw               | RxPFR1            | 0
> > > decoder.sll               | RxPFR1            | 0
> > > decoder.tcp               | RxPFR1            | 15519823
> > > decoder.udp               | RxPFR1            | 210
> > > decoder.sctp              | RxPFR1            | 0
> > > decoder.icmpv4            | RxPFR1            | 1323
> > > decoder.icmpv6            | RxPFR1            | 0
> > > decoder.ppp               | RxPFR1            | 0
> > > decoder.pppoe             | RxPFR1            | 0
> > > decoder.gre               | RxPFR1            | 0
> > > decoder.vlan              | RxPFR1            | 0
> > > decoder.avg_pkt_size      | RxPFR1            | 427
> > > decoder.max_pkt_size      | RxPFR1            | 1516
> > > defrag.ipv4.fragments     | RxPFR1            | 15
> > > defrag.ipv4.reassembled   | RxPFR1            | 5
> > > defrag.ipv4.timeouts      | RxPFR1            | 0
> > > defrag.ipv6.fragments     | RxPFR1            | 0
> > > defrag.ipv6.reassembled   | RxPFR1            | 0
> > > defrag.ipv6.timeouts      | RxPFR1            | 0
> > >
> > >
> > > Here's what has been changed in the cfg:
> > >
> > > flow:
> > >   *memcap: 4gb*
> > >   hash-size: 65536
> > >   prealloc: 10000
> > >   emergency-recovery: 30
> > >   prune-flows: 5
> > >
> > > stream:
> > >   *memcap: 4gb*
> > >
> > > On Fri, Jun 8, 2012 at 12:31 PM, Victor Julien
> > > <victor at inliniac.net> wrote:
> > >
> > > On 06/08/2012 05:59 PM, Brandon Ganem wrote:
> > > > tcp.reassembly_memuse | Detect | 6442450854
> > >
> > > This is odd. Your stream related memcap is 1GB, yet this shows 6GB
> > > in use? Which again doesn't seem to match the memory usage you seem
> > > to be seeing for the whole process. Smells like a bug to me...
> > >
> > > --
> > > ---------------------------------------------
> > > Victor Julien
> > > http://www.inliniac.net/
> > > PGP: http://www.inliniac.net/victorjulien.asc
> > > ---------------------------------------------
> > >
> > > _______________________________________________
> > > Oisf-users mailing list
> > > Oisf-users at openinfosecfoundation.org
> > > http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> >
> >
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 10025 bytes
Desc: not available
URL: <http://lists.openinfosecfoundation.org/pipermail/oisf-users/attachments/20120612/0a745867/attachment-0002.png>