<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1" http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">Off course!<br>
      Here it is:
      <a class="moz-txt-link-freetext" href="https://dl.dropbox.com/u/9555438/_usr_bin_suricata.0.crash.gz">https://dl.dropbox.com/u/9555438/_usr_bin_suricata.0.crash.gz</a><br>
      <br>
      <br>
      On 12/04/2012 09:49 AM, Peter Manev wrote:<br>
    </div>
    <blockquote cite="mid:CAMhe82+T0pS+kDbh-mYKR46oCPu=mh5bUuwc2BUOaTMSmHSjww@mail.gmail.com" type="cite">Hi,<br>
      <br>
      I think a core dump would be very useful - for the dev folks.<br>
      Do you think this is possible?<br>
      <br>
      <br>
      <br>
      <div class="gmail_quote">On Tue, Dec 4, 2012 at 1:01 PM, Fernando
        Sclavo <span dir="ltr"><<a href="mailto:fsclavo@gmail.com" target="_blank">fsclavo@gmail.com</a>></span>
        wrote:<br>
        <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
          <div text="#000000" bgcolor="#FFFFFF"> Hi, with the
            suggestions you gave me, and with some more reading, I
            almost keep the IDS without drops, but, unfortunatelly
            Suricata crash, always with the same error:<br>
            <br>
            [45249.968058] AFPacketeth58[2096] general protection
            ip:477861 sp:7fbe28cc5bb0 error:0 in suricata[400000+1a5000]<br>
            <br>
            Please let me know how can I help to debug this problem.<br>
            Thanks!
            <div>
              <div class="h5"><br>
                <br>
                <br>
                On 12/01/2012 10:03 AM, Peter Manev wrote:<br>
                <span style="white-space:pre-wrap">> Hi ,<br>
                  ><br>
                  > Martin is very right about the flow-timeouts -
                  very important, not to forget to adjust those.<br>
                  > 300 sec is 5 min...on a busy network .... -<br>
                  > tcp:<br>
                  > established: 3600 - (default)<br>
                  > 1 hr can have some serious impact :)<br>
                  ><br>
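                  [A rough illustration of why the established timeout
                  matters so much - the flow rate here is made up for the
                  example, not measured:<br>
                  tracked flows ~= new-flow rate x established timeout<br>
                  10,000 flows/s x 3600 s ~= 36,000,000 tracked flows<br>
                  10,000 flows/s x 10 s ~= 100,000 tracked flows]<br>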
                  > It is funny you mention the drops... I just
                  had a quick chat with Victor about drops in general
                  a few days ago.<br>
                  > Here are some of our values/results from one of our
                  test boxes (9.5 Gb/s of traffic):<br>
                  ><br>
                  > YAML:<br>
                  > flow-timeouts:<br>
                  ><br>
                  >   default:<br>
                  >     new: 5 #30<br>
                  >     established: 10 # 300<br>
                  >     closed: 0<br>
                  >     emergency-new: 1 #10<br>
                  >     emergency-established: 2 #100<br>
                  >     emergency-closed: 0<br>
                  >   tcp:<br>
                  >     new: 5 #60<br>
                  >     established: 300 # 3600<br>
                  >     closed: 10 #30<br>
                  >     emergency-new: 1 # 10<br>
                  >     emergency-established: 5 # 300<br>
                  >     emergency-closed: 20 #20<br>
                  >   udp:<br>
                  >     new: 5 #30<br>
                  >     established: 5 # 300<br>
                  >     emergency-new: 5 #10<br>
                  >     emergency-established: 5 # 100<br>
                  >   icmp:<br>
                  >     new: 5 #30<br>
                  >     established: 5 # 300<br>
                  >     emergency-new: 5 #10<br>
                  >     emergency-established: 5 # 100<br>
                  ><br>
                  > ......<br>
                  > stream:<br>
                  >   memcap: 16gb<br>
                  >   max-sessions: 20000000<br>
                  >   prealloc-sessions: 10000000<br>
                  >   checksum-validation: no # reject wrong csums<br>
                  >   #checksum-validation: yes # reject wrong csums<br>
                  >   inline: no # no inline mode<br>
                  >   reassembly:<br>
                  >     memcap: 12gb<br>
                  >     #memcap: 8gb<br>
                  >     depth: 12mb # reassemble 1mb into a stream<br>
                  >     toserver-chunk-size: 2560<br>
                  >     toclient-chunk-size: 2560<br>
                  ><br>
                  > # Host table:<br>
                  > #<br>
                  > # Host table is used by tagging and per host
                  thresholding subsystems.<br>
                  > #<br>
                  > host:<br>
                  >   hash-size: 4096<br>
                  >   prealloc: 1000<br>
                  >   memcap: 16777216<br>
                  ><br>
                  > ......<br>
                  > # Defrag settings:<br>
                  ><br>
                  > defrag:<br>
                  >   #trackers: 262144 # number of defragmented flows to follow<br>
                  >   #max-frags: 262144 # number of fragments per-flow<br>
                  >   trackers: 65535<br>
                  >   max-frags: 65535 # number of fragments per-flow<br>
                  >   prealloc: yes<br>
                  >   timeout: 10<br>
                  > <br>
                  ><br>
                  ><br>
                  > All of this is using af_packet with 16 threads, on a
                  16-CPU (Hyper-Threading) box with 32 GB RAM, with some
                  special Intel 10G NIC tuning, Ubuntu 12.04 LTS, running the
                  latest git with ~7K Emerging Threats rules.<br>
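                  For reference, an af-packet capture section along those
                  lines (the interface name and cluster-id below are
                  illustrative, not the exact values from that box) looks
                  roughly like:<br>
                  af-packet:<br>
                    - interface: eth3<br>
                      threads: 16                # one capture thread per core<br>
                      cluster-id: 99             # any id, unique per interface<br>
                      cluster-type: cluster_flow # hash flows across the threads<br>
                      defrag: yes<br>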
                  > Some more info:<br>
                  ><br>
                  ><br>
                  ><br>
                  > pevman@suricata:~$ sudo grep -n "drop"
                  /var/data/regit/log/suricata/stats.log | tail -48<br>
                  > 2504179:capture.kernel_drops | AFPacketeth31 | 0<br>
                  > 2504209:tcp.ssn_memcap_drop | AFPacketeth31 | 0<br>
                  > 2504218:tcp.segment_memcap_drop | AFPacketeth31 |
                  0<br>
                  > 2504224:capture.kernel_drops | AFPacketeth32 | 0<br>
                  > 2504254:tcp.ssn_memcap_drop | AFPacketeth32 | 0<br>
                  > 2504263:tcp.segment_memcap_drop | AFPacketeth32 |
                  0<br>
                  > 2504269:capture.kernel_drops | AFPacketeth33 | 0<br>
                  > 2504299:tcp.ssn_memcap_drop | AFPacketeth33 | 0<br>
                  > 2504308:tcp.segment_memcap_drop | AFPacketeth33 |
                  0<br>
                  > 2504314:capture.kernel_drops | AFPacketeth34 | 0<br>
                  > 2504344:tcp.ssn_memcap_drop | AFPacketeth34 | 0<br>
                  > 2504353:tcp.segment_memcap_drop | AFPacketeth34 |
                  0<br>
                  > 2504359:capture.kernel_drops | AFPacketeth35 | 0<br>
                  > 2504389:tcp.ssn_memcap_drop | AFPacketeth35 | 0<br>
                  > 2504398:tcp.segment_memcap_drop | AFPacketeth35 |
                  0<br>
                  > 2504404:capture.kernel_drops | AFPacketeth36 | 0<br>
                  > 2504434:tcp.ssn_memcap_drop | AFPacketeth36 | 0<br>
                  > 2504443:tcp.segment_memcap_drop | AFPacketeth36 |
                  0<br>
                  > 2504449:capture.kernel_drops | AFPacketeth37 | 0<br>
                  > 2504479:tcp.ssn_memcap_drop | AFPacketeth37 | 0<br>
                  > 2504488:tcp.segment_memcap_drop | AFPacketeth37 |
                  0<br>
                  > 2504494:capture.kernel_drops | AFPacketeth38 | 0<br>
                  > 2504524:tcp.ssn_memcap_drop | AFPacketeth38 | 0<br>
                  > 2504533:tcp.segment_memcap_drop | AFPacketeth38 |
                  0<br>
                  > 2504539:capture.kernel_drops | AFPacketeth39 | 0<br>
                  > 2504569:tcp.ssn_memcap_drop | AFPacketeth39 | 0<br>
                  > 2504578:tcp.segment_memcap_drop | AFPacketeth39 |
                  0<br>
                  > 2504584:capture.kernel_drops | AFPacketeth310 | 0<br>
                  > 2504614:tcp.ssn_memcap_drop | AFPacketeth310 | 0<br>
                  > 2504623:tcp.segment_memcap_drop | AFPacketeth310
                  | 0<br>
                  > 2504629:capture.kernel_drops | AFPacketeth311 | 0<br>
                  > 2504659:tcp.ssn_memcap_drop | AFPacketeth311 | 0<br>
                  > 2504668:tcp.segment_memcap_drop | AFPacketeth311
                  | 0<br>
                  > 2504674:capture.kernel_drops | AFPacketeth312 | 0<br>
                  > 2504704:tcp.ssn_memcap_drop | AFPacketeth312 | 0<br>
                  > 2504713:tcp.segment_memcap_drop | AFPacketeth312
                  | 0<br>
                  > 2504719:capture.kernel_drops | AFPacketeth313 | 0<br>
                  > 2504749:tcp.ssn_memcap_drop | AFPacketeth313 | 0<br>
                  > 2504758:tcp.segment_memcap_drop | AFPacketeth313
                  | 0<br>
                  > 2504764:capture.kernel_drops | AFPacketeth314 | 0<br>
                  > 2504794:tcp.ssn_memcap_drop | AFPacketeth314 | 0<br>
                  > 2504803:tcp.segment_memcap_drop | AFPacketeth314
                  | 0<br>
                  > 2504809:capture.kernel_drops | AFPacketeth315 | 0<br>
                  > 2504839:tcp.ssn_memcap_drop | AFPacketeth315 | 0<br>
                  > 2504848:tcp.segment_memcap_drop | AFPacketeth315
                  | 0<br>
                  > 2504854:capture.kernel_drops | AFPacketeth316 | 0<br>
                  > 2504884:tcp.ssn_memcap_drop | AFPacketeth316 | 0<br>
                  > 2504893:tcp.segment_memcap_drop | AFPacketeth316
                  | 0<br>
                  ><br>
                  > *pevman@suricata:~$ suricata --build-info*<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:560)
                  <Info> (SCPrintBuildInfo) -- This is Suricata
                  version 1.4dev (rev 005f7a2)<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:633)
                  <Info> (SCPrintBuildInfo) -- Features:
                  PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING
                  AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1
                  HAVE_HTP_URI_NORMALIZE_HOOK
                  HAVE_HTP_TX_GET_RESPONSE_HEADERS_RAW HAVE_NSS
                  PROFILING<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:647)
                  <Info> (SCPrintBuildInfo) -- 64-bits,
                  Little-endian architecture<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:649)
                  <Info> (SCPrintBuildInfo) -- GCC version 4.6.3,
                  C version 199901<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:655)
                  <Info> (SCPrintBuildInfo) --
                  __GCC_HAVE_SYNC_COMPARE_AND_SWAP_1<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:658)
                  <Info> (SCPrintBuildInfo) --
                  __GCC_HAVE_SYNC_COMPARE_AND_SWAP_2<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:661)
                  <Info> (SCPrintBuildInfo) --
                  __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:664)
                  <Info> (SCPrintBuildInfo) --
                  __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:667)
                  <Info> (SCPrintBuildInfo) --
                  __GCC_HAVE_SYNC_COMPARE_AND_SWAP_16<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:671)
                  <Info> (SCPrintBuildInfo) -- compiled with
                  -fstack-protector<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:677)
                  <Info> (SCPrintBuildInfo) -- compiled with
                  _FORTIFY_SOURCE=2<br>
                  > [10384] 1/12/2012 -- 14:28:44 - (suricata.c:680)
                  <Info> (SCPrintBuildInfo) -- compiled with
                  libhtp 0.2.11, linked against 0.2.11<br>
                  ><br>
                  > pevman@suricata:~$ sudo grep -n "uptime"
                  /var/data/regit/log/suricata/stats.log | tail -4<br>
                  > 2503442:Date: 12/1/2012 -- 14:27:56 (uptime: 0d,
                  18h 07m 04s)<br>
                  > 2504174:Date: 12/1/2012 -- 14:28:15 (uptime: 0d,
                  18h 07m 23s)<br>
                  > 2504906:Date: 12/1/2012 -- 14:28:34 (uptime: 0d,
                  18h 07m 42s)<br>
                  > 2505638:Date: 12/1/2012 -- 14:28:53 (uptime: 0d,
                  18h 08m 01s)<br>
                  ><br>
                  > pevman@suricata:~$ sudo tcpstat -i eth3<br>
                  > Time:1354365172 n=6106758 avg=984.85
                  stddev=663.77 bps=9622763462.40<br>
                  > Time:1354365177 n=6126927 avg=981.51
                  stddev=663.29 bps=9621826076.80<br>
                  > Time:1354365182 n=6110921 avg=984.19
                  stddev=662.02 bps=9622922160.00<br>
                  > Time:1354365187 n=6126978 avg=981.50
                  stddev=662.38 bps=9621846648.00<br>
                  > Time:1354365192 n=6109322 avg=984.46
                  stddev=661.25 bps=9623061092.80<br>
                  > Time:1354365197 n=6146841 avg=978.24
                  stddev=662.73 bps=9620970840.00<br>
                  > ^CTime:1354365202 n=112243 avg=982.41
                  stddev=663.97 bps=176430308.80<br>
                  ><br>
                  > pevman@suricata:~$ uname -a<br>
                  > Linux suricata 3.2.0-30-generic #48-Ubuntu SMP
                  Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64
                  GNU/Linux<br>
                  > pevman@suricata:~$<br>
                  ><br>
                  ><br>
                  ><br>
                  ><br>
                  ><br>
                  > hope it helps.<br>
                  ><br>
                  > thanks<br>
                  ><br>
                  > On Sat, Dec 1, 2012 at 3:54 AM, Martin Holste
                  <<a href="mailto:mcholste@gmail.com" target="_blank">mcholste@gmail.com</a>
                  <a href="mailto:mcholste@gmail.com" target="_blank"><mailto:mcholste@gmail.com></a>>

                  wrote:<br>
                  ><br>
                  > Adjust your default timeouts much lower so that
                  streams are taken out of the connection pool more
                  quickly.<br>
                  ><br>
                  > This config is aggressive, but I think you'll
                  find it does the trick. If it doesn't work, I'd like
                  to know:<br>
                  ><br>
                  > flow-timeouts:<br>
                  ><br>
                  >   default:<br>
                  >     new: 1 # 30<br>
                  >     established: 10 #300<br>
                  >     closed: 0<br>
                  >     emergency_new: 1 #10<br>
                  >     emergency_established: 1 #100<br>
                  >     emergency_closed: 0<br>
                  >   tcp:<br>
                  >     new: 1 #60<br>
                  >     established: 10 #3600<br>
                  >     closed: 0 #120<br>
                  >     emergency_new: 1 #10<br>
                  >     emergency_established: 5 #1 #300<br>
                  >     emergency_closed: 20<br>
                  >   udp:<br>
                  >     new: 1 #30<br>
                  >     established: 1 #300<br>
                  >     emergency_new: 1 #10<br>
                  >     emergency_established: 1 #100<br>
                  >   icmp:<br>
                  >     new: 1 #30<br>
                  >     established: 1 #300<br>
                  >     emergency_new: 1 #10<br>
                  >     emergency_established: 1 #100<br>
                  ><br>
                  ><br>
                  ><br>
                  ><br>
                  > On Fri, Nov 30, 2012 at 4:15 PM, Dave Remien <<a href="mailto:dave.remien@gmail.com" target="_blank">dave.remien@gmail.com</a>
                  <a href="mailto:dave.remien@gmail.com" target="_blank"><mailto:dave.remien@gmail.com></a>>

                  wrote:<br>
                  ><br>
                  > Fernando,<br>
                  ><br>
                  > If I'm reading your config file right, you're
                  asking for 8.3 million sessions of 512KB each? I think
                  that works out to 4.3TB of RAM; rather more than the
                  64GB memcap.<br>
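                  [Spelling that arithmetic out - the worst case where every
                  session gets reassembled to the full depth:<br>
                  8,388,608 sessions x 512 KB/session ~= 4.3 TB,<br>
                  versus a 64 GB reassembly memcap.]<br>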
                  ><br>
                  > Cheers,<br>
                  ><br>
                  > Dave<br>
                  ><br>
                  ><br>
                  > On Fri, Nov 30, 2012 at 10:24 AM, Fernando Sclavo
                  <<a href="mailto:fsclavo@gmail.com" target="_blank">fsclavo@gmail.com</a>
                  <a href="mailto:fsclavo@gmail.com" target="_blank"><mailto:fsclavo@gmail.com></a>>
                  wrote:<br>
                  ></span><br>
                <blockquote type="cite">Hello all!<br>
                  I'm installing an IDS on our company, monitoring two
                  core switches with<br>
                  a sustained traffic of about 2gbps each. The server is
                  a Dell R715, 32<br>
                  cores, 192Gb RAM with two Intel X520 nics. Suricata
                  version is 1.4b3.<br>
                  The problem we are facing, is with
                  tcp.segment_memcap_drop increasing<br>
                  continuosly once time tcp.reassembly_memuse reaches
                  their max size (64gb!!)<br>
                  The related suricata.yaml stanza is:<br>
                  <br>
                  stream:<br>
                    memcap: 24gb<br>
                    checksum-validation: no   # reject wrong csums<br>
                    inline: no                # auto will use inline mode in
                  IPS mode, yes or no set it statically<br>
                    max-sessions: 8388608<br>
                    prealloc-sessions: 8388608<br>
                    reassembly:<br>
                      memcap: 64gb<br>
                      depth: 512kb            # reassemble 1mb into a stream<br>
                      toserver-chunk-size: 2560<br>
                      toclient-chunk-size: 2560<br>
                  <br>
                  Thanks in advance!<br>
                </blockquote>
                <span style="white-space:pre-wrap">>
                  _______________________________________________<br>
                  > Suricata IDS Users mailing list: <a href="mailto:oisf-users@openinfosecfoundation.org" target="_blank">oisf-users@openinfosecfoundation.org</a>
                  <a href="mailto:oisf-users@openinfosecfoundation.org" target="_blank"><mailto:oisf-users@openinfosecfoundation.org></a><br>
                  > Site: <a href="http://suricata-ids.org" target="_blank">http://suricata-ids.org</a>
                  | Support: <a href="http://suricata-ids.org/support/" target="_blank">http://suricata-ids.org/support/</a><br>
                  > List: <a href="https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
                  > OISF: <a href="http://www.openinfosecfoundation.org/" target="_blank">http://www.openinfosecfoundation.org/</a><br>
                  ><br>
                  ><br>
                  ><br>
                  ><br>
                  > -- <br>
                  > ".... We are such stuff<br>
                  > As dreams are made on; and our little life<br>
                  > Is rounded with a sleep."<br>
                  > -- Shakespeare, The Tempest - Act 4<br>
                  ><br>
                  ><br>
                  ><br>
                  ><br>
                  ><br>
                  ><br>
                  > -- <br>
                  > Regards,<br>
                  > Peter Manev<br>
                  ></span><br>
                <br>
                <br>
              </div>
            </div>
          </div>
        </blockquote>
      </div>
      <br>
      <br clear="all">
      <br>
      -- <br>
      <div>Regards,</div>
      <div>Peter Manev</div>
      <br>
    </blockquote>
    <br>
  </body>
</html>