[Oisf-users] Suricata 4.0.3 with Napatech problems
Steve Castellarin
steve.castellarin at gmail.com
Mon Feb 12 15:20:27 UTC 2018
Peter Manev has been helping off-list with this issue, and as of now my
Suricata 4.0.3 instance looks stable and is running well. Here is a
summary of the changes we've made to the YAML file. Thank you again,
Peter, for all your help with this.
1) Changed the "libhtp" section from:

   default-config:
     request-body-limit: 12mb
     response-body-limit: 12mb

TO:

     request-body-limit: 2mb
     response-body-limit: 2mb
2) Changed "max-pending-packets: 10000" to "max-pending-packets: 65000"
3) Changed "default-packet-size: 9018" to "default-packet-size: 1572"
4) Changed the "flow" section from:

   managers: 10
   #recyclers: 1 # default to one flow recycler thread

TO:

   managers: 2
   recyclers: 2 # default to one flow recycler thread
5) Changed "flow-timeouts" section from:
default:
new: 3
established: 300
closed: 0
emergency-new: 10
emergency-established: 10
emergency-closed: 0
tcp:
new: 6
established: 100
closed: 12
emergency-new: 1
emergency-established: 5
emergency-closed: 2
udp:
new: 3
established: 30
emergency-new: 3
emergency-established: 10
icmp:
new: 3
established: 30
emergency-new: 1
emergency-established: 10
TO:
default:
new: 2
established: 30
closed: 0
bypassed: 10
emergency-new: 1
emergency-established: 10
emergency-closed: 0
emergency-bypassed: 5
tcp:
new: 2
established: 60
closed: 2
bypassed: 20
emergency-new: 1
emergency-established: 10
emergency-closed: 0
emergency-bypassed: 5
udp:
new: 2
established: 15
bypassed: 10
emergency-new: 1
emergency-established: 10
emergency-bypassed: 5
icmp:
new: 3
established: 30
bypassed: 10
emergency-new: 1
emergency-established: 10
emergency-bypassed: 5
6) Changed "stream" section from:
memcap: 12gb
checksum-validation: no
prealloc-session: 100000
inline: no
bypass: yes
reassembly:
memcap: 20gb
depth: 12mb
toserver-chunk-size: 2560
toclient-chunk-size: 2560
randomize-chunk-size: yes
chunk-prealloc: 303360
TO:
memcap: 12gb
checksum-validation: no
prealloc-session: 100000
inline: auto
reassembly:
memcap: 20gb
depth: 2mb
toserver-chunk-size: 2560
toclient-chunk-size: 2560
randomize-chunk-size: yes
#randomize-chunk-range: 10
#raw: yes
segment-prealloc: 40000
7) Changed "detect" section from:
profile: high
custom-values:
toclient-sp-groups: 200
toclient-dp-groups: 300
toserver-src-groups: 200
toserver-dst-groups: 400
toserver-sp-groups: 200
toserver-dp-groups: 250
sgh-mpm-context: auto
inspection-recursion-limit: 3000
prefilter:
default: mpm
TO:
profile: high
custom-values:
toclient-groups: 3
toserver-groups: 25
sgh-mpm-context: auto
inspection-recursion-limit: 3000
prefilter:
default: auto
grouping:
tcp-whitelist: 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
udp-whitelist: 53, 135, 5060
profiling:
grouping:
dump-to-disk: false
include-rules: false # very verbose
include-mpm-stats: false
8) Changed "mpm-algo" from 'ac-ks' to 'auto'
9) Changed "spm-algo" from 'bm' to 'auto'
On Wed, Jan 31, 2018 at 2:25 AM, Peter Manev <petermanev at gmail.com> wrote:
> On Tue, Jan 30, 2018 at 10:07 PM, Steve Castellarin
> <steve.castellarin at gmail.com> wrote:
> > Oh sorry. In one instance it took 20-25 minutes. Another took an
> > hour. In both cases the bandwidth utilization was under 1Gbps.
> >
>
> In this case I would suggest trying to narrow it down, if possible, to
> a rule file/rule (and see if that is actually the real cause).
> So maybe take the config that took 1 hr and start from there.
>
>
> > On Tue, Jan 30, 2018 at 4:06 PM, Peter Manev <petermanev at gmail.com>
> > wrote:
> >>
> >> On Tue, Jan 30, 2018 at 9:46 PM, Steve Castellarin
> >> <steve.castellarin at gmail.com> wrote:
> >> > It will stay 100% for minutes, etc - until I kill Suricata. The
> >> > same goes with the associated host buffer - it will continually
> >> > drop packets. If I do not stop Suricata, eventually a second
> >> > CPU/host buffer pair will hit that 100% mark, and so on. I've had
> >> > instances where I've let it go to 8 or 9 CPU/buffers at 100% before
> >> > I killed it - hoping that the original CPU(s) would recover but
> >> > they don't.
> >> >
> >>
> >> I meant something else.
> >> In previous runs you mentioned that one or more buffers start hitting
> >> 100% right after 15 min.
> >> In the two previous test runs - that you tried with 1/2 the ruleset -
> >> how long did it take before you started seeing any buffer hitting 100%?
> >>
> >> > On Tue, Jan 30, 2018 at 3:34 PM, Peter Manev <petermanev at gmail.com>
> >> > wrote:
> >> >>
> >> >> On Tue, Jan 30, 2018 at 8:49 PM, Steve Castellarin
> >> >> <steve.castellarin at gmail.com> wrote:
> >> >> > Hey Peter,
> >> >> >
> >> >> > Unfortunately I continue to have the same issues with a buffer
> >> >> > overflowing and a CPU staying at 100%, repeating over multiple
> >> >> > buffers and CPUs until I kill the Suricata process.
> >> >>
> >> >> For what period of time do you get to 100%?
> >> >>
> >> >> >
> >> >> > On Thu, Jan 25, 2018 at 9:14 AM, Steve Castellarin
> >> >> > <steve.castellarin at gmail.com> wrote:
> >> >> >>
> >> >> >> OK I'll create a separate bug tracker on Redmine.
> >> >> >>
> >> >> >> I was able to run 4.0.3 with a smaller ruleset (13,971 versus
> >> >> >> 29,110) for 90 minutes yesterday, without issue, before I had
> >> >> >> to leave. I'm getting ready to run 4.0.3 again to see how it
> >> >> >> runs and for how long. I'll update with results.
> >> >> >>
> >> >> >> On Thu, Jan 25, 2018 at 9:00 AM, Peter Manev
> >> >> >> <petermanev at gmail.com> wrote:
> >> >> >>>
> >> >> >>> On Wed, Jan 24, 2018 at 6:27 PM, Steve Castellarin
> >> >> >>> <steve.castellarin at gmail.com> wrote:
> >> >> >>> > If a bug/feature report is needed - would that fall into
> >> >> >>> > Bug #2423 that I opened on Redmine last week?
> >> >> >>> >
> >> >> >>>
> >> >> >>> Separate is probably better.
> >> >> >>>
> >> >> >>> > As for splitting the rules, I'll test that out and let you
> >> >> >>> > know what happens.
> >> >> >>> >
> >> >> >>>
> >> >> >>>
> >> >> >>> --
> >> >> >>> Regards,
> >> >> >>> Peter Manev
> >> >> >>
> >> >> >>
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peter Manev
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peter Manev
> >
> >
>
>
>
> --
> Regards,
> Peter Manev
>