<div dir="ltr">Hey Peter,<div><br></div><div>Here are some snippets from my suricata.yaml. Let me know if you want more:</div><div>max-pending-packets: 10000<br></div><div>runmode: workers<br></div><div>autofp-scheduler: active-packets<br></div><div>default-packet-size: 9018 # Same as defined in Napatech's NTSYSTEM.INI file<br></div><div><div>defrag:</div><div> hash-size: 65536</div><div> trackers: 65535</div><div> max-frags: 65535</div><div> prealloc: yes</div><div> timeout: 10</div></div><div><div>flow:</div><div> memcap: 1gb</div><div> hash-size: 1048576</div><div> prealloc: 1048576</div><div> prune-flows: 50000</div><div> emergency-recovery: 30</div><div> managers: 10</div><div><br></div><div>stream:</div><div> memcap: 12gb</div><div> checksum-validation: no</div><div> prealloc-session: 200000</div><div> inline: no</div><div> bypass: yes</div><div> reassembly:</div><div> memcap: 24gb</div><div> depth: 1mb</div><div><br></div><div>detect:</div><div> - profile: custom</div><div> - custom-values:</div><div> toclient-src-groups: 200</div><div> toclient-dst-groups: 200</div><div> - sgh-mpm-context: auto</div><div> - inspection-recursion-limit: 3000</div><div><br></div><div>mpm-algo: hs</div><div>spm-algo: hs</div><div><br></div><div>threading:</div><div> set-cpu-affinity: yes</div><div> cpu-affinity:</div><div> - management-cpu-set:</div><div> cpu: [ 1,21 ] # include only these cpus in affinity settings</div><div> mode: "balanced"</div><div> prio:</div><div> default: "low"</div><div> - worker-cpu-set:</div><div> cpu: [ 5,7,9,11,13,15,17,19,23,25,27,29,31,33,35,37,39 ]</div><div> mode: "exclusive"</div><div> # Use explicitely 3 threads and don't compute number by using</div><div> # detect-thread-ratio variable:</div><div> # threads: 3</div><div> prio:</div><div> default: "high"</div><div> detect-thread-ratio: 1.5</div><div><br></div><div>napatech:</div><div> hba: -1</div><div> use-all-streams: yes</div></div><div><br></div><div>From my "ntservice.ini" file:<br><div>HostBufferPollInterval = 100</div><div>HostBufferSegmentSizeRx = default</div><div>HostBufferSegmentTimeOut = 1000</div><div>HostBuffersRx = [0,0,0],[17,2048,1] </div></div><div><br></div><div>My NTPL file is:<br><div>delete = all</div><div>hashmode=hash5tuplesorted</div><div>Setup[NUMANode=1] = StreamId==(0..16)</div><div>assign[streamid=(0..16)]=all</div><div>Deduplication [DynOffset = Layer2And3HeaderSize; Offset = 16] = Port == 0</div></div><div><br></div><div>Here's a snippet from the stats.log - at a point where I had 5 CPUs and 5 host buffers dropping packets.</div><div><div>nt0.pkts | Total | 33234531</div><div>nt0.bytes | Total | 27909846228</div><div>nt0.drop | Total | 14615598</div><div>nt1.pkts | Total | 24602782</div><div>nt1.bytes | Total | 21241406511</div><div>nt2.pkts | Total | 27257184</div><div>nt2.bytes | Total | 24469757866</div><div>nt2.drop | Total | 6348</div><div>nt3.pkts | Total | 24319656</div><div>nt3.bytes | Total | 21574261237</div><div>nt4.pkts | Total | 27663068</div><div>nt4.bytes | Total | 25185504694</div><div>nt5.pkts | Total | 30080524</div><div>nt5.bytes | Total | 26568388252</div><div>nt6.pkts | Total | 22950925</div><div>nt6.bytes | Total | 19749641965</div><div>nt7.pkts | Total | 28862367</div><div>nt7.bytes | Total | 25376339092</div><div>nt8.pkts | Total | 29496528</div><div>nt8.bytes | Total | 24543380786</div><div>nt9.pkts | Total | 26201481</div><div>nt9.bytes | Total | 22933047311</div><div>nt10.pkts | Total | 25542345</div><div>nt10.bytes | Total | 22641947527</div><div>nt11.pkts | 

From my "ntservice.ini" file:
HostBufferPollInterval = 100
HostBufferSegmentSizeRx = default
HostBufferSegmentTimeOut = 1000
HostBuffersRx = [0,0,0],[17,2048,1]

My NTPL file is:
delete = all
hashmode=hash5tuplesorted
Setup[NUMANode=1] = StreamId==(0..16)
assign[streamid=(0..16)]=all
Deduplication [DynOffset = Layer2And3HeaderSize; Offset = 16] = Port == 0

Here's a snippet from the stats.log - at a point where I had 5 CPUs and 5 host buffers dropping packets.
nt0.pkts | Total | 33234531
nt0.bytes | Total | 27909846228
nt0.drop | Total | 14615598
nt1.pkts | Total | 24602782
nt1.bytes | Total | 21241406511
nt2.pkts | Total | 27257184
nt2.bytes | Total | 24469757866
nt2.drop | Total | 6348
nt3.pkts | Total | 24319656
nt3.bytes | Total | 21574261237
nt4.pkts | Total | 27663068
nt4.bytes | Total | 25185504694
nt5.pkts | Total | 30080524
nt5.bytes | Total | 26568388252
nt6.pkts | Total | 22950925
nt6.bytes | Total | 19749641965
nt7.pkts | Total | 28862367
nt7.bytes | Total | 25376339092
nt8.pkts | Total | 29496528
nt8.bytes | Total | 24543380786
nt9.pkts | Total | 26201481
nt9.bytes | Total | 22933047311
nt10.pkts | Total | 25542345
nt10.bytes | Total | 22641947527
nt11.pkts | Total | 25294114
nt11.bytes | Total | 22047976270
nt12.pkts | Total | 27476779
nt12.bytes | Total | 23909136217
nt12.drop | Total | 30820
nt13.pkts | Total | 26040161
nt13.bytes | Total | 22638575761
nt14.pkts | Total | 26332258
nt14.bytes | Total | 20614848384
nt15.pkts | Total | 20398822
nt15.bytes | Total | 17797433848
nt15.drop | Total | 5895511
nt16.pkts | Total | 24904544
nt16.bytes | Total | 22191856727
nt16.drop | Total | 405
decoder.pkts | Total | 450175098
decoder.bytes | Total | 390994516538
decoder.invalid | Total | 44193
decoder.ipv4 | Total | 450515575
decoder.ipv6 | Total | 701
decoder.ethernet | Total | 450175098
decoder.tcp | Total | 396577426
decoder.udp | Total | 41434420
decoder.icmpv4 | Total | 28341
decoder.gre | Total | 32
decoder.vlan | Total | 450175098
decoder.vlan_qinq | Total | 450175098
decoder.teredo | Total | 701
decoder.avg_pkt_size | Total | 868
decoder.max_pkt_size | Total | 1526
flow.tcp | Total | 6112630
flow.udp | Total | 756331
defrag.ipv4.fragments | Total | 1252944
defrag.ipv4.reassembled | Total | 384591
decoder.icmpv4.ipv4_unknown_ver | Total | 26
decoder.tcp.hlen_too_small | Total | 4
decoder.tcp.opt_invalid_len | Total | 17
decoder.vlan.unknown_type | Total | 44146
tcp.sessions | Total | 4824148
tcp.pseudo | Total | 806
tcp.syn | Total | 4899087
tcp.synack | Total | 2350078
tcp.rst | Total | 1273503
tcp.stream_depth_reached | Total | 10285
tcp.reassembly_gap | Total | 1091339
tcp.overlap | Total | 209277
detect.alert | Total | 16
app_layer.flow.http | Total | 399740
app_layer.tx.http | Total | 822952
app_layer.flow.ftp | Total | 1644
app_layer.flow.smtp | Total | 4291
app_layer.tx.smtp | Total | 4791
app_layer.flow.tls | Total | 1376046
app_layer.flow.ssh | Total | 783
app_layer.flow.dns_tcp | Total | 1758
app_layer.tx.dns_tcp | Total | 2032
app_layer.flow.enip | Total | 1005
app_layer.flow.failed_tcp | Total | 114223
app_layer.flow.dcerpc_udp | Total | 20
app_layer.flow.dns_udp | Total | 550775
app_layer.tx.dns_udp | Total | 551434
app_layer.tx.enip | Total | 1005
app_layer.flow.failed_udp | Total | 204531
flow_mgr.closed_pruned | Total | 1663125
flow_mgr.new_pruned | Total | 4209517
flow_mgr.est_pruned | Total | 949111
flow_mgr.bypassed_pruned | Total | 557
flow.spare | Total | 10482693
flow.tcp_reuse | Total | 379
flow_mgr.flows_checked | Total | 9327
flow_mgr.flows_notimeout | Total | 3491
flow_mgr.flows_timeout | Total | 5836
flow_mgr.flows_timeout_inuse | Total | 4874
flow_mgr.flows_removed | Total | 962
flow_mgr.rows_checked | Total | 1048576
flow_mgr.rows_skipped | Total | 1038992
flow_mgr.rows_empty | Total | 599
flow_mgr.rows_maxlen | Total | 25
tcp.reassembly_memuse | Total | 243974776
dns.memuse | Total | 1588284
http.memuse | Total | 78622620
flow.memuse | Total | 382530784
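
For context on the ntX counters above: the way I understand it, each host buffer should map one-to-one onto a Napatech stream and a Suricata worker thread, 17 of each in my setup. A rough sketch of how I think the pieces line up (my own reading of my config, not taken from the docs):

# NTPL:          assign[streamid=(0..16)] = all       -> 17 streams
# ntservice.ini: HostBuffersRx = [0,0,0],[17,2048,1]  -> 17 RX host buffers of 2048 MB on NUMA node 1 (if I read the format right)
# suricata.yaml: worker-cpu-set with 17 CPUs          -> 17 worker threads, one per stream/host buffer (as I understand it)
napatech:
  hba: -1
  use-all-streams: yes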

On Wed, Jan 10, 2018 at 1:17 PM, Peter Manev <petermanev@gmail.com> wrote:
> On Wed, Jan 10, 2018 at 11:08 AM, Steve Castellarin
<<a href="mailto:steve.castellarin@gmail.com">steve.castellarin@gmail.com</a>> wrote:<br>
> All,<br>
><br>
> I've been running Suricata 3.1.1 (with Hyperscan) on an Ubuntu 14.04.5 64bit<br>
> system with an older Napatech driver set for quite a while with no issues.<br>
> The system is running dual E5-2660 v3 @2.60Ghz processors with 128Gb of<br>
> memory. I've gone ahead and upgraded the Napatech drivers to 10.0.4 and<br>
> downloaded/compiled Suricata 4.0.3. I've done the best I can to copy<br>
> configuration settings from the 3.1.1 suricata.yaml to the 4.0.3<br>
> suricata.yaml. I run Suricata by issuing:<br>
> /usr/bin/suricata -c /etc/suricata/suricata.yaml --napatech --runmode<br>
> workers -D<br>
><br>
> I continue to see issues where Suricata will run for a time when I notice<br>
> one of the CPUs hitting 100%, and stay there. Then when running Napatech's<br>
> "profiling" command I'll see one of the host buffers dropping 100% of the<br>
> packets. As time goes along another CPU/host buffer will have the same<br>
> issue, etc, etc.<br>
><br>
> I've been banging my head over this for a couple weeks with no success,<br>
> other than killing the Suricata process then restarting - to only have this<br>
> issue crop up again.<br>
><br>
> One thing I notice, when I issue the "kill `pidof suricata`" Suricata will<br>
> take a while to end gracefully. But, it leaves the PID file behind in<br>
> /var/run.<br>
><br>
> Any ideas on how to attack this, before I have to roll back my upgrade?<br>
><br>
<br>
</span>Can you share some more info on your suricata config and any info in<br>
suricata.log/stats.log?<br>
<br>
> Thanks!!<br>
><br>
> ______________________________<wbr>_________________<br>
> Suricata IDS Users mailing list: <a href="mailto:oisf-users@openinfosecfoundation.org">oisf-users@<wbr>openinfosecfoundation.org</a><br>
> Site: <a href="http://suricata-ids.org" rel="noreferrer" target="_blank">http://suricata-ids.org</a> | Support: <a href="http://suricata-ids.org/support/" rel="noreferrer" target="_blank">http://suricata-ids.org/<wbr>support/</a><br>
> List: <a href="https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" rel="noreferrer" target="_blank">https://lists.<wbr>openinfosecfoundation.org/<wbr>mailman/listinfo/oisf-users</a><br>
><br>
> Conference: <a href="https://suricon.net" rel="noreferrer" target="_blank">https://suricon.net</a><br>
> Trainings: <a href="https://suricata-ids.org/training/" rel="noreferrer" target="_blank">https://suricata-ids.org/<wbr>training/</a><br>
<span class="HOEnZb"><font color="#888888"><br>
<br>
<br>
--<br>
Regards,<br>
Peter Manev<br>
</font></span></blockquote></div><br></div>