<div dir="ltr">OK, so after some tunning I can handle a few Gbit/sec of traffic - still lots to be done but I'm getting somewhere and don't drop packets most of the time.<div><br></div><div>Need a sanity check on the option I've set.</div>
 # On Intel Core2 and Nehalem CPU's enabling this will degrade performance.

^^ Could you elaborate on that, as in why? I'm curious. Also, is it "up to Nehalem and not later"? Everyone seems to recommend the affinity setting these days.
What do you think about settings like the ones below? I use 16 threads (2 x 8-core Xeons) because HT does not seem to make sense here. HT works by using otherwise idle execution resources, but once Suricata gets its hands on my CPUs there won't be much left to spare ;) Or am I wrong?
 set-cpu-affinity: yes

 cpu-affinity:
   - management-cpu-set:
       cpu: [ "all" ]
       mode: "balanced"
       prio:
         default: "low"
   - receive-cpu-set:
       cpu: [ "all" ]
       mode: "balanced"
   - decode-cpu-set:
       cpu: [ "all" ]
       mode: "balanced"
   - stream-cpu-set:
       cpu: [ "all" ]
       mode: "balanced"
   - detect-cpu-set:
       cpu: [ "all" ]
       mode: "exclusive"
       prio:
         default: "high"
   - verdict-cpu-set:
       cpu: [ "all" ]
       mode: "balanced"
       prio:
         default: "high"
   - reject-cpu-set:
       cpu: [ "all" ]
       mode: "balanced"
       prio:
         default: "low"
   - output-cpu-set:
       cpu: [ "all" ]
       mode: "balanced"
       prio:
         default: "medium"
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class="">On Tue, Apr 1, 2014 at 1:47 AM, Michał Purzyński<br>
<<a href="mailto:michalpurzynski1@gmail.com">michalpurzynski1@gmail.com</a>> wrote:<br>
> > SNF recv pkts: 328287934
> > SNF drop ring full: 0
> >
> > OK. So. The data ring size is shared by all workers, i.e. if I allocate 10GB then I
> > need just 10GB of physical memory. What made me think otherwise are tools
> > like top, htop and free -m: they show num_workers x data_ring_size, a
> > crazy amount of memory I don't have. But because all workers map the same
> > physical memory it does not matter; all I need is enough virtual address
> > space to handle the mapping, and that's it.
> >
> > Sending around 3.5 Gbit/sec now (at peak; it goes down to 2 Gbit/sec) and Myricom
> > says that Suricata takes all the packets. Will debug the Suricata
> > performance later tomorrow, it's 2 AM :-)
> >
>
> You should also try
> - sgh-mpm-context: full
>
> Some info from stats.log would be useful as well for troubleshooting.
<span class=""><font color="#888888"><br>
--<br>
Regards,<br>
Peter Manev<br>
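I'll give sgh-mpm-context: full a go tomorrow as well. Noting it down so I don't lose it: if I'm reading my suricata.yaml right, it lives in the detect-engine section, roughly like this (the profile line is just the neighbouring key, not part of the change):

 detect-engine:
   - profile: medium
   - sgh-mpm-context: full   # default is "auto"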
--
Michał Purzyński