<div dir="ltr">The SepTun Mark II we're about to publish should actually behave better on non-IO friendly architectures, like AMD.<div><br></div><div>Speaking personally, this is my private opinion:</div><div><br></div><div>I don't see any deeper thought process about IO optimization on the AMD side, other than increasing the throughput of every interconnect. That's nice, but those aren't even close to being saturated, as we're wasting cycles waiting for cache misses :/</div><div><br></div><div>Intel approached this problem in a much more systematic way.</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Mar 4, 2018 at 10:54 PM, Peter Manev <span dir="ltr"><<a href="mailto:petermanev@gmail.com" target="_blank">petermanev@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Mon, Mar 5, 2018 at 8:48 AM, Cooper F. Nelson <<a href="mailto:cnelson@ucsd.edu">cnelson@ucsd.edu</a>> wrote:<br>
>> On 3/4/2018 10:30 PM, Peter Manev wrote:
>>> I was just tackling a similar AMD-based system and can confirm the
>>> same observations/findings.
>>> AMD does not seem to have the same caching architecture indeed.
>> The "secret ingredient" of the SEPTUN build is the DDIO feature, which
>> allows the Intel NICs to copy packets directly into the L3 cache.
>>
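A quick aside on verifying that: DDIO itself is hard to observe directly, but its effect shows up in the last-level-cache counters. A minimal sketch with perf - the core range 0-3 is just an assumption matching the 4 RSS queues per NIC described below:

    # Watch LLC loads vs. misses on the cores servicing the NIC RSS
    # queues (cores 0-3 here are an assumed example, not a known config).
    perf stat -e LLC-loads,LLC-load-misses -C 0-3 -- sleep 10

With DDIO working, packet data should already be resident in L3 when the worker threads touch it, so the miss ratio stays low under load; on a host without DDIO you would expect it to climb.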
>>>> What I ended up doing was creating a hybrid deployment that used my
>>>> standard HPC server build, 4 RSS queues/cores per NIC/NUMA node, and
>>>> cluster_flow to have suri distribute flows to the remaining 56 cores in
>>>> software. The reason I wanted to interleave the detect threads was to
>>>> leverage the AMD HyperTransport bus to evenly distribute the load from
>>>> both NICs over the whole system.
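For anyone wanting to try the same split: the software side of that distribution lives in the af-packet section of suricata.yaml. A rough sketch under stated assumptions - the interface names eth2/eth3, the cluster ids, and the 28+28 thread split are illustrative, not Cooper's actual config:

    # Sketch: write an af-packet fragment to a scratch file for reference.
    # Interface names and thread counts below are assumptions.
    cat > /tmp/af-packet-fragment.yaml <<'EOF'
    af-packet:
      - interface: eth2            # NIC on NUMA node 0 (assumed name)
        cluster-id: 98
        cluster-type: cluster_flow # kernel hashes each flow to one thread
        defrag: yes
        threads: 28                # half of the 56 software-fed cores
        use-mmap: yes
        ring-size: 200000
      - interface: eth3            # NIC on NUMA node 1 (assumed name)
        cluster-id: 99
        cluster-type: cluster_flow
        defrag: yes
        threads: 28
        use-mmap: yes
        ring-size: 200000
    EOF

cluster_flow pins every packet of a flow to the same worker thread, which is what lets the 4 hardware RSS queues per NIC fan out to the remaining cores without reordering inside a flow.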
>>> Seems like a good approach for that setup - is that using the
>>> low-entropy hash key?
>> Yes - low-entropy hash key, current kernel and the bundled ixgbe driver.
>> In general my build mission statement is to use a low-res timer (100 Hz),
>> virtual hugepages, IRQ coalescing and 4k/2MB blocks to move as much data
>> as possible per CPU 'tick'. This allows better cache coherency per
>> process timeslice.
>>

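For completeness, here is roughly how the NIC side of that looks with ethtool. A sketch assuming the capture interface is named eth2 (hypothetical) - the rx-usecs value is illustrative; only the repeating 6D:5A pattern is the actual low-entropy key trick:

    # 4 RSS queues per NIC, matching the hybrid build described above.
    ethtool -L eth2 combined 4

    # Low-entropy (symmetric) RSS key: the repeating 6D:5A pattern makes
    # both directions of a flow hash to the same queue.
    ethtool -X eth2 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 4

    # Batch packets per interrupt; the 100us value is an assumed example,
    # not Cooper's exact setting.
    ethtool -C eth2 adaptive-rx off rx-usecs 100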
> Ok cool.
> I will report back my findings from the setup I am currently tackling -
> although the difference is that in my case it is with a Mellanox NIC.
<div class="HOEnZb"><div class="h5"><br>
<br>
--<br>
Regards,<br>
Peter Manev<br>