[Oisf-users] Question about cpu-affinity
Peter Manev
petermanev at gmail.com
Mon Mar 5 06:30:30 UTC 2018
On Mon, Mar 5, 2018 at 8:04 AM, Cooper F. Nelson <cnelson at ucsd.edu> wrote:
> On 3/2/2018 3:03 AM, Eric Leblond wrote:
>
> Or maybe allow defining named cpu sets and allow assigning those to
> af-packet interface configs:
>
> - cpu-set:
>     name: af-packet-eth0
>     cpu: [ 0, 2, 4, 6, 8, 10, 12, 14 ]
>     mode: "exclusive"
> - cpu-set:
>     name: af-packet-eth1
>     cpu: [ 1, 3, 5, 7, 9, 11, 13, 15 ]
>     mode: "exclusive"
>
>
>
> af-packet:
>   - interface: eth0
>     cluster-id: 99
>     cpu-set: "af-packet-eth0"
>   - interface: eth1
>     cluster-id: 98
>     cpu-set: "af-packet-eth1"
>
> I like this second proposal better. From what I've seen, a few packet
> capture APIs use the NUMA node in the capture params; maybe we could
> combine both approaches.
>
> I'll vote for this approach as well.
>
> For some context, I've just finished deploying a 64-core AMD Piledriver
> Suricata system with dual 10-gig Intel NICs (ixgbe driver).
>
> I based my build on Peter Manev's SEPTUN guide; however, since AMD doesn't
> support the same caching architecture that Intel does (specifically DCA and
> DDIO), the performance wasn't as expected. Using a single RSS queue simply
> doesn't work: the core is pegged at 100% with significant packet loss.
>
I was just tackling a similar AMD-based system and can confirm the
same observations/findings.
AMD indeed does not seem to have the same caching architecture.
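In case it helps anyone reproducing this: the per-NIC queue count can be
raised with ethtool, roughly as below. This is a sketch only - the
interface names and queue counts are examples, adjust them to your own
layout:

  # show the current channel (queue) configuration
  ethtool -l eth0
  # use 4 combined RX/TX queues per NIC, as in the setup described here
  ethtool -L eth0 combined 4
  ethtool -L eth1 combined 4

The queue IRQs then still need to be pinned to the intended cores, e.g.
with the set_irq_affinity script shipped in the ixgbe driver sources.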
> What I ended up doing was creating a hybrid deployment based on my standard
> HPC server build: 4 RSS queues/cores per NIC/NUMA node, with cluster_flow
> having Suricata distribute flows to the remaining 56 cores in software. The
> reason I wanted to interleave the detect threads was to leverage the AMD
> HyperTransport bus to evenly distribute the load from both NICs across the
> whole system.
Seems like a good approach for that setup - is that with the low
entropy hash key?
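For reference, that would be the repeating 0x6D5A key from SEPTUN. On a
driver/kernel that supports setting the RSS key, it is applied roughly
like this - a sketch, assuming ixgbe's 40-byte key length (other NICs
may expect a different length) and that "equal 4" matches the queue
count configured above:

  ethtool -X eth0 hkey \
    6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A \
    equal 4
  # verify the key and indirection table
  ethtool -x eth0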
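And mapping your hybrid layout onto today's suricata.yaml would look
roughly like this - again just a sketch (thread counts, core ranges and
interface names are assumptions about your box), and exactly the kind of
split the named cpu-set proposal above would express more cleanly:

  af-packet:
    - interface: eth0
      threads: 28
      cluster-id: 99
      cluster-type: cluster_flow
      defrag: yes
      use-mmap: yes
    - interface: eth1
      threads: 28
      cluster-id: 98
      cluster-type: cluster_flow
      defrag: yes
      use-mmap: yes

  threading:
    set-cpu-affinity: yes
    cpu-affinity:
      - management-cpu-set:
          cpu: [ 0 ]
      - worker-cpu-set:
          # the 56 cores left after reserving 4 RSS/IRQ cores per NUMA
          # node; adjust the ranges to the actual NUMA/core numbering
          cpu: [ "4-31", "36-63" ]
          mode: "exclusive"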
>
> --
> Cooper Nelson
> Network Security Analyst
> UCSD ITS Security Team
> cnelson at ucsd.edu x41042
>
--
Regards,
Peter Manev