[Oisf-users] Suricata, modern CPU and scheduling. And NUMA.

Cooper F. Nelson cnelson at ucsd.edu
Sat Nov 1 06:14:31 UTC 2014


On 10/31/2014 6:34 PM, Michal Purzynski wrote:

> There are three possible scenarios here:
> 1. Leave HT enabled, don't touch affinity, leave scheduling to Linux
> In this setup Linux sometimes schedules workers on a "virtual" (HT)
> cores. And that is bad, because two workers compete for resources of the
> same physical core. Am I wrong here? I've seen Linux doing that.
> Also, cache coherency sucks here. L2 and L3 to the rescue, a bit. And
> migrating thread between cores should invalidate TLB (partially).

All cores on an HT system are virtual.  The physical cores are not
exposed to the OS.  Treat each sibling pair as you would two physical
cores that share the same cache.  That is the whole point of HT.
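On Linux you can see which logical CPUs share a physical core by reading
sysfs; a quick sketch (standard sysfs paths, but sibling numbering varies
by machine and BIOS):

```shell
# Print the HT sibling list for every logical CPU.
# Two CPUs appearing in the same list share one physical core.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
  echo "$(basename "$c"): siblings $(cat "$c"/topology/thread_siblings_list)"
done
```

If cpu0 and cpu8 report the same sibling list, pinning two workers to
that pair means they compete for one physical core's execution resources.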

> 2. Disable HT, don't touch affinity, leave scheduling to Linux.
> Haven't tried it yet. It should help in theory.

It will not.  Quite the contrary in fact.

> 3. Pin threads to physical cores.
> But, Suricata uses not just 16 threads for workers (in my setup). There
> are different "management/housekeeping" ones as well.

That's what CPU affinity is for.  You pin the decode threads to cores
and let the scheduler take care of the rest.  If your hardware isn't
over-subscribed this shouldn't be an issue.
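In suricata.yaml that setup looks roughly like the sketch below; the core
numbers (0 for management, 1-16 for workers) are illustrative and must be
adapted to your machine's topology:

```yaml
# Sketch of a suricata.yaml threading section, assuming 16 worker
# threads pinned to cores 1-16. Core numbers are illustrative only.
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]          # housekeeping threads stay on core 0
    - worker-cpu-set:
        cpu: [ "1-16" ]     # decode/detect workers pinned here
        mode: "exclusive"   # one worker per core
        prio:
          default: "high"
```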

> Or maybe pin 16 workers to cores and let the rest float as they wish?

That's what works best.  If it doesn't, you either need to reduce
packets-per-second-per-core or lower the number of rules you are running.
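You can check where the pinned and floating threads actually landed with
ps; the PSR column shows the core each thread is currently on (Suricata
thread names like "W#01" vary across versions):

```shell
# Show per-thread core placement: TID, current core (PSR), thread name.
# Filter for Suricata; the worker threads should sit on their pinned cores
# while management threads migrate freely.
ps -eLo tid,psr,comm | grep -i suricata
```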

> _______________________________________________
> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> Training now available: http://suricata-ids.org/training/

-- 
Cooper Nelson
Network Security Analyst
UCSD ACT Security Team
cnelson at ucsd.edu x41042
