[Oisf-users] Hardware specs for monitoring 100GB
Peter Manev
petermanev at gmail.com
Tue Nov 5 08:14:35 UTC 2019
On Mon, Oct 28, 2019 at 5:02 PM Nelson, Cooper <cnelson at ucsd.edu> wrote:
>
> +1 to this design; we are using it successfully here on a 20Gbit AMD64 deployment with ~0.1% packet loss and *everything* turned on: all ETPRO sigs, full JSON logging, and file logging/extraction for all supported protocols.
>
>
>
> The “secret sauce” to high-performance, low-packet-drop Suricata builds is to spec them out so that no single component ever goes over 50% load average for more than about a minute.
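>
> (As a quick sanity check on that rule of thumb, a minimal sketch using standard tools; mpstat comes from the sysstat package, and suricatasc needs the Suricata unix socket enabled:)
>
>     # Per-core utilization, 1-second samples; watch for any core pinned near 100%
>     mpstat -P ALL 1
>
>     # Capture vs. drop counters straight from the running Suricata
>     suricatasc -c "dump-counters" | grep -E "kernel_packets|kernel_drops"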
>
>
>
> So personally, for 100Gbit I would follow the SEPTun guides and use two 40G dual-port Intel NICs (like the X722-DA2, as recommended). One RSS queue per interface should be fine for that configuration (~25Gbit max per port) and should also address the tcp.pkt_on_wrong_thread issue; if you are using multiple RSS queues, make sure to set the hashing to ‘sd’ only for all protocols via ethtool (sketch below). As Michal mentioned, on “real-world” networks, unless you are a Tier 1 ISP or an R1 research network (like us), you are unlikely to actually see 100Gbps.
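>
> (For reference, a minimal sketch of the ‘sd’-only hashing mentioned above; the interface name ens1f0 and the protocol list are placeholders, so adjust for your NIC and traffic:)
>
>     # Show the current receive hashing fields for TCP over IPv4
>     ethtool -n ens1f0 rx-flow-hash tcp4
>
>     # Hash on source and destination IP only ('sd') for each protocol
>     for proto in tcp4 udp4 tcp6 udp6; do
>         ethtool -N ens1f0 rx-flow-hash $proto sd
>     done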
>
>
>
> We are actually looking to get rid of our Arista for our next build and just filter traffic in the kernel with BPF filters.
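>
> (If you go that route, one option is Suricata's per-interface bpf-filter setting in the af-packet section of suricata.yaml, which applies the filter in the kernel before packets ever reach the engine; a minimal sketch, with a placeholder interface name and filter expression:)
>
>     af-packet:
>       - interface: ens1f0
>         # Traffic that does not match the BPF expression is never delivered to Suricata
>         bpf-filter: "not (host 10.0.0.5 and port 873)"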
>
>
>
> -Coop
We have recently experimented with an AFPv2 IPS setup and Trex and were
able to achieve 40Gbps throughput (Intel-based CPU/NIC) (doc reminder
for me).
It is not always trivial, especially at 100Gbps, as the sensor also
becomes a major single point of failure, so there are a lot of caveats
to consider and test (HA, failover, log writing/shipping, etc.).
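
For anyone wanting to reproduce a similar test, the basic shape of an
AF_PACKET IPS configuration in suricata.yaml is roughly the following
(interface names are placeholders, and ring sizes/thread counts still
need tuning along the lines of the SEPTun guides):

    af-packet:
      - interface: ens1f0
        copy-mode: ips       # forward packets (IPS mode)
        copy-iface: ens1f1   # peer interface to copy traffic to
        cluster-id: 99
        cluster-type: cluster_flow
      - interface: ens1f1
        copy-mode: ips
        copy-iface: ens1f0
        cluster-id: 98
        cluster-type: cluster_flow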
>
>
>
> From: Oisf-users <oisf-users-bounces at lists.openinfosecfoundation.org> On Behalf Of Michal Purzynski
> Sent: Friday, October 18, 2019 3:31 PM
> To: Drew Dixon <dwdixon at umich.edu>
> Cc: Daniel Wallmeyer <Daniel.Wallmeyer at cisecurity.org>; oisf-users at lists.openinfosecfoundation.org
> Subject: Re: [Oisf-users] Hardware specs for monitoring 100GB
>
>
>
> That's actually what we've seen so far - there might be 100Gbit interfaces but the real-world traffic is much less.
>
>
>
> I'd highly (highly) recommend two cards per server if you're going 2-CPU (and one card if there's one CPU) for NUMA affinity; that's critically important for any kind of performance.
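>
> (A quick way to verify the card/CPU pairing, assuming an interface named ens1f0:)
>
>     # NUMA node the NIC is attached to (-1 means no locality information exposed)
>     cat /sys/class/net/ens1f0/device/numa_node
>
>     # Which CPU cores belong to each NUMA node
>     numactl --hardware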
>
>
>
> Intel X722-DA2 (slightly preferred) or something from the Mellanox ConnectX-5 family will do the job.
>
>
>
> Let me shamelessly say that a lot of people have had luck configuring systems according to a howto that Peter Manev (pevma), Eric, and I wrote a while ago. A couple of things have changed since, but mostly at the software layer, and the general direction is still correct.
>
>
>
> https://github.com/pevma/SEPTun
>
> https://github.com/pevma/SEPTun-Mark-II/blob/master/SEPTun-Mark-II.rst
>
>
>
> I'd say two CPUs with one NIC per CPU should be your basic building block. There's no overhead once things are configured correctly, and the configuration should be relatively painless.
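>
> (To make that concrete, the pinning lives in the threading section of suricata.yaml; a minimal sketch with placeholder core ranges, assuming worker cores 2-17 sit on the same NUMA node as the NIC:)
>
>     threading:
>       set-cpu-affinity: yes
>       cpu-affinity:
>         - management-cpu-set:
>             cpu: [ 0 ]
>         - worker-cpu-set:
>             # Cores local to the NIC's NUMA node; adjust to your topology
>             cpu: [ "2-17" ]
>             mode: "exclusive"
>             prioritize: "high"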
>
>
>
> Most likely it's not the performance configuration you will spend the most time on, but tuning the rule set.
>
>
>
> I'd also recommend having some sort of "packet broker" in front of your cluster that distributes traffic among nodes; it is also useful for filtering traffic you do not want to see, for servicing multiple taps, etc. We use (old) Arista 7150S switches, but there are many newer models, both in Arista land and from other vendors such as Gigamon. Arista tends to be cheaper and lighter on features.
>
>
>
>
>
> _______________________________________________
> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>
> Conference: https://suricon.net
> Trainings: https://suricata-ids.org/training/
--
Regards,
Peter Manev