[Oisf-users] Hardware specs for monitoring 100GB

Michał Purzyński michalpurzynski1 at gmail.com
Fri Oct 18 22:31:12 UTC 2019


That's actually what we've seen so far - there might be 100Gbit interfaces
but the real-world traffic is much less.

I'd highly (highly) recommend two cards per server if you're going 2 CPU
(and 1 card if there's one CPU) for NUMA affinity; that's critically
important for any kind of performance.
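For what it's worth, the quickest way to check which NUMA node a card sits
on is sysfs. A minimal sketch (the interface name "ens1f0" is made up, and
the helper name is mine, not a standard tool):

```shell
# The kernel exposes which NUMA node a NIC's PCIe slot hangs off, so you
# can verify that each card's worker threads are pinned to cores on the
# same node.
nic_numa_node() {
  # Print the NUMA node for a given interface, or "unknown" when the
  # kernel exposes none (VMs, single-socket boxes, virtual interfaces).
  local f="/sys/class/net/$1/device/numa_node"
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo "unknown"
  fi
}

nic_numa_node ens1f0
```

Cores belonging to that node can then be listed with `lscpu` and used for
the worker CPU set.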

Intel x722-da2 (slightly preferred) or something from the Mellanox
connectx-5 family will do the job.

Let me shamelessly say that a lot of people have had good luck configuring
systems according to a howto that Peter Manev (pevma), Eric, and I wrote
a while ago. A couple of things have changed since, but mostly at the
software layer, and the general direction is still correct.

https://github.com/pevma/SEPTun
https://github.com/pevma/SEPTun-Mark-II/blob/master/SEPTun-Mark-II.rst

I'd say 2 CPU with 1 NIC per CPU should be your basic building block.
There's no overhead once things are configured correctly and the
configuration should be relatively painless.
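To make that concrete, here is a rough sketch of the relevant suricata.yaml
pieces for such a building block. The interface names, core numbers, and
thread counts below are invented for illustration only; the SEPTun guides
above cover the real tuning:

```yaml
# Hypothetical box: ens1f0 attached to NUMA node 0 (cores 0-17),
# ens4f0 attached to node 1 (cores 18-35).
af-packet:
  - interface: ens1f0
    threads: 16
    cluster-id: 98
    cluster-type: cluster_flow
  - interface: ens4f0
    threads: 16
    cluster-id: 99
    cluster-type: cluster_flow

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0, 18 ]            # one housekeeping core per socket
    - worker-cpu-set:
        cpu: [ "1-17", "19-35" ]  # workers kept local to each NIC's node
        mode: "exclusive"
        prio:
          default: "high"
```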

Most likely it's not the performance configuration you will spend the most
time on, but tuning the rule set.

I'd also recommend having some sort of "packet broker" in front of your
cluster that distributes traffic among nodes; it can also be useful for
filtering traffic you do not want to see, for servicing multiple taps, etc.
We use an (ooold) Arista 7150S, but there are many newer models, both from
Arista and from other vendors like Gigamon. Arista tends to be cheaper and
lighter on features.


On Fri, Oct 18, 2019 at 2:40 PM Drew Dixon <dwdixon at umich.edu> wrote:

> > I'd build a cluster instead of a single 100Gbit machine, for reliability
> reasons, unless your space is limited.
>
> That's actually what I'm aiming to do : ) I need to accommodate something
> in the ballpark of 50Gbit+ on average for starters, I'd guess, with room
> for future growth...rules probably shouldn't be anything too crazy, I
> don't think.
>
> I'd like to maybe find a 2x100G or 1x100G NIC that has somewhat low
> administrative overhead that I can hand off to operations folks, so once
> it's running I won't have to deal with tinkering w/ it too much after
> patching the server and updating to new kernel versions.  I've used Myricom
> cards in the past with Suri on smaller ~1-10G links so I'm familiar but I'd
> like to move away from those and I'm not even sure they have anything in
> the 100G market at the moment anyhow.  Are there similar (low admin.
> overhead) NIC vendor options out there now at 100G that folks know of?
>
> What's the administrative overhead when patching/updating etc. when
> running Suri with Intel/Mellanox NIC's? Does Mellanox have a 100G option?
> I believe Intel's is either about to launch or may have already launched?
>
> Best,
>
> -Drew
>
>
>
>
> On Fri, Oct 18, 2019 at 5:13 PM Michał Purzyński <
> michalpurzynski1 at gmail.com> wrote:
>
>> Let's argue for a moment that using dedicated capture cards is not
>> necessary anymore, because your vanilla Linux has all you need, especially
>> with Suricata 5.0 and XDP. How does that sound??
>>
>> I'd build a cluster instead of a single 100Gbit machine, for reliability
>> reasons, unless your space is limited.
>>
>> Intel and Mellanox 40Gbit cards can handle 20-40Gbit/sec on a fairly
>> commodity hardware. It totally depends on your rules, of course.
>>
>> Yes, everyone was expecting that ;)
>>
>>
>>
>> On Fri, Oct 18, 2019 at 8:38 AM Drew Dixon <dwdixon at umich.edu> wrote:
>>
>>> Hi, I wanted to revive this thread as I'm currently exploring the same-
>>> 100G+ w/ Suricata.  I'm specifically interested in NIC recommendations,
>>> here I see a "Napatech NT100E3-1-PTP" is being used which I will look into
>>> a bit, I also saw they offer a compact "NT200A02-SCC-2×100/40" which may be
>>> ideal for my purposes, however I wanted to poll the community a bit-
>>>
>>> Do folks have other 100G NIC recommendations that play very well with
>>> Suricata w/ minimal administrative overhead?  It could maybe even be
>>> something like 2x40G if there are more options presently, but 1x100G would
>>> likely really be best looking forward.
>>>
>>> I did see that Intel is about to (or may have by now) release their 800
>>> series NIC's w/ a 100G option FWIW.  In general I haven't heard much of
>>> anything on the top 100G NIC recommendations w/ Suricata.
>>>
>>> Many thanks in advance-
>>>
>>> Best,
>>>
>>> -Drew
>>>
>>> On Thu, Aug 1, 2019 at 5:28 PM Peter Manev <petermanev at gmail.com> wrote:
>>>
>>>> @Daniel
>>>> What type of traffic is that and what rules are you planning on using?
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On 1 Aug 2019, at 22:19, Nelson, Cooper <cnelson at ucsd.edu> wrote:
>>>>
>>>> Should be fine for ISP traffic.
>>>>
>>>>
>>>>
>>>> We are doing 20Gbit with 48 worker threads on an older AMD Piledriver
>>>> box and it’s around 10-15% loaded with the ‘ondemand’ CPU governor.
>>>>
>>>>
>>>>
>>>> Suricata is primarily I/O bound if you are using the Hyperscan matcher,
>>>> and given that you have a more modern bus and caching subsystem than
>>>> ours, you should be under 50% CPU at peak.  This is my personal sizing
>>>> recommendation to keep packet drops under 1%.
>>>>
>>>>
>>>>
>>>> If you are having performance issues or packet loss, make sure you have
>>>> flow bypass enabled for tcp and tls.
>>>>
>>>>
>>>>
>>>> -Coop
>>>>
>>>>
>>>>
>>>> *From:* Oisf-users <oisf-users-bounces at lists.openinfosecfoundation.org>
>>>> *On Behalf Of *Daniel Wallmeyer
>>>> *Sent:* Thursday, August 1, 2019 1:14 PM
>>>> *To:* 'oisf-users at lists.openinfosecfoundation.org' <
>>>> oisf-users at lists.openinfosecfoundation.org>
>>>> *Subject:* [Oisf-users] Hardware specs for monitoring 100GB
>>>>
>>>>
>>>>
>>>> Hey fellow mobsters,
>>>>
>>>>
>>>>
>>>> Looking to verify that we have spec’d our hardware correctly for
>>>> monitoring 100Gb/s:
>>>>
>>>>
>>>>
>>>> 2 x Intel(R) Xeon(R) Gold 6136 CPU
>>>>
>>>> 256GB of RAM
>>>>
>>>> Napatech NT100E3-1-PTP
>>>>
>>>>
>>>>
>>>> The traffic will be fed via a single network tap.
>>>>
>>>>
>>>>
>>>> Will this be enough hardware to deal with 100Gb/s of traffic?
>>>>
>>>> At the very least it would be great to know if the CPU and RAM are
>>>> enough; we can work with Napatech to get the right card.
>>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Dan
>>>>
>>>>
>>>> _______________________________________________
>>>> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
>>>> Site: http://suricata-ids.org | Support:
>>>> http://suricata-ids.org/support/
>>>> List:
>>>> https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>>>
>>>> Conference: https://suricon.net
>>>> Trainings: https://suricata-ids.org/training/
>>>>
>>>
>>
>>

