<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">+1 to this design, we are using it successfully here on a 20Gig AMD64 deployment, ~.1% packet loss with *<b>everything</b>* turned on. All ETPRO sigs, full json
logging and file logging/extraction for all supported protocols.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">The “secret sauce” to high performance/low packet drop suricata builds is to spec it out so no one component ever goes over 50% load average for anything more
than about a minute.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">So personally, for 100Gbit I would follow the SEPTUN guides and use two 40G dual-port Intel NICS (like the x722-da2 as recommended). One RSS queue per interface
should be fine for that configuration (~25 Gbit max per port) and should also address the tcp.pkt_on_wrong_thread issue; if you are using multiple RSS queues make sure to set the hashing to ‘sd’ only for all protocols via ethtool. As Michal mentioned, on
“real word” networks, unless you are a Tier1 ISP or R1 research network (like us) you are unlikely to actually see 100Gbs.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">We are actually looking to get rid of our Arista for our next build and just filter stuff in the kernel w/bpf filters.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">-Coop<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"> Oisf-users <oisf-users-bounces@lists.openinfosecfoundation.org>
<b>On Behalf Of </b>Michal Purzynski<br>
<b>Sent:</b> Friday, October 18, 2019 3:31 PM<br>
<b>To:</b> Drew Dixon <dwdixon@umich.edu><br>
<b>Cc:</b> Daniel Wallmeyer <Daniel.Wallmeyer@cisecurity.org>; oisf-users@lists.openinfosecfoundation.org<br>
<b>Subject:</b> Re: [Oisf-users] Hardware specs for monitoring 100GB<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
That's actually what we've seen so far - there might be 100 Gbit interfaces, but the real-world traffic is much less.
I'd highly (highly) recommend two cards per server if you're going with two CPUs (and one card if there's one CPU), for NUMA affinity - that's critically important for any kind of performance.
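To check which NUMA node a NIC actually sits on, so you can keep its Suricata workers on the same package (the interface name is a placeholder):

    # prints the node number; -1 means the platform exposes no locality info
    cat /sys/class/net/ens1f0/device/numa_node
    # shows which CPUs belong to each node
    lscpu | grep -i numa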
An Intel X722-DA2 (slightly preferred) or something from the Mellanox ConnectX-5 family will do the job.
Let me shamelessly say that a lot of people have had a lot of luck configuring systems according to a howto that Peter Manev (pevma), Eric, and I wrote a while ago. A couple of things have changed since, but mostly at the software layer; the general direction is still correct.
https://github.com/pevma/SEPTun
https://github.com/pevma/SEPTun-Mark-II/blob/master/SEPTun-Mark-II.rst
I'd say two CPUs with one NIC per CPU should be your basic building block. There's no overhead once things are configured correctly, and the configuration should be relatively painless.
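In suricata.yaml that building block comes out as one af-packet entry per NIC plus workers pinned to the matching package, roughly like this (a sketch; interface names, thread counts, and CPU ranges are placeholders that depend entirely on your hardware):

    af-packet:
      - interface: ens1f0          # NIC on NUMA node 0
        threads: 16
        cluster-id: 98
        cluster-type: cluster_flow
      - interface: ens4f0          # NIC on NUMA node 1
        threads: 16
        cluster-id: 97
        cluster-type: cluster_flow

    threading:
      set-cpu-affinity: yes
      cpu-affinity:
        - management-cpu-set:
            cpu: [ 0 ]
        - worker-cpu-set:
            cpu: [ "2-17", "20-35" ]
            mode: "exclusive"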
Most likely, it's not the performance configuration you will spend the most time on, but tuning the rule set.
I'd also recommend having some sort of "packet broker" in front of your cluster; it distributes traffic among the nodes and is also useful for filtering out traffic you do not want to see, servicing multiple taps, etc. We use an Arista (ooold) 7150S, but there are many newer models, both from Arista and from other vendors such as Gigamon. Arista tends to be cheaper and lighter on features.