On Tue, Nov 5, 2019 at 7:18 PM Nelson, Cooper <cnelson@ucsd.edu> wrote:

> Indeed, we are running into associated IO and licensing bottlenecks with
> the torrent of metadata that is produced. I had to write an asynchronous
> spooler to copy stored files from a tmpfs partition to long-term storage,
> for example. Our JSON logging is to a tmpfs partition as well.
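
That spooler approach makes sense - draining the tmpfs out-of-band keeps
storage latency out of Suricata's write path. For anyone else fighting the
same IO pattern, the idea is roughly the following (a minimal sketch, not
Cooper's actual tool; paths and timings are made up):

    #!/bin/sh
    # Sweep closed files from the tmpfs spool into long-term storage,
    # outside of Suricata's own write path.
    while true; do
        # only move files idle for more than a minute, so we do not
        # grab anything Suricata is still writing to
        find /tmpfs/filestore -type f -mmin +1 -print0 |
            xargs -0 -r -I{} mv -- {} /data/filestore/
        sleep 10
    done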

I think there are a couple of bottleneck spots that can be hit at such
intense traffic volumes: disk speed, write locks, log output in general,
bus speed in some cases, and NUMA cross-talk.

In general, limiting what you need to look at is always a good step, for
example flushing out streaming/video traffic. "perf top" is your friend :)
Also try a "run on empty": see what the performance is without loading any
rules; that pinpoints any non-inspection-related bottlenecks too. A
concrete starting point is sketched below.

I am trying to find a measurable, consistent, repeatable way of easily
figuring out if and when the system bus becomes a bottleneck at very high
speeds. Any suggestions or pointers are welcome :)
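
For the run-on-empty baseline, something like this (interface name and
config path are just examples):

    # -S loads the given rules file exclusively, so pointing it at
    # /dev/null starts the engine with no rules at all
    suricata -c /etc/suricata/suricata.yaml --af-packet=eth0 -S /dev/null

    # then watch where the cycles go while traffic is flowing
    perf top -p $(pidof suricata)

On the bus side, one avenue might be sampling the uncore counters with
Intel's PCM tools (https://github.com/intel/pcm), e.g.:

    pcm-memory 1   # per-socket memory controller bandwidth, 1s interval
    pcm-pcie 1     # per-socket PCIe read/write bandwidth

though I have not turned that into the consistent, repeatable check I am
after.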
> -Coop
>
> -----Original Message-----
> From: Peter Manev <petermanev@gmail.com>
> Sent: Tuesday, November 5, 2019 12:15 AM
> To: Nelson, Cooper <cnelson@ucsd.edu>
> Cc: Michał Purzyński <michalpurzynski1@gmail.com>; Drew Dixon <dwdixon@umich.edu>; Daniel Wallmeyer <Daniel.Wallmeyer@cisecurity.org>; oisf-users@lists.openinfosecfoundation.org
> Subject: Re: [Oisf-users] Hardware specs for monitoring 100GB
>
> We have recently experimented with an AFPv2 IPS setup and Trex and were
> able to achieve 40Gbps throughput (Intel-based CPU/NIC) (doc reminder for
> me). It is not always trivial, especially at 100Gbps, as the sensor
> becomes a major single point of failure as well, so there are a lot of
> caveats to consider and test (HA/failover, log writing/shipping, etc.).
>
> --
> Regards,
> Peter Manev
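
Re the AFPv2 IPS test quoted above: on the Suricata side the core of it is
the inline interface pair in suricata.yaml, roughly like this (interface
names are examples; this is a sketch, not our exact config):

    af-packet:
      - interface: eth0
        copy-mode: ips
        copy-iface: eth1
      - interface: eth1
        copy-mode: ips
        copy-iface: eth0

with Trex driving load from the other side along the lines of:

    # stock Trex HTTP profile; -m (rate multiplier) and -d (duration in
    # seconds) are illustrative, scale them to your target throughput
    ./t-rex-64 -f cap2/http_simple.yaml -m 40000 -d 300

I will try to get the full setup into a proper write-up.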

--
Regards,
Peter Manev