[Oisf-users] Performance on multiple CPUs

Dave Remien dave.remien at gmail.com
Wed Aug 3 04:33:14 UTC 2011


How much memory is in your new box? You can test how fast the disk can
deliver the 6GB file by dropping the caches (sysctl -w vm.drop_caches=3)
and then timing a raw sequential read:

 time dd if=file of=/dev/null bs=128k
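Putting the two steps together, a minimal sketch of the measurement (PCAP_FILE is a placeholder path, not from the original post; a small stand-in file is created so the script runs anywhere, and the cache-drop line is commented out because it needs root):

```shell
#!/bin/sh
# Sketch: measure sequential read throughput of a file.
# Point PCAP_FILE at your real 6GB capture for a meaningful number.
PCAP_FILE=${PCAP_FILE:-/tmp/ddtest.bin}

# Create a 64MB stand-in file if the target doesn't exist.
[ -f "$PCAP_FILE" ] || dd if=/dev/zero of="$PCAP_FILE" bs=1M count=64 2>/dev/null

# Drop page/dentry/inode caches so dd reads from disk, not RAM (needs root):
# sysctl -w vm.drop_caches=3

START=$(date +%s)
dd if="$PCAP_FILE" of=/dev/null bs=128k 2>/dev/null
END=$(date +%s)

SIZE_MB=$(( $(wc -c < "$PCAP_FILE") / 1048576 ))
ELAPSED=$(( END - START ))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1   # avoid divide-by-zero on tiny files
echo "read ${SIZE_MB} MB in ${ELAPSED} s: $(( SIZE_MB / ELAPSED )) MB/s"
```

If the reported rate is well above what Suricata achieves on the same pcap, the disk isn't your bottleneck.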

(among many other possibilities). I've seen the same thing (Suricata
performance flattening out on boxes with many CPUs). Your I/O should be much
faster than Suricata can run through a pcap; I typically benchmark on boxes
with RAID controllers that can sustain 400-500 MBytes/sec to make sure that
I'm not benchmarking the disk. I also use large pcaps (750GB has been a
useful size). Suricata has tuning parameters for the threading operation;
I'd recommend reading up on 'em. (Will? Wanna jump in here, since Victor's on
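For the threading tuning mentioned above, the relevant knobs live in the threading: section of suricata.yaml. A sketch of the shape of that section (the values and the exact cpu-set names are illustrative and vary by Suricata version; check the default suricata.yaml shipped with yours):

```yaml
threading:
  set-cpu-affinity: yes      # pin thread groups to specific CPUs
  cpu-affinity:
    - detect-cpu-set:
        cpu: [ "all" ]       # which CPUs the detect threads may run on
        mode: "exclusive"
  detect-thread-ratio: 1.5   # detect threads spawned per CPU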



On Tue, Aug 2, 2011 at 9:20 PM, Gene Albin <gene.albin at gmail.com> wrote:

> So I just installed Suricata on one of our research computers with lots of
> cores available.  I'm looking to see what kind of performance boost I get as
> I bump up the CPUs.  After my first run I was surprised to see that I didn't
> get much of a boost when going from 8 to 32 CPUs.  I was running a 6GB pcap
> file with about 17k rules loaded.  The first run on 8 cores took 190 sec.
> The second run on 32 cores took 170 sec.  Looks like something other than
> CPU is the bottleneck.
> My first guess is Disk IO.  Any recommendations on how I could check/verify
> that guess?
> Gene
> --
> Gene Albin
> gene.albin at gmail.com
> _______________________________________________
> Oisf-users mailing list
> Oisf-users at openinfosecfoundation.org
> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users

"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer at tybalt.caltech.edu)
