So in that case I/O becomes the bottleneck? I've tried to minimize disk I/O by putting the pcap file on a ramdisk. Not sure whether that makes a difference, though it seems to.<br><br>Gene<br><br><div class="gmail_quote">
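(For anyone wanting to try the same thing: a minimal sketch of the ramdisk approach. On most Linux systems /dev/shm is a tmpfs mounted by default, so no root is needed; the pcap filename here is just a placeholder.)

```shell
# /dev/shm is RAM-backed (tmpfs) on most Linux distros, so a file placed
# there is served from memory rather than disk.
# Create a small stand-in for the capture file (name is hypothetical):
dd if=/dev/zero of=/tmp/capture.pcap bs=1k count=1 2>/dev/null
cp /tmp/capture.pcap /dev/shm/capture.pcap
ls -l /dev/shm/capture.pcap

# Suricata would then read the pcap from RAM instead of disk:
#   suricata -c suricata.yaml -r /dev/shm/capture.pcap
```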
On Mon, Aug 22, 2011 at 2:48 AM, Victor Julien <span dir="ltr"><<a href="mailto:victor@inliniac.net">victor@inliniac.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div><div></div><div class="h5">On 08/15/2011 06:00 PM, Gene Albin wrote:<br>
> Anoop,<br>
> Indeed. With 48 CPUs, in both runmodes and each max-pending-packets<br>
> category, I averaged the following across all of my runs:<br>
><br>
> Runmode: Auto<br>
> MPP 50: 27160 pps avg (stdev 590)<br>
> MPP 500: 29969 pps avg (stdev 1629)<br>
> MPP 5000: 31267 pps avg (stdev 356)<br>
> MPP 50000: 31608 pps avg (stdev 358)<br>
><br>
> Runmode: AutoFP<br>
> MPP 50: 16924 pps avg (stdev 106)<br>
> MPP 500: 56572 pps avg (stdev 405)<br>
> MPP 5000: 86683 pps avg (stdev 1577)<br>
> MPP 50000: 132936 pps avg (stdev 5548)<br>
><br>
><br>
> Rereading my email, I don't think I mentioned the variables I'm<br>
> adjusting. There are three: runmode, detect-thread-ratio (DTR), and<br>
> max-pending-packets. Each run mentioned above is at a different DTR,<br>
> from 0.1 to 1.0, then 1.2, 1.5, 1.7, and 2.0. I was expecting to see<br>
> something along the lines of Eric Leblond's results in his blog post:<br>
> <a href="http://home.regit.org/2011/02/more-about-suricata-multithread-performance/" target="_blank">http://home.regit.org/2011/02/more-about-suricata-multithread-performance/</a><br>
> but it doesn't look like changing the DTR gave me the significant<br>
> performance increase that he reported (most likely due to other<br>
> differences in our .yaml files, e.g. cpu_affinity).<br>
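(For readers following along: a hedged sketch of where those three knobs sit in suricata.yaml. Option placement is as in the 1.x-era default configs; check your own version's file before copying.)

```yaml
# Illustrative suricata.yaml fragment -- values here are just the ones
# from the benchmark above, not recommendations.
max-pending-packets: 50000     # packets allowed in flight between threads

threading:
  set-cpu-affinity: yes        # pin threads to cores (the cpu_affinity knob)
  detect-thread-ratio: 1.5     # detect threads per CPU in autofp mode
```

The runmode itself is chosen per run on the command line, e.g. `suricata -c suricata.yaml -r capture.pcap --runmode autofp`.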
><br>
> Thank you for the clarification on the relationship between MPP and the<br>
> cache. That does clear things up a bit. So you think I should be seeing<br>
> better performance with 48 CPUs than I'm currently getting? Where do you<br>
> think I can make the improvements? My first guess would be cpu_affinity,<br>
> but that's just a guess.<br>
<br>
</div></div>One limitation of the pcap file runmodes is that no matter how many<br>
cores/threads you throw at them, we still have only a single file-reading<br>
thread that has to feed all the rest.<br>
<br>
PF_RING live mode and soon the AF_PACKET fanout live mode allow you to<br>
have many more packet readers. But only for live modes.<br>
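(Editor's note: the AF_PACKET fanout setup mentioned above ended up looking roughly like this in suricata.yaml; the syntax is from later Suricata releases, and the interface name and cluster-id are placeholders.)

```yaml
af-packet:
  - interface: eth0              # placeholder capture interface
    threads: 4                   # several capture threads reading in parallel
    cluster-id: 99               # fanout group id shared by the threads
    cluster-type: cluster_flow   # hash by flow so each flow stays on one thread
    defrag: yes
```

Live capture with this section is started with `suricata -c suricata.yaml --af-packet=eth0`; pcap file replay still goes through the single reader described above.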
<font color="#888888"><br>
--<br>
---------------------------------------------<br>
Victor Julien<br>
<a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
---------------------------------------------<br>
</font><div><div></div><div class="h5"><br>
_______________________________________________<br>
Oisf-users mailing list<br>
<a href="mailto:Oisf-users@openinfosecfoundation.org">Oisf-users@openinfosecfoundation.org</a><br>
<a href="http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Gene Albin<br><a href="mailto:gene.albin@gmail.com" target="_blank">gene.albin@gmail.com</a><br><br>