Victor,

  I added the --runmode=autofp switch, and while the CPU utilization across
all four cores did increase to a range between 8 and 25 percent, the overall
time to complete the pcap run was only marginally better, at around 3:40.
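  For reference, the full invocation looks roughly like this (the config and
pcap paths here are placeholders for my actual ones):

    suricata -c /etc/suricata/suricata.yaml -r /data/pcaps/test.pcap --runmode=autofp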
  I'm looking over the disk stats to try to determine whether I'm
I/O-limited. I'm seeing average rates of 28 MB/sec read and about 250 read
I/Os per second, with minimal writes/sec. I'll check with the sysadmin guys
to see if that's high for this box, but I don't think it is.
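  In case it's useful, I'm pulling those numbers with something along these
lines (assuming the sysstat iostat; the device name is a placeholder for
whatever volume holds the pcaps):

    iostat -dxm 5 sdb

  where -d reports device activity, -x adds extended statistics, and -m
shows throughput in MB/sec, sampled at 5-second intervals.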
  Other than that I'm not sure where the bottleneck could be.
  As a side note, I did try uncommenting the "#runmode: auto" line in the
suricata.yaml file yesterday and found it made no apparent difference.
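  For clarity, that line in my suricata.yaml looked like this once
uncommented (I've since put the comment back):

    runmode: auto

Thanks,
Gene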

On Thu, Jul 14, 2011 at 11:44 PM, Victor Julien <victor@inliniac.net> wrote:
> Gene, can you try adding the option "--runmode=autofp" to your command
> line? It does about the same, but with a different threading
> configuration (runmode).
>
> Cheers,
> Victor
>
> On 07/15/2011 04:24 AM, Gene Albin wrote:
>> Dave,
>>   Thanks for the reply. It's possible that I'm I/O-limited. Quite simply,
>> I drew my conclusion from the fact that when I run the same 6GB pcap file
>> through Suricata via tcpreplay, the CPU utilization rises to between 13
>> and 22 percent per core (4 cores), and the run completes in just over 2
>> minutes. Once complete it drops back down to 0%. Looking at the processes
>> during the run, I notice that Suricata and tcpreplay are both in the 60%
>> range (using top; the process table shows the average across all CPUs, I
>> think). However, when I run Suricata with the -r <filename> option, the
>> CPU utilization on all 4 CPUs barely rises above 1%, which is where it
>> usually sits when I run a live capture on this interface, and the run
>> takes around 4 minutes to complete.
>>
>>   As for the hardware, I'm running this in a VM hosted on an ESX server.
>> The OS is CentOS 5.6, with 4 cores and 4GB RAM. The pcaps are on a 1.5TB
>> drive attached to the server via Fibre Channel (I think). I'm not sure
>> how to measure the latency, but up to this point I haven't had an issue.
>>
>>   For the ruleset I'm using just the open ET ruleset optimized for
>> Suricata: 46 rule files and 11357 rules loaded. My suricata.yaml file is
>> for the most part stock. (Attached for your viewing pleasure.)
>>
>>   So I'm really at a loss as to why the -r option runs slower than
>> tcpreplay --topspeed. The only explanation I see is that -r replays the
>> file at the same speed it was recorded.
>>
>>   Appreciate any insight you can offer...
>>
>> Gene
>>
>> On Thu, Jul 14, 2011 at 6:50 PM, Dave Remien <dave.remien@gmail.com> wrote:
>>
>>> On Thu, Jul 14, 2011 at 7:14 PM, Gene Albin <gene.albin@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>   I'm experimenting with replaying various pcap files in Suricata. It
>>>> appears that the pcap files are replaying at the same speed they were
>>>> recorded. I'd like to be able to replay them faster so that 1) I can
>>>> stress the detection engine, and 2) expedite post-event analysis.
>>>>
>>>>   One way to accomplish this is by using tcpreplay -t, but when running
>>>> on the same machine that takes a lot of cycles away from Suricata and
>>>> sends the recorded pcap traffic onto an interface that already has live
>>>> traffic.
>>>>
>>>>   Is there some other way to replay captured traffic through Suricata
>>>> at an accelerated speed?
>>>>
>>> Hmm - I've done pretty extensive replay of pcaps with Suricata. I have
>>> a 750GB pcap that was recorded over a 9-hour time range and takes about
>>> 3.5 hours to be replayed through Suricata. The alerts generated show the
>>> pcap time (i.e., over the 9-hour range). The machine replaying the pcap
>>> is a 16-core box with a RAID array.
>>>
>>> Is it possible that you're I/O-limited?
>>>
>>> So... I guess I'd ask about your configuration - # of CPUs, disk speeds,
>>> proc types, rule set, suricata.yaml?
>>>
>>> Cheers,
>>>
>>> Dave
>>>
>>>
>>>> --
>>>> Gene Albin
>>>> gene.albin@gmail.com
>>>> gene_albin@bigfoot.com
>>>>
>>>
>>> --
>>> "Of course, someone who knows more about this will correct me if I'm
>>> wrong, and someone who knows less will correct me if I'm right."
>>> David Palmer (palmer@tybalt.caltech.edu)
>>>
>>
>
> --
> ---------------------------------------------
> Victor Julien
> http://www.inliniac.net/
> PGP: http://www.inliniac.net/victorjulien.asc
> ---------------------------------------------

--
Gene Albin
gene.albin@gmail.com
gene_albin@bigfoot.com