[Oisf-users] Fast replay of pcap files
Gene Albin
gene.albin at gmail.com
Fri Jul 15 17:00:03 UTC 2011
Victor,
I added the --runmode=autofp switch, and while the CPU utilization across all four
cores did increase to a range between 8 and 25 percent, the overall time to complete
the pcap run was only marginally better, at around 3:40.
I'm looking over the disk stats to try to determine whether I'm I/O limited.
I'm seeing average rates of 28MB/sec read and about 250 I/O reads/sec
(minimal writes/sec). I'll check with the sysadmin guys to see if this is
high for this box, but I don't think it is.
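As a rough sanity check on whether that read rate alone could explain the runtime, here's a back-of-envelope sketch using only the figures above (6GB file, 28MB/sec sustained read):

```shell
# Back-of-envelope check: at a sustained 28 MB/s read rate, how long
# would it take just to read a 6 GB pcap off disk?
size_mb=$((6 * 1024))            # 6 GB pcap, in MB
rate_mb_s=28                     # observed average read rate
secs=$((size_mb / rate_mb_s))    # integer seconds
echo "~${secs}s (~$((secs / 60)) min) to read the file at ${rate_mb_s} MB/s"
```

That works out to roughly 3 minutes 40 seconds of pure read time, which lines up suspiciously well with the observed run time, so an I/O limit does look plausible.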
Other than that I'm not sure where the bottleneck could be.
As a side note, I did try uncommenting the "#runmode: auto" line in the
suricata.yaml file yesterday and found it made no apparent difference.
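For reference, the two ways of selecting the runmode tried above look something like this (the config and pcap paths are hypothetical; adjust to your install):

```shell
# Per-run, on the command line (paths are placeholders):
suricata -c /etc/suricata/suricata.yaml -r /data/capture.pcap --runmode=autofp

# Or persistently, by uncommenting and setting the line in suricata.yaml:
#   runmode: autofp
```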
Thanks,
Gene
On Thu, Jul 14, 2011 at 11:44 PM, Victor Julien <victor at inliniac.net> wrote:
> Gene, can you try adding the option "--runmode=autofp" to your command
> line? It does about the same, but with a different threading
> configuration (runmode).
>
> Cheers,
> Victor
>
> On 07/15/2011 04:24 AM, Gene Albin wrote:
> > Dave,
> > Thanks for the reply. It's possible that I'm I/O limited. Quite simply, I
> > drew my conclusion from the fact that when I run the same 6GB pcap file
> > through Suricata via tcpreplay, the CPU utilization rises to between 13
> > and 22 percent per core (4 cores). It completes in just over 2 minutes.
> > Once complete it drops back down to 0%. Looking at the processes during
> > the run I notice that Suricata and tcpreplay are both in the 60% range
> > (in top, the process table shows the average across all CPUs, I think).
> > However, when I run Suricata with the -r <filename> option, the CPU
> > utilization on all 4 CPUs barely increases above 1, which is where it
> > usually sits when I run a live capture on this interface, and the run takes
> > around 4 minutes to complete.
> >
> > As for the hardware, I'm running this in a VM hosted on an ESX server. OS
> > is CentOS 5.6, 4 cores and 4GB RAM. Pcaps are on a 1.5TB drive attached to
> > the server via Fibre Channel (I think). Not sure how I can measure the
> > latency, but up to this point I haven't had an issue.
> >
> > For the ruleset I'm using just the open ET ruleset optimized for Suricata.
> > That's 46 rule files and 11357 rules loaded. My suricata.yaml file is for
> > the most part stock. (attached for your viewing pleasure)
> >
> > So I'm really at a loss as to why the -r option runs slower than tcpreplay
> > --topspeed. The only explanation I see is that -r replays the file at the
> > same speed it was recorded.
> >
> > Appreciate any insight you could offer...
> >
> > Gene
> >
> >
> > On Thu, Jul 14, 2011 at 6:50 PM, Dave Remien <dave.remien at gmail.com> wrote:
> >
> >>
> >>
> >> On Thu, Jul 14, 2011 at 7:14 PM, Gene Albin <gene.albin at gmail.com> wrote:
> >>
> >>> Hi all,
> >>> I'm experimenting with replaying various pcap files in Suricata. It
> >>> appears that the pcap files are replaying at the same speed they were
> >>> recorded. I'd like to be able to replay them faster so that 1) I can stress
> >>> the detection engine, and 2) expedite post-event analysis.
> >>>
> >>> One way to accomplish this is by using tcpreplay -t, but when running on
> >>> the same machine that takes lots of cycles away from Suricata and sends the
> >>> recorded pcap traffic onto an interface that already has live traffic.
> >>>
> >>> Is there some other way to replay captured traffic through Suricata at
> >>> an accelerated speed?
> >>>
> >>
> >> Hmm - I've done pretty extensive replay of pcaps with Suricata. I have a
> >> 750GB pcap that was recorded over a 9 hour time range, and takes about 3.5
> >> hours to be replayed through Suricata. The alerts generated show the pcap
> >> time (i.e., over the 9 hour range). The machine replaying the pcap is a 16
> >> core box with a RAID array.
> >>
> >> Is it possible that you're I/O limited?
> >>
> >> So... I guess I'd ask about your configuration - # of CPUs, disk speeds,
> >> proc types, rule set, suricata.yaml?
> >>
> >> Cheers,
> >>
> >> Dave
> >>
> >>
> >>> --
> >>> Gene Albin
> >>> gene.albin at gmail.com
> >>> gene_albin at bigfoot.com
> >>>
> >>> _______________________________________________
> >>> Oisf-users mailing list
> >>> Oisf-users at openinfosecfoundation.org
> >>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> >>>
> >>>
> >>
> >>
> >> --
> >> "Of course, someone who knows more about this will correct me if I'm
> >> wrong, and someone who knows less will correct me if I'm right."
> >> David Palmer (palmer at tybalt.caltech.edu)
> >>
> >>
> >
> >
> >
> >
>
>
> --
> ---------------------------------------------
> Victor Julien
> http://www.inliniac.net/
> PGP: http://www.inliniac.net/victorjulien.asc
> ---------------------------------------------
>
>
--
Gene Albin
gene.albin at gmail.com
gene_albin at bigfoot.com