[Oisf-users] Fast replay of pcap files

Dave Remien dave.remien at gmail.com
Sat Jul 16 16:03:01 UTC 2011


Gene,

For example, here's what I get on my desktop:

dave:/local/try7> time dd if=/v/110421.pcap of=/dev/null bs=128k
335693+1 records in
335693+1 records out
44000013514 bytes (44 GB) copied, 195.631 s, 225 MB/s

real    3m15.634s
user    0m0.216s
sys     0m24.287s
dave> ll /v/110421.pcap
-rw-r--r-- 1 dave users 44000013514 Jul  7 08:17 /v/110421.pcap

/v in this case is an ext4 FS on a fairly spiffy SSD drive....
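If you want a repeatable version of the same benchmark without a 44 GB pcap handy, something like this works (file name and size here are just illustrative; the cache-drop step is optional and needs root):

```shell
# Create a 256 MB throwaway file, then time a sequential read through dd,
# the same way as above.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 2>/dev/null
sync
# Optional, root only: drop the page cache so the timed dd measures the
# disk rather than RAM.
# echo 3 > /proc/sys/vm/drop_caches
time dd if=/tmp/ddtest.bin of=/dev/null bs=128k
rm -f /tmp/ddtest.bin
```

Without the cache drop, a second run will mostly measure memory bandwidth, which is why the numbers can look suspiciously good.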

dave:/local/try7> lsscsi

[3:0:0:0]    disk    ATA      ATP Velocity SII 1002  /dev/sdc

YMMV 8-)

Cheers,

Dave

On Sat, Jul 16, 2011 at 9:53 AM, Dave Remien <dave.remien at gmail.com> wrote:

> Gene,
>
> Try:
>
>      time dd if=file of=/dev/null bs=128k
>
> That'll tell you how fast your I/O is. If it's less than 70-80 MB a second,
> I doubt that you're exercising Suricata to its capacity.
>
> There are a few small system changes that can improve I/O performance to a
> small degree; try
>
>     find /sys | grep read_ahead_kb
>
> The typical value in these is 128 (KB); you can raise it to 512 or so.
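Concretely, that looks something like this (the device name sdc is just an example; writing the new value needs root):

```shell
# Show the current readahead, in KB, for every block device.
# 128 is the usual default.
grep . /sys/block/*/queue/read_ahead_kb
# As root, raise it on one device for large sequential reads:
# echo 512 > /sys/block/sdc/queue/read_ahead_kb
```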
>
> Here's the thing about doing this in a VM - you're reading from a file,
> which is living in a file (your OS on the VM's file system) under another
> OS. Everything you're reading goes through two FS reads, and it happens as
> each OS "gets there". You'd be far better off testing this on a system
> directly on the hardware. Unless your intention is to prove that everything
> runs more slowly in a VM, of course 8-). Those of us who enjoy this kind of
> stuff do a similar thing by creating a large, empty file, doing a mke2fs on
> it, and loop mounting it as a partition, then copying the file you want to
> read into the loop-mounted partition.  Voila, slow FS access...
> Realistically, you can do this several times. Eventually the file reading
> will slow to a crawl.
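A sketch of that loop-mount trick, for the curious (paths are illustrative, and everything here needs root):

```shell
# Build a filesystem inside a file and loop-mount it, so reads of anything
# copied there go through two filesystems.
dd if=/dev/zero of=/tmp/slowfs.img bs=1M count=0 seek=1024   # 1 GB sparse file
mke2fs -F -q /tmp/slowfs.img
mkdir -p /mnt/slowfs
mount -o loop /tmp/slowfs.img /mnt/slowfs
cp /v/110421.pcap /mnt/slowfs/
```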
>
> Cheers,
>
> Dave
>
> On Fri, Jul 15, 2011 at 11:00 AM, Gene Albin <gene.albin at gmail.com> wrote:
>
>> Victor,
>>   Added the --runmode=autofp switch, and while the CPU load across all
>> four cores did increase to a range between 8 and 25 percent, the overall
>> time to complete the pcap run was only marginally better, at around 3:40.
>>
>>   I'm looking over the disk stats to try to determine if I'm I/O limited.
>>  I'm getting average rates of 28MB/sec read and about 250 I/O reads/sec.
>>  (minimal writes/sec)  I'll check with the sysadmin guys to see if this is
>> high for this box, but I don't think it is.
>>
>>   Other than that I'm not sure where the bottleneck could be.
>>
>>  As a side note I did try uncommenting the "#runmode: auto" line in the
>> suricata.yaml file yesterday and found it made no apparent difference.
>>
>> Thanks,
>> Gene
>>
>>
>> On Thu, Jul 14, 2011 at 11:44 PM, Victor Julien <victor at inliniac.net>wrote:
>>
>>> Gene, can you try adding the option "--runmode=autofp" to your command
>>> line? It does about the same, but with a different threading
>>> configuration (runmode).
>>>
>>> Cheers,
>>> Victor
>>>
>>> On 07/15/2011 04:24 AM, Gene Albin wrote:
>>> > Dave,
>>> >   Thanks for the reply.  It's possible that I'm I/O limited.  Quite
>>> simply I
>>> > drew my conclusion from the fact that when I run the same 6GB pcap file
>>> > through Suricata via tcpreplay the CPU utilization rises up to between
>>> 13
>>> > and 22 percent per core (4 cores).  It completes in just over 2
>>> minutes.
>>> >  Once complete it drops back down to 0%.  Looking at the processes
>>> during
>>> > the run I notice that Suricata and tcpreplay are both in the 60% range
>>> > (using top the process table shows the average across all CPUs, I
>>> think).
>>> >  However, when I run Suricata with the -r <filename> option the CPU
>>> > utilization on all 4 CPUs barely increases above 1, which is where it
>>> > usually sits when I run a live capture on this interface and the run
>>> takes
>>> > around 4 minutes to complete.
>>> >
>>> >   As for the hardware I'm running this in a VM hosted on an ESX server.
>>>  OS
>>> > is CentOS 5.6, 4 cores and 4GB ram.  Pcaps are on a 1.5TB drive
>>> attached to
>>> > the server via fiberchannel (I think).  Not sure how I can measure the
>>> > latency, but up to this point I haven't had an issue.
>>> >
>>> >   For ruleset I'm using just the open ET ruleset optimized for
>>> suricata.
>>> >  That's 46 rule files and 11357 rules loaded.  My suricata.yaml file is
>>> for
>>> > the most part stock.  (attached for your viewing pleasure)
>>> >
>>> >  So I'm really at a loss here why the -r option runs slower than
>>> tcpreplay
>>> > --topspeed.  The only explanation I see is that -r replays the file at
>>> the
>>> > same speed it was recorded.
>>> >
>>> >   Appreciate any insight you could offer...
>>> >
>>> > Gene
>>> >
>>> >
>>> > On Thu, Jul 14, 2011 at 6:50 PM, Dave Remien <dave.remien at gmail.com>
>>> wrote:
>>> >
>>> >>
>>> >>
>>> >> On Thu, Jul 14, 2011 at 7:14 PM, Gene Albin <gene.albin at gmail.com>
>>> wrote:
>>> >>
>>> >>> Hi all,
>>> >>>   I'm experimenting with replaying various pcap files in Suricata.
>>>  It
>>> >>> appears that the pcap files are replaying at the same speed they were
>>> >>> recorded.  I'd like to be able to replay them faster so that 1) I can
>>> stress
>>> >>> the detection engine, and 2) expedite post-event analysis.
>>> >>>
>>> >>>   One way to accomplish this is by using tcpreplay -t, but when
>>> running on
>>> >>> the same machine that takes lots of cycles away from Suricata and
>>> sends the
>>> >>> recorded pcap traffic onto an interface that already has live
>>> traffic.
>>> >>>
>>> >>>   Is there some other way to replay captured traffic through Suricata
>>> at
>>> >>> an accelerated speed?
>>> >>>
>>> >>
>>> >> Hmm - I've done pretty extensive replay of pcaps with Suricata. I have
>>> a
>>> >> 750GB pcap that was recorded over a 9 hour time range, and takes about
>>> 3.5
>>> >> hours to be replayed through Suricata. The alerts generated show the
>>> pcap
>>> >> time (i.e., over the 9 hour range).  The machine replaying the pcap is
>>> a 16
>>> >> core box with a RAID array.
>>> >>
>>> >> Is it possible that you're I/O limited?
>>> >>
>>> >> So... I guess I'd ask about your configuration - # of CPUs, disk
>>> speeds,
>>> >> proc types, rule set, suricata.yaml?
>>> >>
>>> >>  Cheers,
>>> >>
>>> >> Dave
>>> >>
>>> >>
>>> >>> --
>>> >>> Gene Albin
>>> >>> gene.albin at gmail.com
>>> >>> gene_albin at bigfoot.com
>>> >>>
>>> >>> _______________________________________________
>>> >>> Oisf-users mailing list
>>> >>> Oisf-users at openinfosecfoundation.org
>>> >>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >> --
>>> >> "Of course, someone who knows more about this will correct me if I'm
>>> >> wrong, and someone who knows less will correct me if I'm right."
>>> >> David Palmer (palmer at tybalt.caltech.edu)
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>> --
>>> ---------------------------------------------
>>> Victor Julien
>>> http://www.inliniac.net/
>>> PGP: http://www.inliniac.net/victorjulien.asc
>>> ---------------------------------------------
>>>
>>>
>>
>>
>>
>> --
>> Gene Albin
>> gene.albin at gmail.com
>> gene_albin at bigfoot.com
>>
>>
>>
>
>
> --
> "Of course, someone who knows more about this will correct me if I'm
> wrong, and someone who knows less will correct me if I'm right."
> David Palmer (palmer at tybalt.caltech.edu)
>
>


-- 
"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer at tybalt.caltech.edu)