[Oisf-users] Suricata on 8 cores, ~70K packets/sec

Victor Julien victor at inliniac.net
Tue Feb 15 15:01:17 EST 2011


On 02/15/2011 11:55 AM, Eric Leblond wrote:
> Hi,
> 
> On Tuesday, February 15, 2011 at 12:44 -0500, Robert Vineyard wrote:
>> On 02/15/2011 12:09 PM, Eric Leblond wrote:
>>> You may have a look at this post on my blog:
>>> 	http://home.regit.org/?p=438
>>> A git version of Suricata is required for the fine-tuning described on
>>> the page, but you can also play with the thread-ratio multiplier. On an
>>> eight-core machine, you could try something lower, like 0.25.
>>
>> After reading your blog post, I'm wondering if perhaps Suricata is running
>> into the same kinds of issues that have plagued the much-delayed
>> multi-threaded Snort 3.0:
>>
>> http://securitysauce.blogspot.com/2009/04/snort-30-beta-3-released.html
>>
>> I'm not sure how much code, if any, Suricata shares with Snort, but I
>> found Marty's analysis here to be very enlightening.
> 
> Multithreading brings some complex issues, and it can be very hard to
> figure out how to deal with them.
> 
> I've continued my investigation of Suricata and have arrived at a simple
> conclusion: there appears to be a ratio issue between reading capacity
> and processing capacity. Two cores/threads seem to be enough to process
> the traffic read by one core/thread (in pcap file mode). With more than
> two cores, a lot of time is spent waiting for data.
> 
> I will try to update my blog post as soon as I have solid evidence.
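
For reference, the thread multiplier Eric mentions corresponds to the
detect thread ratio setting under threading: in suricata.yaml. A minimal
sketch, assuming an eight-core machine; the value is illustrative, and the
exact key spelling has varied between versions (detect_thread_ratio in
older configs, detect-thread-ratio later):

    threading:
      set-cpu-affinity: no
      # number of detection threads = ratio * available cores;
      # 0.25 on an eight-core box starts two detect threads
      detect-thread-ratio: 0.25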

If this is the case, PF_RING is your friend. It allows you to have
multiple reader threads that get packets from the kernel, and it has
several ways of dividing packets over the readers. I'd be interested to
see what happens with a run mode where we'd have cores/2 PF_RING readers,
each with 2 or 3 processing threads.
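
On an eight-core machine, a sketch of what that could look like; the
interface name and cluster id are placeholders, and the exact keys may
differ between Suricata versions:

    # suricata.yaml
    pfring:
      interface: eth0      # capture interface (placeholder)
      threads: 4           # cores/2 reader threads on eight cores
      cluster-id: 99       # arbitrary PF_RING cluster id
      # cluster_flow keeps all packets of a flow on the same reader;
      # cluster_round_robin spreads packets evenly instead
      cluster-type: cluster_flow

started with something like:

    suricata -c suricata.yaml --pfring-int=eth0 \
             --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow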

Cheers,
Victor

-- 
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------


