<html><body><span style="font-family:Verdana; color:#000000; font-size:10pt;"><div>Yes, this is with CLUSTER_LEN increased to match: 16 and 24 respectively ... </div><div><br></div><div>Any other suggestions?<br></div><div><br></div><div>Josh<br></div>
<blockquote id="replyBlockquote" webmail="1" style="border-left: 2px solid blue; margin-left: 8px; padding-left: 8px; font-size: 10pt; color: black; font-family: verdana;">
<div id="wmQuoteWrapper">
-------- Original Message --------<br>
Subject: Re: [Oisf-users] Improved PF_RING support, please test!<br>
From: Victor Julien <<a href="mailto:victor@inliniac.net">victor@inliniac.net</a>><br>
Date: Wed, March 09, 2011 11:45 am<br>
To: <a href="mailto:josh@securemind.org">josh@securemind.org</a><br>
Cc: <a href="mailto:oisf-users@openinfosecfoundation.org">oisf-users@openinfosecfoundation.org</a><br>
<br>
Is this with the increased cluster len?<br>
<br>
From earlier conversation:<br>
<br>
"<br>
This value is hardset in kernel/linux/pf_ring.h<br>
#define CLUSTER_LEN 8<br>
"<br>
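For reference, a minimal sketch of making that change (the header path is taken from the quote above; the rebuild and reload steps are typical for the PF_RING source tree of that era, but may differ for your setup):

```shell
# Hedged sketch: raise the hard-coded PF_RING cluster size limit,
# then rebuild and reload the kernel module. Run from the top of the
# PF_RING source tree; paths and module options are assumptions.
sed -i 's/#define CLUSTER_LEN[[:space:]]*8/#define CLUSTER_LEN 24/' \
    kernel/linux/pf_ring.h
make -C kernel                  # rebuild the pf_ring kernel module
sudo make -C kernel install     # install the rebuilt module
sudo rmmod pf_ring 2>/dev/null || true
sudo insmod kernel/pf_ring.ko   # reload with the new CLUSTER_LEN
```

After reloading, every Suricata receive thread joining the cluster counts against the new limit.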
<br>
Cheers,<br>
Victor<br>
<br>
<br>
On 03/09/2011 05:40 PM, <a href="mailto:josh@securemind.org">josh@securemind.org</a> wrote:<br>
> Ok, so the patch never put the thread option into the suricata.yaml. I<br>
> reapplied the patch and all was good.<br>
> <br>
> - Ran with 8 threads, everything worked great; CPU utilization seems<br>
> much better at 2Gbps of traffic, and load was less than 7% across all CPUs<br>
> - Ran with 16 threads, failed<br>
> - Ran with 24 threads, failed<br>
> <br>
> --<br>
> [722] 9/3/2011 -- 11:35:09 - (log-droplog.c:182) <Info><br>
> (LogDropLogInitCtx) -- Drop log output initialized, filename: drop.log<br>
> [722] 9/3/2011 -- 11:35:09 - (source-pfring.c:167) <Info><br>
> (PfringLoadConfig) -- Going to use 16 PF_RING receive threads<br>
> [725] 9/3/2011 -- 11:35:09 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring1) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [727] 9/3/2011 -- 11:35:09 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring3) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [728] 9/3/2011 -- 11:35:09 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring4) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [730] 9/3/2011 -- 11:35:09 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring6) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [731] 9/3/2011 -- 11:35:10 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring7) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [726] 9/3/2011 -- 11:35:10 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring2) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [729] 9/3/2011 -- 11:35:10 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring5) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:344) <Info><br>
> (StreamTcpInitConfig) -- stream "max_sessions": 262144<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:356) <Info><br>
> (StreamTcpInitConfig) -- stream "prealloc_sessions": 32768<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:366) <Info><br>
> (StreamTcpInitConfig) -- stream "memcap": 33554432<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:373) <Info><br>
> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:381) <Info><br>
> (StreamTcpInitConfig) -- stream "async_oneside": disabled<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:390) <Info><br>
> (StreamTcpInitConfig) -- stream.reassembly "memcap": 67108864<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:410) <Info><br>
> (StreamTcpInitConfig) -- stream.reassembly "depth": 1048576<br>
> [722] 9/3/2011 -- 11:35:10 - (stream-tcp.c:421) <Info><br>
> (StreamTcpInitConfig) -- stream."inline": disabled<br>
> [734] 9/3/2011 -- 11:35:10 - (source-pfring.c:309) <Info><br>
> (ReceivePfringThreadInit) -- (RecvPfring1) Using PF_RING v.4.6.0,<br>
> interface eth0, cluster-id 99<br>
> [732] 9/3/2011 -- 11:35:10 - (source-pfring.c:303) <Error><br>
> (ReceivePfringThreadInit) -- [ERRCODE:<br>
> SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -1<br>
> for cluster-id: 99<br>
> [733] 9/3/2011 -- 11:35:10 - (source-pfring.c:303) <Error><br>
> (ReceivePfringThreadInit) -- [ERRCODE:<br>
> SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -1<br>
> for cluster-id: 99<br>
> [722] 9/3/2011 -- 11:35:10 - (tm-threads.c:1475) <Error><br>
> (TmThreadWaitOnThreadInit) -- [ERRCODE: SC_ERR_THREAD_INIT(49)] - thread<br>
> "RecvPfring8" closed on initialization.<br>
> [722] 9/3/2011 -- 11:35:10 - (suricata.c:1245) <Error> (main) --<br>
> [ERRCODE: SC_ERR_INITIALIZATION(45)] - Engine initialization failed,<br>
> aborting...<br>
> <br>
> --<br>
> Josh<br>
> <br>
> -------- Original Message --------<br>
> Subject: Re: [Oisf-users] Improved PF_RING support, please test!<br>
> From: Victor Julien <<a href="mailto:victor@inliniac.net">victor@inliniac.net</a>><br>
> Date: Wed, March 09, 2011 10:47 am<br>
> To: <a href="mailto:oisf-users@openinfosecfoundation.org">oisf-users@openinfosecfoundation.org</a><br>
> <br>
> Josh, are you setting the pfring.threads option in your suricata.yaml?<br>
> It appears you have it either not set or set to 1. Set it like this:<br>
> <br>
> # PF_RING configuration. For use with native PF_RING support;<br>
> # for more info see <a href="http://www.ntop.org/PF_RING.html">http://www.ntop.org/PF_RING.html</a><br>
> pfring:<br>
>   # Number of receive threads (>1 will enable the experimental<br>
>   # flow-pinned runmode)<br>
>   threads: 8<br>
> <br>
>   # Default interface we will listen on.<br>
>   interface: eth0<br>
> <br>
>   # Default clusterid. PF_RING will load balance packets based on flow.<br>
>   # All threads/processes that will participate need to have the same<br>
>   # clusterid.<br>
>   cluster-id: 99<br>
> <br>
>   # Default PF_RING cluster type. PF_RING can load balance per flow<br>
>   # or per hash. This is only supported in versions of PF_RING > 4.1.1.<br>
>   cluster-type: cluster_round_robin<br>
> <br>
> If I set it to 8, pf_ring reports 8 rings...<br>
> <br>
> Cheers,<br>
> Victor<br>
> <br>
> <br>
> On 03/09/2011 12:06 AM, <a href="mailto:jwhite@everisinc.com">jwhite@everisinc.com</a> wrote:<br>
> > I've set it to 24, recompiled, and it seems to be running OK (as in<br>
> > it's handling 2Gbps worth of packets), but I'm unable to get the number<br>
> > of rings to report as anything but 1. Have I missed something, or is it<br>
> > truly not working correctly? I've run it based on flow and on packet...<br>
> > still no go.<br>
> ><br>
> > EV-SVR-006:~$ cat /proc/net/pf_ring/info<br>
> > PF_RING Version : 4.6.0 ($Revision: exported$)<br>
> > Ring slots : 32768<br>
> > Slot version : 12<br>
> > Capture TX : No [RX only]<br>
> > IP Defragment : No<br>
> > Transparent mode : Yes (mode 0)<br>
> > Total rings : 1<br>
> > Total plugins : 0<br>
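One way to cross-check output like the above against the Suricata configuration is to parse both files. A sketch only: the /proc path and "Total rings" field come from the output above, while the suricata.yaml location is an assumption to adjust for your install:

```shell
# Compare PF_RING's reported ring count with the pfring.threads value.
# Assumed config path: /etc/suricata/suricata.yaml.
expected=$(awk '/^[[:space:]]*threads:/ {print $2; exit}' /etc/suricata/suricata.yaml)
actual=$(awk -F: '/Total rings/ {gsub(/[[:space:]]/, "", $2); print $2}' /proc/net/pf_ring/info)
if [ "$actual" = "$expected" ]; then
  echo "OK: $actual rings for $expected threads"
else
  echo "Mismatch: $actual rings reported, $expected threads configured"
fi
```

With one ring per receive thread, a healthy run with threads: 8 should show "Total rings : 8" here.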
> ><br>
> > - josh<br>
> ><br>
> > ---<br>
> ><br>
> > Date: 3/8/2011 -- 16:52:58 (uptime: 0d, 00h 03m 21s)<br>
> > -------------------------------------------------------------------<br>
> > Counter | TM Name | Value<br>
> > -------------------------------------------------------------------<br>
> > decoder.pkts             | Decode1 | 16416387<br>
> > decoder.bytes            | Decode1 | 988040217<br>
> > decoder.ipv4             | Decode1 | 16461043<br>
> > decoder.ipv6             | Decode1 | 700<br>
> > decoder.ethernet         | Decode1 | 16406387<br>
> > decoder.raw              | Decode1 | 0<br>
> > decoder.sll              | Decode1 | 0<br>
> > decoder.tcp              | Decode1 | 16406097<br>
> > decoder.udp              | Decode1 | 1130<br>
> > decoder.icmpv4           | Decode1 | 3<br>
> > decoder.icmpv6           | Decode1 | 0<br>
> > decoder.ppp              | Decode1 | 0<br>
> > decoder.pppoe            | Decode1 | 0<br>
> > decoder.gre              | Decode1 | 0<br>
> > decoder.vlan             | Decode1 | 0<br>
> > decoder.avg_pkt_size     | Decode1 | 60.012753<br>
> > decoder.max_pkt_size     | Decode1 | 382<br>
> > defrag.ipv4.fragments    | Decode1 | 0<br>
> > defrag.ipv4.reassembled  | Decode1 | 0<br>
> > defrag.ipv4.timeouts     | Decode1 | 0<br>
> > defrag.ipv6.fragments    | Decode1 | 0<br>
> > defrag.ipv6.reassembled  | Decode1 | 0<br>
> > defrag.ipv6.timeouts     | Decode1 | 0<br>
> > tcp.sessions             | Stream1 | 844100<br>
> > tcp.ssn_memcap_drop      | Stream1 | 0<br>
> > tcp.pseudo               | Stream1 | 0<br>
> > tcp.segment_memcap_drop  | Stream1 | 0<br>
> > tcp.stream_depth_reached | Stream1 | 0<br>
> > detect.alert             | Detect  | 24768<br>
> ><br>
> > - Josh<br>
> ><br>
> > On Tuesday, March 08, 2011 04:25:04 am Victor Julien wrote:<br>
> > Thanks Luca, I guess we'll find out soon enough if ppl run into perf<br>
> > issues... I think we'll have results for 16 and 24 soon.<br>
> ><br>
> > Cheers,<br>
> > Victor<br>
> ><br>
> > On 03/08/2011 08:42 AM, Luca Deri wrote:<br>
> >> Victor<br>
> >> this is just a define you can increase. However, note that clusters<br>
> >> were designed to handle a few apps, so if you increase the value to a<br>
> >> much higher value, we had better review the code and see if it is<br>
> >> still efficient enough for your purposes.<br>
> ><br>
> >> Regards Luca<br>
> ><br>
> >> On Mar 7, 2011, at 9:49 PM, Victor Julien wrote:<br>
> ><br>
> >> Thanks for figuring that out Will!<br>
> ><br>
> >> Luca, can we have a higher (or no) limit? I keep hearing stories<br>
> >> of ppl with 24 cores :)<br>
> ><br>
> >> Cheers,<br>
> >> Victor<br>
> ><br>
> >> On 03/07/2011 07:28 PM, Will Metcalf wrote:<br>
> >>>>>> It seems that the maximum is 8 threads here. Not sure if that<br>
> >>>>>> can be set higher if the number of slots in pfring is increased.<br>
> >>>>>> Anyone try that?<br>
> >>>>><br>
> >>>>> This value is hardset in kernel/linux/pf_ring.h<br>
> >>>>><br>
> >>>>> #define CLUSTER_LEN 8<br>
> >>>>><br>
> >>>>> Modify accordingly, maybe Luca would be kind enough to increase the<br>
> >>>>> default value?<br>
> >>>>><br>
> >>>>> Regards,<br>
> >>>>><br>
> >>>>> Will<br>
> >>>>><br>
> >>>>>> On Mon, Mar 7, 2011 at 11:47 AM, Victor Julien<br>
> >>>>>> <<a href="mailto:victor@inliniac.net">victor@inliniac.net</a>> wrote:<br>
> >>>>>><br>
> >>>>>> On 03/07/2011 06:15 PM, Chris Wakelin wrote:<br>
> >>>>>>> On 28/02/11 20:23, Victor Julien wrote:<br>
> >>>>>>>> Hey guys,<br>
> >>>>>>>><br>
> >>>>>>>> I know a couple of you are running PF_RING in a high speed<br>
> >>>>>>>> environment. The attached patch means to improve its<br>
> >>>>>>>> performance.<br>
> >>>>>>>> It adds a new option called "pfring.threads" that controls the<br>
> >>>>>>>> number of reader threads the pfring code uses. I've tested<br>
> >>>>>>>> (lightly) with 1, 4 and 8 which all worked fine. There are some<br>
> >>>>>>>> more improvements, including the removal of one memcpy per<br>
> >>>>>>>> packet...<br>
> >>>>>>><br>
> >>>>>>> OK, giving it a go with 4 threads ...<br>
> >>>>>><br>
> >>>>>> It seems that the maximum is 8 threads here. Not sure if that<br>
> >>>>>> can be set higher if the number of slots in pfring is increased.<br>
> >>>>>> Anyone try that?<br>
> >>>>>><br>
> >>>>>> Cheers,<br>
> >>>>>> Victor<br>
> >>>>>><br>
> >>>>>><br>
> >>>>>> _______________________________________________<br>
> >>>>>> Oisf-users mailing list<br>
> >>>>>> <a href="mailto:Oisf-users@openinfosecfoundation.org">Oisf-users@openinfosecfoundation.org</a><br>
> >>>>>> <a href="http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users">http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
> ><br>
> >> ---<br>
> >> Keep looking, don't settle - Steve Jobs<br>
> ><br>
> <br>
> <br>
> <br>
> <br>
<br>
<br>
-- <br>
---------------------------------------------<br>
Victor Julien<br>
<a href="http://www.inliniac.net">http://www.inliniac.net</a>/<br>
PGP: <a href="http://www.inliniac.net/victorjulien.asc">http://www.inliniac.net/victorjulien.asc</a><br>
---------------------------------------------<br>
<br>
</div>
</blockquote></span></body></html>