[Oisf-users] Improved PF_RING support, please test!

Victor Julien victor at inliniac.net
Wed Mar 9 15:47:29 UTC 2011


Josh, are you setting the pfring.threads option in your suricata.yaml?
It appears you have it either not set or set to 1. Set it like this:

# PF_RING configuration. For use with native PF_RING support
# for more info see http://www.ntop.org/PF_RING.html
pfring:
  # Number of receive threads (>1 will enable experimental flow pinned
  # runmode)
  threads: 8

  # Default interface we will listen on.
  interface: eth0

  # Default clusterid.  PF_RING will load balance packets based on flow.
  # All threads/processes that will participate need to have the same
  # clusterid.
  cluster-id: 99

  # Default PF_RING cluster type. PF_RING can load balance per flow or
  # per hash. This is only supported in versions of PF_RING > 4.1.1.
  cluster-type: cluster_round_robin

If I set it to 8, pf_ring reports 8 rings here...
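
For reference, here's a rough way to confirm the setting took effect. This is
a sketch, not from the thread: the interface name, config path and the exact
PF_RING command-line option are assumptions, so check 'suricata --help' on
your build.

# Start Suricata in PF_RING mode (flag name assumed; verify with --help)
sudo suricata -c /etc/suricata/suricata.yaml --pfring-int=eth0 -D
# One ring should then show up per configured thread:
grep "Total rings" /proc/net/pf_ring/info
# With threads: 8 this should report "Total rings : 8"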

Cheers,
Victor


On 03/09/2011 12:06 AM, jwhite at everisinc.com wrote:
> I've set it to 24, recompiled, and it seems to be running OK (as in it's
> handling 2 Gbps worth of packets), but I'm unable to get the number of
> rings to report as anything but 1. Have I missed something, or is it truly
> not working correctly? I've run it clustered per flow and per packet...
> still no go.
> 
> EV-SVR-006:~$ cat /proc/net/pf_ring/info
> PF_RING Version     : 4.6.0 ($Revision: exported$)
> Ring slots          : 32768
> Slot version        : 12
> Capture TX          : No [RX only]
> IP Defragment       : No
> Transparent mode    : Yes (mode 0)
> Total rings         : 1
> Total plugins       : 0
> 
> - josh
> 
> ---
> 
> Date: 3/8/2011 -- 16:52:58 (uptime: 0d, 00h 03m 21s)
> -------------------------------------------------------------------
> Counter                   | TM Name                   | Value
> -------------------------------------------------------------------
> decoder.pkts              | Decode1                   | 16416387
> decoder.bytes             | Decode1                   | 988040217
> decoder.ipv4              | Decode1                   | 16461043
> decoder.ipv6              | Decode1                   | 700
> decoder.ethernet          | Decode1                   | 16406387
> decoder.raw               | Decode1                   | 0
> decoder.sll               | Decode1                   | 0
> decoder.tcp               | Decode1                   | 16406097
> decoder.udp               | Decode1                   | 1130
> decoder.icmpv4            | Decode1                   | 3
> decoder.icmpv6            | Decode1                   | 0
> decoder.ppp               | Decode1                   | 0
> decoder.pppoe             | Decode1                   | 0
> decoder.gre               | Decode1                   | 0
> decoder.vlan              | Decode1                   | 0
> decoder.avg_pkt_size      | Decode1                   | 60.012753
> decoder.max_pkt_size      | Decode1                   | 382
> defrag.ipv4.fragments     | Decode1                   | 0
> defrag.ipv4.reassembled   | Decode1                   | 0
> defrag.ipv4.timeouts      | Decode1                   | 0
> defrag.ipv6.fragments     | Decode1                   | 0
> defrag.ipv6.reassembled   | Decode1                   | 0
> defrag.ipv6.timeouts      | Decode1                   | 0
> tcp.sessions              | Stream1                   | 844100
> tcp.ssn_memcap_drop       | Stream1                   | 0
> tcp.pseudo                | Stream1                   | 0
> tcp.segment_memcap_drop   | Stream1                   | 0
> tcp.stream_depth_reached  | Stream1                   | 0
> detect.alert              | Detect                    | 24768
> 
> - Josh
> 
> On Tuesday, March 08, 2011 04:25:04 am Victor Julien wrote:
> Thanks Luca, I guess we'll find out soon enough if people run into
> performance issues... I think we'll have results for 16 and 24 soon.
> 
> Cheers,
> Victor
> 
> On 03/08/2011 08:42 AM, Luca Deri wrote:
>> Victor
>> this is just a define you can increase. However, note that clusters were
>> designed to handle a few apps, so if you increase it to a much higher
>> value, we had better review the code and see if it is still efficient
>> enough for your purposes.
> 
>> Regards Luca
> 
>> On Mar 7, 2011, at 9:49 PM, Victor Julien wrote:
> 
>> Thanks for figuring that out Will!
> 
>> Luca, can we have a higher (or no) limit? I keep hearing stories of people
>> with 24 cores :)
> 
>> Cheers,
>> Victor
> 
>> On 03/07/2011 07:28 PM, Will Metcalf wrote:
>>>>>> It seems that the maximum is 8 threads here. Not sure if that can be
>>>>>> set higher if the number of slots in pfring is increased. Anyone
>>>>>> try that?
>>>>>
>>>>> This value is hard-coded in kernel/linux/pf_ring.h:
>>>>>
>>>>> #define CLUSTER_LEN 8
>>>>>
>>>>> Modify it accordingly; maybe Luca would be kind enough to increase the
>>>>> default value?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Will
>>>>>
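
(For anyone re-tracing this: raising that limit means editing the define and
rebuilding the pf_ring kernel module. A minimal sketch follows; the source
tree path, the value 24 and the reload step are assumptions, so adjust them
to your setup.)

cd PF_RING/kernel                          # path to your PF_RING source tree (assumed)
grep -n "CLUSTER_LEN" linux/pf_ring.h      # shows: #define CLUSTER_LEN 8
sed -i 's/#define CLUSTER_LEN.*/#define CLUSTER_LEN 24/' linux/pf_ring.h
make                                       # rebuild the pf_ring kernel module
sudo rmmod pf_ring && sudo insmod ./pf_ring.ko   # reload; add your usual module options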
>>>>>> On Mon, Mar 7, 2011 at 11:47 AM, Victor Julien
>>>>>> <victor at inliniac.net> wrote:
>>>>>>
>>>>>> On 03/07/2011 06:15 PM, Chris Wakelin wrote:
>>>>>>> On 28/02/11 20:23, Victor Julien wrote:
>>>>>>>> Hey guys,
>>>>>>>>
>>>>>>>> I know a couple of you are running PF_RING in a high-speed
>>>>>>>> environment. The attached patch is meant to improve its performance.
>>>>>>>> It adds a new option called "pfring.threads" that controls the
>>>>>>>> number of reader threads the pfring code uses. I've tested
>>>>>>>> (lightly) with 1, 4 and 8, which all worked fine. There are some
>>>>>>>> more improvements, including the removal of one memcpy per
>>>>>>>> packet...
>>>>>>>
>>>>>>> OK, giving it a go with 4 threads ...
>>>>>>
>>>>>> It seems that the maximum is 8 threads here. Not sure if that can be
>>>>>> set higher if the number of slots in pfring is increased. Anyone
>>>>>> try that?
>>>>>>
>>>>>> Cheers,
>>>>>> Victor
>>>>>>
>>>>>>
> 
>> ---
>> Keep looking, don't settle - Steve Jobs
> 



-- 
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------



