[Oisf-users] Packet Loss
Yasha Zislin
coolyasha at hotmail.com
Mon Jun 9 16:24:55 UTC 2014
Sorry, I forgot to include that.
# ethtool -g eth15
Ring parameters for eth15:
Pre-set maximums:
RX: 8192
RX Mini: 0
RX Jumbo: 1024
TX: 1024
Current hardware settings:
RX: 8192
RX Mini: 0
RX Jumbo: 128
TX: 1024
The output is the same for eth17.
I've changed the RX values from their defaults to the maximums; the defaults were small.
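
For reference, raising the ring sizes is done with ethtool's -G
(set-ring) option:

# ethtool -G eth15 rx 8192
# ethtool -G eth17 rx 8192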
> Date: Mon, 9 Jun 2014 18:22:32 +0200
> Subject: Re: [Oisf-users] Packet Loss
> From: petermanev at gmail.com
> To: coolyasha at hotmail.com
> CC: oisf-users at lists.openinfosecfoundation.org
>
> On Mon, Jun 9, 2014 at 5:44 PM, Yasha Zislin <coolyasha at hotmail.com> wrote:
> > I've done some additional testing.
> >
> > I ran pfcount with 16 threads, using the same parameters Suricata uses.
> > Only one ring was instantiated under /proc/net/pf_ring, but top -H shows
> > 16 threads in the process.
> >
> > It has been running for an hour with 0 packet loss. PF_RING slot usage
> > does not go above 200 (out of 688k total slots).
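> >
> > The invocation was roughly the following (the exact pfcount flags vary
> > between PF_RING versions, so treat this as a sketch):
> > pfcount -i eth15 -c 99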
> >
> > So my packet loss is caused by Suricata, not by the network or PF_RING.
> >
> > Thanks.
>
>
> What is the output of:
>
> ethtool -g eth15
> and
> ethtool -g eth17
>
>
> >
> >> Date: Mon, 9 Jun 2014 16:30:25 +0200
> >
> >> Subject: Re: [Oisf-users] Packet Loss
> >> From: petermanev at gmail.com
> >> To: coolyasha at hotmail.com; oisf-users at lists.openinfosecfoundation.org
> >
> >>
> >> On Mon, Jun 9, 2014 at 4:21 PM, Yasha Zislin <coolyasha at hotmail.com>
> >> wrote:
> >> > So these two interfaces are related: if traffic spikes on one, it
> >> > increases on the other. In fact, these two span ports see the traffic
> >> > before and after the firewall, so the traffic is the same except for
> >> > the NAT piece; after the firewall, the IPs change to private
> >> > addressing.
> >> > Not sure whether this scenario can cause issues, BUT my packet drop
> >> > was occurring even when I was just trying to get Suricata to run with
> >> > a single interface.
> >> > An important point is that the CPUs don't hit 100% (they did in the
> >> > beginning, until I started offloading everything to RAM).
> >> >
> >> > You are correct about memory consumption of threads. I've increased
> >> > min_num_slots by running this:
> >> > modprobe pf_ring transparent_mode=0 min_num_slots=400000
> >> > enable_tx_capture=0
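> >> >
> >> > To make those settings persist across reboots, the same options can go
> >> > in a modprobe config file, e.g. /etc/modprobe.d/pf_ring.conf:
> >> > options pf_ring transparent_mode=0 min_num_slots=400000 enable_tx_capture=0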
> >> >
> >> > cat /proc/net/pf_ring/info
> >> > PF_RING Version : 6.0.2 ($Revision: exported$)
> >> > Total rings : 16
> >> >
> >> > Standard (non DNA) Options
> >> > Ring slots : 400000
> >> > Slot version : 15
> >> > Capture TX : No [RX only]
> >> > IP Defragment : No
> >> > Socket Mode : Standard
> >> > Transparent mode : Yes [mode 0]
> >> > Total plugins : 0
> >> > Cluster Fragment Queue : 3852
> >> > Cluster Fragment Discard : 428212
> >> >
> >> > I could have increased it further (it let me do that), but Free Num
> >> > Slots stopped increasing past 400000.
> >> > I did notice that if I leave min_num_slots at the default of ~65k,
> >> > Free Num Slots reaches 0 faster and the packet drop begins.
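> >> >
> >> > A quick way to watch the free-slot counters across all rings:
> >> > watch -n 1 'grep "Num Free Slots" /proc/net/pf_ring/*-eth*'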
> >> >
> >> > Here is a stat for one of the threads:
> >> > cat /proc/net/pf_ring/6224-eth17.595
> >> > Bound Device(s) : eth17
> >> > Active : 1
> >> > Breed : Non-DNA
> >> > Sampling Rate : 1
> >> > Capture Direction : RX+TX
> >> > Socket Mode : RX+TX
> >> > Appl. Name : Suricata
> >> > IP Defragment : No
> >> > BPF Filtering : Disabled
> >> > # Sw Filt. Rules : 0
> >> > # Hw Filt. Rules : 0
> >> > Poll Pkt Watermark : 128
> >> > Num Poll Calls : 6408432
> >> > Channel Id Mask : 0xFFFFFFFF
> >> > Cluster Id : 99
> >> > Slot Version : 15 [6.0.2]
> >> > Min Num Slots : 688290
> >> > Bucket Len : 1522
> >> > Slot Len : 1560 [bucket+header]
> >> > Tot Memory : 1073741824
> >> > Tot Packets : 902405618
> >> > Tot Pkt Lost : 79757335
> >> > Tot Insert : 822648289
> >> > Tot Read : 822648236
> >> > Insert Offset : 219035272
> >> > Remove Offset : 218997656
> >> > TX: Send Ok : 0
> >> > TX: Send Errors : 0
> >> > Reflect: Fwd Ok : 0
> >> > Reflect: Fwd Errors: 0
> >> > Num Free Slots : 688237
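> >> >
> >> > (Tot Pkt Lost / Tot Packets = 79757335 / 902405618, i.e. roughly 8.8%
> >> > of packets dropped on this ring.)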
> >> >
> >> >
> >> > For NICs, I have 10 Gb fiber HP NICs (I think with a QLogic chip).
> >> >
> >> > BTW, I had to configure both PF_RING interfaces with the same cluster
> >> > ID. For some reason, setting them to different numbers would not work.
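> >> >
> >> > For reference, the pfring section of the suricata.yaml looks roughly
> >> > like this (thread counts here are illustrative):
> >> >
> >> > pfring:
> >> >   - interface: eth15
> >> >     threads: 8
> >> >     cluster-id: 99
> >> >     cluster-type: cluster_flow
> >> >   - interface: eth17
> >> >     threads: 8
> >> >     cluster-id: 99
> >> >     cluster-type: cluster_flow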
> >> >
> >> > You are correct about the behavior. It runs fine, then Free Num Slots
> >> > for ALL threads drops to 0 and the packet loss starts; after some time
> >> > Free Num Slots goes back to almost 100% available and the packet loss
> >> > stops.
> >> > It feels like it is choking on something and the slots start to fill
> >> > up.
> >> >
> >> > Timeout values are as follows:
> >> > flow-timeouts:
> >> >   default:
> >> >     new: 3
> >> >     established: 30
> >> >     closed: 0
> >> >     emergency-new: 10
> >> >     emergency-established: 10
> >> >     emergency-closed: 0
> >> >   tcp:
> >> >     new: 6
> >> >     established: 100
> >> >     closed: 12
> >> >     emergency-new: 1
> >> >     emergency-established: 5
> >> >     emergency-closed: 2
> >> >   udp:
> >> >     new: 3
> >> >     established: 30
> >> >     emergency-new: 3
> >> >     emergency-established: 10
> >> >   icmp:
> >> >     new: 3
> >> >     established: 30
> >> >     emergency-new: 1
> >> >     emergency-established: 10
> >> >
> >> > I set them just as one of those "10gig and beyond" articles suggested.
> >> >
> >> > Thank you for your help.
> >> >
> >>
> >> Thank you for the feedback.
> >> Please keep the conversation on the list :)
> >>
> >>
> >> --
> >> Regards,
> >> Peter Manev
>
>
>
> --
> Regards,
> Peter Manev