Sorry, I forgot to include that.

# ethtool -g eth15
Ring parameters for eth15:
Pre-set maximums:
RX: 8192
RX Mini: 0
RX Jumbo: 1024
TX: 1024
Current hardware settings:
RX: 8192
RX Mini: 0
RX Jumbo: 128
TX: 1024

The output is the same for eth17.

I've changed the RX value from the default to the maximum; the default was small.
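For reference, the change is made with the set-ring counterpart of -g. Assuming this driver accepts it, the command would be something like:

# ethtool -G eth15 rx 8192
# ethtool -G eth17 rx 8192

(ethtool -G / --set-ring takes the same rx/rx-mini/rx-jumbo/tx parameters that -g reports.)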
> Date: Mon, 9 Jun 2014 18:22:32 +0200
> Subject: Re: [Oisf-users] Packet Loss
> From: petermanev@gmail.com
> To: coolyasha@hotmail.com
> CC: oisf-users@lists.openinfosecfoundation.org
>
> On Mon, Jun 9, 2014 at 5:44 PM, Yasha Zislin <coolyasha@hotmail.com> wrote:
> > I've done some additional testing.
> >
> > I ran pfcount with 16 threads and the same parameters Suricata uses.
> > Only one instance showed up in /proc/net/pf_ring, but there were 16
> > threads in the process list (top -H).
> >
> > I've been running it for an hour with 0 packet loss. PF_RING slot usage
> > does not go above 200 (out of 688k slots total).
> >
> > So my packet loss is caused by Suricata; it is not network- or
> > PF_RING-related.
> >
> > Thanks.
>
>
> What is the output of:
>
> ethtool -g eth15
> and
> ethtool -g eth17
>
>
> >
> >> Date: Mon, 9 Jun 2014 16:30:25 +0200
> >> Subject: Re: [Oisf-users] Packet Loss
> >> From: petermanev@gmail.com
> >> To: coolyasha@hotmail.com; oisf-users@lists.openinfosecfoundation.org
> >>
> >> On Mon, Jun 9, 2014 at 4:21 PM, Yasha Zislin <coolyasha@hotmail.com>
> >> wrote:
> >> > So these two interfaces are somewhat related: if traffic spikes on
> >> > one, it will increase on the other.
> >> > In fact, these two SPAN ports see traffic before and after the
> >> > firewall, so the traffic is the same except for the NAT piece; after
> >> > the firewall, IPs change to private addressing.
> >> > I'm not sure whether this scenario can cause issues, BUT my packet
> >> > drop was already occurring when I was just trying to get Suricata to
> >> > run on one interface.
> >> > An important point is that the CPUs don't hit 100% (they did in the
> >> > beginning, until I started to offload everything to RAM).
> >> >
> >> > You are correct about the memory consumption of threads. I've
> >> > increased min_num_slots by running this:
> >> > modprobe pf_ring transparent_mode=0 min_num_slots=400000 enable_tx_capture=0
> >> >
> >> > cat /proc/net/pf_ring/info
> >> > PF_RING Version : 6.0.2 ($Revision: exported$)
> >> > Total rings : 16
> >> >
> >> > Standard (non DNA) Options
> >> > Ring slots : 400000
> >> > Slot version : 15
> >> > Capture TX : No [RX only]
> >> > IP Defragment : No
> >> > Socket Mode : Standard
> >> > Transparent mode : Yes [mode 0]
> >> > Total plugins : 0
> >> > Cluster Fragment Queue : 3852
> >> > Cluster Fragment Discard : 428212
> >> >
> >> > I could have increased it further (it let me do that), but Free Num
> >> > Slots stopped increasing past the 400000 value.
> >> > I did notice that if I set min_num_slots to the default 65k, Free Num
> >> > Slots reaches 0 faster and packet drop begins.
> >> >
> >> > Here is a stat for one of the threads:
> >> > cat /proc/net/pf_ring/6224-eth17.595
> >> > Bound Device(s) : eth17
> >> > Active : 1
> >> > Breed : Non-DNA
> >> > Sampling Rate : 1
> >> > Capture Direction : RX+TX
> >> > Socket Mode : RX+TX
> >> > Appl. Name : Suricata
> >> > IP Defragment : No
> >> > BPF Filtering : Disabled
> >> > # Sw Filt. Rules : 0
> >> > # Hw Filt. Rules : 0
> >> > Poll Pkt Watermark : 128
> >> > Num Poll Calls : 6408432
> >> > Channel Id Mask : 0xFFFFFFFF
> >> > Cluster Id : 99
> >> > Slot Version : 15 [6.0.2]
> >> > Min Num Slots : 688290
> >> > Bucket Len : 1522
> >> > Slot Len : 1560 [bucket+header]
> >> > Tot Memory : 1073741824
> >> > Tot Packets : 902405618
> >> > Tot Pkt Lost : 79757335
> >> > Tot Insert : 822648289
> >> > Tot Read : 822648236
> >> > Insert Offset : 219035272
> >> > Remove Offset : 218997656
> >> > TX: Send Ok : 0
> >> > TX: Send Errors : 0
> >> > Reflect: Fwd Ok : 0
> >> > Reflect: Fwd Errors: 0
> >> > Num Free Slots : 688237
> >> >
> >> >
> >> > For NICs I have a 10 Gig fiber HP NIC (I think with a QLogic chip).
> >> >
> >> > BTW, I had to configure both PF_RING interfaces with the same cluster
> >> > ID. For some reason, setting them to different numbers would not work.
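> >> > For illustration, the relevant pfring section of my suricata.yaml
> >> > would look something like this (the thread count and cluster_flow
> >> > type are assumptions here; the shared cluster-id: 99 is the point):
> >> >
> >> > pfring:
> >> >   - interface: eth15
> >> >     threads: 8
> >> >     cluster-id: 99
> >> >     cluster-type: cluster_flow
> >> >   - interface: eth17
> >> >     threads: 8
> >> >     cluster-id: 99
> >> >     cluster-type: cluster_flow
> >> >
> >> > With both interfaces on the same cluster ID, PF_RING balances flows
> >> > across all the sockets in that one cluster.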
> >> >
> >> > You are correct about the behavior: it runs fine, then Free Num Slots
> >> > for ALL threads drops to 0 and packet loss starts; after some time,
> >> > Free Num Slots goes back to almost 100% available and the packet loss
> >> > stops.
> >> > It feels like Suricata chokes on something and the rings start to
> >> > fill up.
> >> >
> >> > Timeout values are as follows:
> >> >
> >> > flow-timeouts:
> >> >   default:
> >> >     new: 3
> >> >     established: 30
> >> >     closed: 0
> >> >     emergency-new: 10
> >> >     emergency-established: 10
> >> >     emergency-closed: 0
> >> >   tcp:
> >> >     new: 6
> >> >     established: 100
> >> >     closed: 12
> >> >     emergency-new: 1
> >> >     emergency-established: 5
> >> >     emergency-closed: 2
> >> >   udp:
> >> >     new: 3
> >> >     established: 30
> >> >     emergency-new: 3
> >> >     emergency-established: 10
> >> >   icmp:
> >> >     new: 3
> >> >     established: 30
> >> >     emergency-new: 1
> >> >     emergency-established: 10
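> >> > A quick way to double-check what Suricata actually loaded (assuming
> >> > the default config path) is:
> >> >
> >> > suricata --dump-config | grep flow-timeouts
> >> >
> >> > which should print the flow-timeouts values as dotted keys.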
> >> >
> >> > I set them just as one of those "10gig and beyond" articles suggested.
> >> >
> >> > Thank you for your help.
> >> >
> >>
> >> Thank you for the feedback.
> >> Please keep the conversation on the list :)
> >>
> >>
> >> --
> >> Regards,
> >> Peter Manev
>
>
>
> --
> Regards,
> Peter Manev