[Oisf-devel] <Error> (ReceivePfring) -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error -1

Peter Manev petermanev at gmail.com
Thu Aug 4 16:21:54 UTC 2011


On Thu, Aug 4, 2011 at 5:54 PM, Will Metcalf <william.metcalf at gmail.com> wrote:

> Yes, or, as Peter mentioned, VLAN headers... 1518 seems like a better
> default imho, taking these two things into account.  Or heck, what about
> 1522 to account for both 802.1Q headers and the FCS?
>
> Regards,
>
> Will
>
> On Thu, Aug 4, 2011 at 10:43 AM, Will Metcalf <william.metcalf at gmail.com>
> wrote:
> > Heh... It looks like there might be an off-by-one error involving the
> > allocated buffer and the macro GET_PKT_DIRECT_MAX_SIZE(), which is used
> > to determine how much data to copy from PF_RING.  If I'm reading the
> > code correctly, a fully loaded frame gets the last byte of its payload
> > cut off.  Until we can come up with a patch/validate further, try
> > setting the default-packet-size: setting in your suricata.yaml to one
> > byte more than the maximum on-wire size of a frame.  For example, MTU +
> > Ethernet header = 1514 bytes, so set it to 1515 bytes; or, if you want
> > to account for the FCS transmitted on the wire, MTU + Ethernet header +
> > FCS = 1518, so set it to 1519 bytes.
> >
> > Additionally, since there is no guarantee that the FCS won't be included
> > in a frame, Suricata should account for this by making the default
> > snaplen 1518 bytes, imho. Thoughts?
> >
> > http://wiki.wireshark.org/Ethernet
> >
> > Regards,
> >
> > Will
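To make Will's point concrete, here is a minimal, self-contained sketch of
that kind of off-by-one. The names and sizes are made up for illustration;
this is not the actual Suricata code path (the real clamp involves
GET_PKT_DIRECT_MAX_SIZE()), just the shape of the bug: a copy clamped to one
byte less than the buffer silently truncates a frame that exactly fills it.

#include <stdio.h>
#include <string.h>

#define BUF_SIZE       1514              /* allocated packet buffer */
/* Hypothetical buggy clamp: one byte short, so a full-size frame
 * loses its last byte on copy. */
#define MAX_COPY_BUGGY (BUF_SIZE - 1)
#define MAX_COPY_FIXED (BUF_SIZE)

static size_t copy_frame(unsigned char *dst, const unsigned char *src,
                         size_t caplen, size_t max)
{
    size_t n = caplen < max ? caplen : max;   /* clamp to buffer size */
    memcpy(dst, src, n);
    return n;
}

int main(void)
{
    unsigned char wire[BUF_SIZE];             /* a fully loaded frame */
    unsigned char buf[BUF_SIZE];
    memset(wire, 0xAB, sizeof(wire));

    size_t got = copy_frame(buf, wire, sizeof(wire), MAX_COPY_BUGGY);
    printf("buggy clamp: copied %zu of %zu bytes\n", got, sizeof(wire));

    got = copy_frame(buf, wire, sizeof(wire), MAX_COPY_FIXED);
    printf("fixed clamp: copied %zu of %zu bytes\n", got, sizeof(wire));
    return 0;
}

Under that assumption, bumping default-packet-size by one byte hides the bad
clamp, because the buffer ends up one byte larger than the largest frame you
expect on the wire.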
> >
> >
> > On Thu, Aug 4, 2011 at 10:06 AM,  <David.R.Wharton at regions.com> wrote:
> >> Thanks Will.  I installed Suricata version 1.1beta2 (rev b3f7e6a) from
> >> git and now I don't get the PF_RING errors.  Now I get tons of App Layer
> >> parser errors, similar to the following, mostly on SSL/TLS connections
> >> but I also see it on http and smtp 'app layer protocol':
> >>
> >> [4640] 4/8/2011 -- 09:56:38 - (app-layer-parser.c:955) <Error>
> >> (AppLayerParse) -- [ERRCODE: SC_ERR_ALPARSER(59)] - Error occured in
> >> parsing "tls" app layer protocol, using network protocol 6, source IP
> >> address 66.255.199.50, destination IP address <removed>, src port 34481
> >> and dst port 443
> >> [4640] 4/8/2011 -- 09:56:38 - (app-layer-parser.c:955) <Error>
> >> (AppLayerParse) -- [ERRCODE: SC_ERR_ALPARSER(59)] - Error occured in
> >> parsing "tls" app layer protocol, using network protocol 6, source IP
> >> address 153.69.201.240, destination IP address <removed>, src port 7132
> >> and dst port 443
> >> [4640] 4/8/2011 -- 09:56:38 - (app-layer-parser.c:955) <Error>
> >> (AppLayerParse) -- [ERRCODE: SC_ERR_ALPARSER(59)] - Error occured in
> >> parsing "http" app layer protocol, using network protocol 6, source IP
> >> address <removed>, destination IP address 68.147.232.208, src port 53771
> >> and dst port 80
> >>
> >> Thanks.
> >>
> >> -David
> >>
> >>
> >>
> >> From:        Will Metcalf <william.metcalf at gmail.com>
> >> To:        David.R.Wharton at regions.com
> >> Cc:        oisf-devel at openinfosecfoundation.org
> >> Date:        08/03/2011 04:35 PM
> >> Subject:        Re: [Oisf-devel] <Error> (ReceivePfring) -- [ERRCODE:
> >> SC_ERR_PF_RING_RECV(31)] - pfring_recv error -1
> >> ________________________________
> >>
> >>
> >> You need to upgrade to the latest Suricata version from git. Packets
> >> are now passed by reference in PF_RING 4.7.1, which required us to
> >> modify suri.
> >>
> >> Regards,
> >>
> >> Will
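For the archives: as of 4.7 the receive call hands the packet back through a
u_char ** instead of filling a caller-supplied buffer, roughly as in the
sketch below. This is from memory, so verify against pfring.h on your
install; the buffer_len == 0 zero-copy convention is my reading of the
PF_RING docs, not something confirmed in this thread.

/* Sketch of a PF_RING >= 4.7 receive loop; the packet is
 * returned by reference rather than copied. */
#include <pfring.h>

static void rx_loop(pfring *ring)
{
    u_char *pkt = NULL;              /* set by pfring_recv() */
    struct pfring_pkthdr hdr;

    for (;;) {
        /* buffer_len 0: hand back a zero-copy reference into the
         * ring; last arg 1: block until a packet arrives. */
        int rc = pfring_recv(ring, &pkt, 0, &hdr, 1);
        if (rc > 0) {
            /* pkt/hdr are valid only until the next pfring_recv() */
        } else if (rc < 0) {
            break;                   /* the "pfring_recv error -1" above */
        }
    }
}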
> >> On Wed, Aug 3, 2011 at 4:30 PM,  <David.R.Wharton at regions.com> wrote:
> >>> I'm trying to get Suricata up and running with PF_RING but I keep
> >>> getting a pfring_recv error.  Here is a snippet from when Suricata
> >>> starts up:
> >>>
> >>> [13373] 3/8/2011 -- 16:25:22 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13354] 3/8/2011 -- 16:25:23 - (tm-threads.c:1485) <Info>
> >>> (TmThreadWaitOnThreadInit) -- all 11 packet processing threads, 3
> >>> management
> >>> threads initialized, engine started.
> >>> [13373] 3/8/2011 -- 16:25:23 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13373] 3/8/2011 -- 16:25:23 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13373] 3/8/2011 -- 16:25:23 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:24 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13387] 3/8/2011 -- 16:25:24 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13387] 3/8/2011 -- 16:25:24 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13387] 3/8/2011 -- 16:25:24 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13387] 3/8/2011 -- 16:25:24 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:24 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13388] 3/8/2011 -- 16:25:24 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13388] 3/8/2011 -- 16:25:24 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13388] 3/8/2011 -- 16:25:24 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13388] 3/8/2011 -- 16:25:24 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:24 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13389] 3/8/2011 -- 16:25:24 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13389] 3/8/2011 -- 16:25:24 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13389] 3/8/2011 -- 16:25:24 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13389] 3/8/2011 -- 16:25:24 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:24 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13390] 3/8/2011 -- 16:25:24 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13390] 3/8/2011 -- 16:25:24 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13390] 3/8/2011 -- 16:25:24 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13390] 3/8/2011 -- 16:25:24 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:24 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13391] 3/8/2011 -- 16:25:24 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13391] 3/8/2011 -- 16:25:24 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13391] 3/8/2011 -- 16:25:24 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13391] 3/8/2011 -- 16:25:24 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:24 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13392] 3/8/2011 -- 16:25:24 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13392] 3/8/2011 -- 16:25:24 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13392] 3/8/2011 -- 16:25:24 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13392] 3/8/2011 -- 16:25:24 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:24 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13393] 3/8/2011 -- 16:25:25 - (source-pfring.c:313) <Info>
> >>> (ReceivePfringThreadInit) -- (ReceivePfring) Using PF_RING v.4.7.1,
> >>> interface eth2, cluster-id 99
> >>> [13393] 3/8/2011 -- 16:25:25 - (source-pfring.c:232) <Error>
> >>> (ReceivePfring)
> >>> -- [ERRCODE: SC_ERR_PF_RING_RECV(31)] - pfring_recv error  -1
> >>> [13393] 3/8/2011 -- 16:25:25 - (source-pfring.c:332) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Packets 0, bytes 0
> >>> [13393] 3/8/2011 -- 16:25:25 - (source-pfring.c:336) <Info>
> >>> (ReceivePfringThreadExitStats) -- (ReceivePfring) Pfring Total:0 Recv:0
> >>> Drop:0 (nan%).
> >>> [13354] 3/8/2011 -- 16:25:25 - (tm-threads.c:1400) <Info>
> >>> (TmThreadRestartThread) -- thread "ReceivePfring" restarted
> >>> [13395] 3/8/2011 -- 16:25:25 - (source-pfring.c:307) <Error>
> >>> (ReceivePfringThreadInit) -- [ERRCODE:
> >>> SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -1
> >>> for cluster-id: 99
> >>> [13354] 3/8/2011 -- 16:25:25 - (suricata.c:1363) <Info> (main) --
> >>> signal received
> >>> [13354] 3/8/2011 -- 16:25:25 - (suricata.c:1414) <Info> (main) -- time
> >>> elapsed 3s
> >>> [13384] 3/8/2011 -- 16:25:25 - (flow.c:1142) <Info> (FlowManagerThread)
> >>> -- 0 new flows, 0 established flows were timed out, 0 flows in closed
> >>> state
> >>> [13354] 3/8/2011 -- 16:25:25 - (stream-tcp-reassemble.c:352) <Info>
> >>> (StreamTcpReassembleFree) -- Max memuse of the stream reassembly engine
> >>> 11220864 (in use 0)
> >>> [13354] 3/8/2011 -- 16:25:25 - (stream-tcp.c:495) <Info>
> >>> (StreamTcpFreeConfig) -- Max memuse of stream engine 4063232 (in use 0)
> >>> [13354] 3/8/2011 -- 16:25:26 - (detect.c:3403) <Info>
> >>> (SigAddressCleanupStage1) -- cleaning up signature grouping
> >>> structure... complete
> >>>
> >>> I am running PF_RING 4.7.1 ($Revision: 4753$) and Suricata version
> >>> 1.1beta2.
> >>>
> >>> PF_RING seems to be installed OK and I can run the pfcount program just
> >>> fine:
> >>>
> >>> # cat /proc/net/pf_ring/info
> >>> PF_RING Version     : 4.7.1 ($Revision: 4753$)
> >>> Ring slots          : 4096
> >>> Slot version        : 13
> >>> Capture TX          : Yes [RX+TX]
> >>> IP Defragment       : No
> >>> Socket Mode         : Standard
> >>> Transparent mode    : Yes (mode 0)
> >>> Total rings         : 0
> >>> Total plugins       : 0
> >>>
> >>>
> >>> # ./pfcount -i eth2
> >>> Using PF_RING v.4.7.1
> >>> Capturing from eth2 [00:1B:78:31:F1:A4]
> >>> # Device RX channels: 1
> >>> # Polling threads:    1
> >>> =========================
> >>> Absolute Stats: [49859 pkts rcvd][0 pkts dropped]
> >>> Total Pkts=49859/Dropped=0.0 %
> >>> 49'859 pkts - 28'713'541 bytes
> >>> =========================
> >>>
> >>> =========================
> >>> Absolute Stats: [102158 pkts rcvd][0 pkts dropped]
> >>> Total Pkts=102158/Dropped=0.0 %
> >>> 102'158 pkts - 59'531'866 bytes [101'959.38 pkt/sec - 475.33 Mbit/sec]
> >>> =========================
> >>> Actual Stats: 52299 pkts [1'001.94 ms][52'197.37 pkt/sec]
> >>> =========================
> >>>
> >>>
> >>> Any ideas?
> >>>
> >>> Thanks.
> >>>
> >>> -David
> >>>
> >>>
> >>
> >>
> >

Yes,
1522 would be the best.
The Linux default MTU is 1500, I think...
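
Spelled out, for a standard Ethernet link:

1500 (MTU) + 14 (Ethernet header) + 4 (802.1Q tag) + 4 (FCS) = 1522 bytes

So in suricata.yaml that would be (a sketch of the setting Will mentions
above, using the generous value):

default-packet-size: 1522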


-- 
Peter Manev

