[Oisf-users] Zero packets captured with suricata 2.0.4+PFRING 6.0.2

C. L. Martinez carlopmart at gmail.com
Mon Oct 13 06:50:48 UTC 2014


On Fri, Oct 10, 2014 at 1:44 PM, Peter Manev <petermanev at gmail.com> wrote:
>
> Hi,
> Could you please share the output of :
>
> 1)
> modinfo pf_ring && cat /proc/net/pf_ring/info
>
> 2)
> pfring section in your suricata.yaml
>
> 3)
> suricata --build-info
> ?
>

Sorry for the delay.

Here are the answers:

1)
modinfo pf_ring:

filename:
/lib/modules/2.6.32-431.29.2.el6.x86_64/kernel/net/pf_ring/pf_ring.ko
alias:          net-pf-27
description:    Packet capture acceleration and analysis
author:         Luca Deri <deri at ntop.org>
license:        GPL
srcversion:     CE1D96764C8F88915343823
depends:
vermagic:       2.6.32-431.29.2.el6.x86_64 SMP mod_unload modversions
parm:           min_num_slots:Min number of ring slots (uint)
parm:           perfect_rules_hash_size:Perfect rules hash size (uint)
parm:           transparent_mode:0=standard Linux,
1=direct2pfring+transparent, 2=direct2pfring+non transparentFor 1 and
2 you need to use a PF_RING aware driver (uint)
parm:           enable_debug:Set to 1 to enable PF_RING debug tracing
into the syslog (uint)
parm:           enable_tx_capture:Set to 1 to capture outgoing packets (uint)
parm:           enable_frag_coherence:Set to 1 to handle fragments
(flow coherence) in clusters (uint)
parm:           enable_ip_defrag:Set to 1 to enable IP
defragmentation(only rx traffic is defragmentead) (uint)
parm:           quick_mode:Set to 1 to run at full speed but with upto
one socket per interface (uint)

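(For reference: the parameters listed by modinfo above are passed at module load time. A minimal sketch of how transparent_mode would be set; the parameter names come from the modinfo output, the values are only illustrative:)

```shell
# Reload pf_ring with a given transparent_mode.
# Parameter names are from the modinfo output; values are examples.
rmmod pf_ring
modprobe pf_ring transparent_mode=2 min_num_slots=65534

# Confirm the setting took effect:
grep -i transparent /proc/net/pf_ring/info
```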
cat /proc/net/pf_ring/info
PF_RING Version          : 6.0.2 ($Revision: $)
Total rings              : 1

Standard (non DNA) Options
Ring slots               : 65534
Slot version             : 16
Capture TX               : No [RX only]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 2]
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0


2)

pfring:
  - interface: eth3
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 2

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow
or per hash.
    # This is only supported in versions of PF_RING > 4.1.1.
    cluster-type: cluster_round_robin
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto
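(A side note on the config above: the comment says PF_RING load balances based on flow, but cluster-type is set to cluster_round_robin, which spreads packets across threads regardless of flow. A sketch of the flow-based alternative that Suricata's pfring runmode also supports; this is illustrative, not necessarily the fix for the capture problem:)

```yaml
pfring:
  - interface: eth3
    threads: 2
    cluster-id: 99
    # cluster_flow hashes per flow, so each thread sees whole flows;
    # cluster_round_robin distributes packet by packet.
    cluster-type: cluster_flow
```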

3)

 suricata --build-info
This is Suricata version 2.0.4 RELEASE
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET
HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK
HAVE_NSS HAVE_LIBJANSSON PROFILING
SIMD support: none
Atomic intrisics: 1 2 4 8 byte(s)
64-bits, Little-endian architecture
GCC version 4.4.7 20120313 (Red Hat 4.4.7-4), C version 199901
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.15, linked against LibHTP v0.5.15
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         yes
  NFQueue support:                         no
  NFLOG support:                           no
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     yes
  Detection enabled:                       yes

  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  Prelude support:                         no
  PCRE jit:                                no
  LUA support:                             no
  libluajit:                               no
  libgeoip:                                yes
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       yes
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     no

Generic build parameters:
  Installation prefix (--prefix):          /opt/suricata
  Configuration directory (--sysconfdir):  /opt/suricata/etc/suricata/
  Log directory (--localstatedir) :        /opt/suricata/var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                no
  GCC Profile enabled:                     no


In this VM I also have a Moloch instance for some tests. Moloch
listens on eth2. I changed transparent_mode to 1 in the pf_ring
module and set up Suricata to listen on the same interface. Result:
everything works.


So, when I configure the pf_ring module with transparent_mode=2 and
use a different interface for Suricata (in my case, eth3), it doesn't
work. But if I set the pf_ring module to transparent_mode=1 and
Suricata listens on the same net device as the Moloch instance,
everything works.

Any ideas why?


I used the e1000 driver provided by the pf_ring package in both tests.
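(One way to narrow this down, sketched here with the device names from the message above: check which driver is actually bound to each interface, and whether a PF_RING ring gets created for eth3 at all while Suricata is running:)

```shell
# Driver and version bound to each capture interface
# (the working eth2 vs the non-working eth3):
ethtool -i eth2
ethtool -i eth3

# While Suricata is running, each open ring shows up as an entry
# under /proc/net/pf_ring/; if nothing references eth3, no ring
# was ever bound to that interface:
ls /proc/net/pf_ring/
```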


