[Oisf-users] Zero packets captured with suricata 2.0.4+PFRING 6.0.2
C. L. Martinez
carlopmart@gmail.com
Fri Oct 10 07:45:45 UTC 2014
Hi all,
I am running some tests with Suricata 2.0.4 + PF_RING 6.0.2 inside a
VM (the host is CentOS 6.5).
To my surprise, no packets are captured.
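For reference, this is roughly how I start Suricata against PF_RING
(the config path is just my local layout; the interface and cluster id
match the pf_ring stats further down):

[root@testpf bin]# ./suricata -c /opt/suricata/etc/suricata.yaml \
    --pfring-int=eth3 --pfring-cluster-id=99 \
    --pfring-cluster-type=cluster_flow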
Suricata stats:
-------------------------------------------------------------------
Date: 10/10/2014 -- 07:40:34 (uptime: 0d, 00h 05m 01s)
-------------------------------------------------------------------
Counter | TM Name | Value
-------------------------------------------------------------------
capture.kernel_packets | RxPFReth31 | 0
capture.kernel_drops | RxPFReth31 | 0
dns.memuse | RxPFReth31 | 0
dns.memcap_state | RxPFReth31 | 0
dns.memcap_global | RxPFReth31 | 0
decoder.pkts | RxPFReth31 | 0
decoder.bytes | RxPFReth31 | 0
.......
tcp.sessions | RxPFReth31 | 0
tcp.ssn_memcap_drop | RxPFReth31 | 0
tcp.pseudo | RxPFReth31 | 0
tcp.invalid_checksum | RxPFReth31 | 0
tcp.no_flow | RxPFReth31 | 0
tcp.reused_ssn | RxPFReth31 | 0
tcp.memuse | RxPFReth31 | 0
tcp.syn | RxPFReth31 | 0
tcp.synack | RxPFReth31 | 0
tcp.rst | RxPFReth31 | 0
tcp.segment_memcap_drop | RxPFReth31 | 0
tcp.stream_depth_reached | RxPFReth31 | 0
tcp.reassembly_memuse | RxPFReth31 | 0
tcp.reassembly_gap | RxPFReth31 | 0
http.memuse | RxPFReth31 | 0
http.memcap | RxPFReth31 | 0
detect.alert | RxPFReth31 | 0
flow_mgr.closed_pruned | FlowManagerThread | 0
flow_mgr.new_pruned | FlowManagerThread | 0
flow_mgr.est_pruned | FlowManagerThread | 0
flow.memuse | FlowManagerThread | 7074304
flow.spare | FlowManagerThread | 10000
flow.emerg_mode_entered | FlowManagerThread | 0
flow.emerg_mode_over | FlowManagerThread | 0
suricata.log confirms it:
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 4, prealloc 256
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 16, prealloc 512
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 112, prealloc 512
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 248, prealloc 512
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 512, prealloc 512
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 768, prealloc 1024
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 1276, prealloc 1024
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 1425, prealloc 1024
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 1448, prealloc 1024
10/10/2014 -- 07:35:44 - <Info> - segment pool: pktsize 65535, prealloc 1024
10/10/2014 -- 07:35:44 - <Info> - stream.reassembly "chunk-prealloc": 1024
10/10/2014 -- 07:35:44 - <Notice> - all 1 packet processing threads, 3
management threads initialized, engine started.
10/10/2014 -- 07:41:37 - <Notice> - Signal Received. Stopping engine.
10/10/2014 -- 07:41:37 - <Info> - 0 new flows, 0 established flows
were timed out, 0 flows in closed state
and from the pf_ring side:
[root@testpf pf_ring]# cat 7696-eth3.4
Bound Device(s) : eth3
Active : 1
Breed : Non-DNA
Sampling Rate : 1
Capture Direction : RX+TX
Socket Mode : RX+TX
Appl. Name : Suricata
IP Defragment : No
BPF Filtering : Enabled
# Sw Filt. Rules : 0
# Hw Filt. Rules : 0
Poll Pkt Watermark : 128
Num Poll Calls : 1033
Channel Id Mask : 0xFFFFFFFF
Cluster Id : 99
Slot Version : 16 [6.0.2]
Min Num Slots : 65538
Bucket Len : 1514
Slot Len : 1552 [bucket+header]
Tot Memory : 101724160
Tot Packets : 0
Tot Pkt Lost : 0
Tot Insert : 0
Tot Read : 0
Insert Offset : 0
Remove Offset : 0
TX: Send Ok : 0
TX: Send Errors : 0
Reflect: Fwd Ok : 0
Reflect: Fwd Errors: 0
Num Free Slots : 65538
But if I switch to pcap or af-packet capture, everything works fine.
Any idea why it doesn't work with pf_ring? I have tried loading the
pf_ring module with both "transparent_mode=0" and "transparent_mode=2"
(I am using the e1000 driver compiled from the pf_ring source tree),
without luck.
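For completeness, the module reload sequence between attempts looks
roughly like this (the slot count is simply what produces the 65538
slots shown above):

[root@testpf ~]# rmmod pf_ring
[root@testpf ~]# modprobe pf_ring transparent_mode=2 min_num_slots=65536
[root@testpf ~]# cat /proc/net/pf_ring/info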
Suricata is compiled from source with PF_RING support enabled, and ldd
confirms it links against the PF_RING libraries:
[root@testpf bin]# ldd suricata
linux-vdso.so.1 => (0x00007ffffd7ff000)
libhtp-0.5.15.so.1 => /opt/suricata/lib/libhtp-0.5.15.so.1
(0x00007f91cf6a5000)
libGeoIP.so.1 => /usr/lib64/libGeoIP.so.1 (0x00007f91cf46a000)
libmagic.so.1 => /usr/lib64/libmagic.so.1 (0x00007f91cf24c000)
libcap-ng.so.0 => /lib64/libcap-ng.so.0 (0x00007f91cf047000)
libpcap.so.1 => /opt/pfring/lib/libpcap.so.1 (0x00007f91cedbd000)
libpfring.so => /opt/pfring/lib/libpfring.so (0x00007f91ceb63000)
libnet.so.1 => /lib64/libnet.so.1 (0x00007f91ce94a000)
libjansson.so.4 => /usr/lib64/libjansson.so.4 (0x00007f91ce73e000)
libyaml-0.so.2 => /usr/lib64/libyaml-0.so.2 (0x00007f91ce51f000)
libpcre.so.0 => /lib64/libpcre.so.0 (0x00007f91ce2f3000)
librt.so.1 => /lib64/librt.so.1 (0x00007f91ce0ea000)
libssl3.so => /usr/lib64/libssl3.so (0x00007f91cdeab000)
libsmime3.so => /usr/lib64/libsmime3.so (0x00007f91cdc7f000)
libnss3.so => /usr/lib64/libnss3.so (0x00007f91cd93f000)
libnssutil3.so => /usr/lib64/libnssutil3.so (0x00007f91cd713000)
libplds4.so => /lib64/libplds4.so (0x00007f91cd50f000)
libplc4.so => /lib64/libplc4.so (0x00007f91cd309000)
libnspr4.so => /lib64/libnspr4.so (0x00007f91cd0cb000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f91cceae000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f91ccca9000)
libc.so.6 => /lib64/libc.so.6 (0x00007f91cc915000)
libz.so.1 => /lib64/libz.so.1 (0x00007f91cc6ff000)
/lib64/ld-linux-x86-64.so.2 (0x00007f91cf8c4000)
libnl.so.1 => /lib64/libnl.so.1 (0x00007f91cc4ac000)
libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007f91cc2a3000)
libm.so.6 => /lib64/libm.so.6 (0x00007f91cc01e000)
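For the record, the build was configured roughly as follows (the
prefixes reflect my local layout, as the ldd output above shows):

[root@testpf suricata-2.0.4]# ./configure --prefix=/opt/suricata \
    --enable-pfring \
    --with-libpfring-includes=/opt/pfring/include \
    --with-libpfring-libraries=/opt/pfring/lib \
    --with-libpcap-includes=/opt/pfring/include \
    --with-libpcap-libraries=/opt/pfring/lib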