[Oisf-users] Losing too many packets under FreeBSD 12
Carlos Lopez
clopmz at outlook.com
Tue Jan 22 16:10:17 UTC 2019
Hi all,
After changing my packet capture method from netmap to pcap, I am losing too many packets:
stats.log:
------------------------------------------------------------------------------------
Date: 1/22/2019 -- 15:50:36 (uptime: 0d, 00h 13m 44s)
------------------------------------------------------------------------------------
Counter | TM Name | Value
------------------------------------------------------------------------------------
capture.kernel_packets | Total | 9680582028
capture.kernel_drops | Total | 61499059
decoder.pkts | Total | 17071994
decoder.bytes | Total | 11968982778
decoder.ipv4 | Total | 17071994
decoder.ethernet | Total | 17071994
decoder.tcp | Total | 14457431
decoder.udp | Total | 2592401
decoder.icmpv4 | Total | 22090
decoder.vlan | Total | 17071994
decoder.avg_pkt_size | Total | 701
decoder.max_pkt_size | Total | 1518
flow.tcp | Total | 658151
flow.udp | Total | 494629
defrag.ipv4.fragments | Total | 72
decoder.ipv4.frag_overlap | Total | 31
tcp.sessions | Total | 359170
tcp.syn | Total | 1173464
tcp.synack | Total | 1867315
tcp.rst | Total | 30
tcp.pkt_on_wrong_thread | Total | 11555373
tcp.reassembly_gap | Total | 1131
tcp.overlap | Total | 13406
detect.mpm_list | Total | 4
detect.nonmpm_list | Total | 1186
detect.fnonmpm_list | Total | 1184
detect.match_list | Total | 1188
app_layer.tx.http | Total | 19044
app_layer.flow.enip | Total | 7
app_layer.flow.dns_udp | Total | 300760
app_layer.tx.dns_udp | Total | 430782
app_layer.tx.enip | Total | 61
app_layer.flow.failed_udp | Total | 193862
flow_mgr.closed_pruned | Total | 95509
flow_mgr.new_pruned | Total | 564713
flow_mgr.est_pruned | Total | 237070
flow.spare | Total | 10000
flow.tcp_reuse | Total | 161417
flow_mgr.flows_checked | Total | 13287
flow_mgr.flows_notimeout | Total | 13219
flow_mgr.flows_timeout | Total | 68
flow_mgr.flows_removed | Total | 68
flow_mgr.rows_checked | Total | 65536
flow_mgr.rows_skipped | Total | 62629
flow_mgr.rows_empty | Total | 20
flow_mgr.rows_maxlen | Total | 15
tcp.memuse | Total | 9175040
tcp.reassembly_memuse | Total | 1572864
dns.memcap_global | Total | 1136320
flow.memuse | Total | 76508496
suricata.log:
22/1/2019 -- 15:50:35 - <Info> - time elapsed 763.413s
22/1/2019 -- 15:50:36 - <Info> - (W#01-ix1) Packets 1689324, bytes 1308267087
22/1/2019 -- 15:50:36 - <Info> - (W#01-ix1) Pcap Total:1068099841 Recv:1060298142 Drop:7801699 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#02-ix1) Packets 1692015, bytes 1312722602
22/1/2019 -- 15:50:36 - <Info> - (W#02-ix1) Pcap Total:1068075898 Recv:1060277257 Drop:7798641 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#03-ix1) Packets 1691655, bytes 1310557721
22/1/2019 -- 15:50:36 - <Info> - (W#03-ix1) Pcap Total:1068049021 Recv:1060250211 Drop:7798810 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#04-ix1) Packets 1688237, bytes 1301586167
22/1/2019 -- 15:50:36 - <Info> - (W#04-ix1) Pcap Total:1068021290 Recv:1060218922 Drop:7802368 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#05-ix1) Packets 1691769, bytes 1314176241
22/1/2019 -- 15:50:36 - <Info> - (W#05-ix1) Pcap Total:1067994742 Recv:1060196795 Drop:7797947 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#06-ix1) Packets 1690488, bytes 1313213599
22/1/2019 -- 15:50:36 - <Info> - (W#06-ix1) Pcap Total:1067968950 Recv:1060169730 Drop:7799220 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#07-ix1) Packets 1697820, bytes 1323139515
22/1/2019 -- 15:50:36 - <Info> - (W#07-ix1) Pcap Total:1067945068 Recv:1060153211 Drop:7791857 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#08-ix1) Packets 1691809, bytes 1302342717
22/1/2019 -- 15:50:36 - <Info> - (W#08-ix1) Pcap Total:1067924454 Recv:1060126745 Drop:7797709 (0.7%).
22/1/2019 -- 15:50:36 - <Info> - (W#01-ix2) Packets 442381, bytes 185383013
22/1/2019 -- 15:50:36 - <Info> - (W#01-ix2) Pcap Total:155726061 Recv:155723752 Drop:2309 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - (W#02-ix2) Packets 442476, bytes 185420171
22/1/2019 -- 15:50:36 - <Info> - (W#02-ix2) Pcap Total:155721690 Recv:155719541 Drop:2149 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - (W#03-ix2) Packets 442339, bytes 185363477
22/1/2019 -- 15:50:36 - <Info> - (W#03-ix2) Pcap Total:155716363 Recv:155714037 Drop:2326 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - (W#04-ix2) Packets 442320, bytes 185353322
22/1/2019 -- 15:50:36 - <Info> - (W#04-ix2) Pcap Total:155711487 Recv:155709167 Drop:2320 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - (W#05-ix2) Packets 442432, bytes 185399035
22/1/2019 -- 15:50:36 - <Info> - (W#05-ix2) Pcap Total:155706908 Recv:155704789 Drop:2119 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - (W#06-ix2) Packets 442318, bytes 185359304
22/1/2019 -- 15:50:36 - <Info> - (W#06-ix2) Pcap Total:155702503 Recv:155700129 Drop:2374 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - (W#07-ix2) Packets 442376, bytes 185376317
22/1/2019 -- 15:50:36 - <Info> - (W#07-ix2) Pcap Total:155697326 Recv:155695156 Drop:2170 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - (W#08-ix2) Packets 442235, bytes 185322490
22/1/2019 -- 15:50:36 - <Info> - (W#08-ix2) Pcap Total:155692469 Recv:155690118 Drop:2351 (0.0%).
22/1/2019 -- 15:50:36 - <Info> - Alerts: 0
22/1/2019 -- 15:50:37 - <Info> - cleaning up signature grouping structure... complete
22/1/2019 -- 15:50:37 - <Notice> - Stats for 'ix1': pkts: 13533117, drop: 7690885 (56.83%), invalid chksum: 0
22/1/2019 -- 15:50:37 - <Notice> - Stats for 'ix2': pkts: 3538877, drop: 0 (0.00%), invalid chksum: 0
My Suricata startup command is:
/usr/local/bin/suricata -D -vvv -k none --pcap=ix1 --pcap=ix2 --pidfile /var/run/suricata.pid -c /etc/suricata/suricata.yaml
The relevant config sections are:
flow:
  memcap: 512mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  #managers: 1 # default to one flow manager
  #recyclers: 1 # default to one flow recycler thread
# Cross platform libpcap capture support
pcap:
  - interface: ix1
    # On Linux, pcap will try to use mmaped capture and will use buffer-size
    # as total of memory used by the ring. So set this to something bigger
    # than 1% of your bandwidth.
    buffer-size: 16777216
    #bpf-filter: "tcp and port 25"
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: Suricata uses a statistical approach to detect when
    #    checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    checksum-checks: no
    # With some accelerator cards using a modified libpcap (like myricom), you
    # may want to have the same number of capture threads as the number of capture
    # rings. In this case, set up the threads variable to N to start N threads
    # listening on the same interface.
    threads: 8
    # set to no to disable promiscuous mode:
    #promisc: no
    # set snaplen, if not set it defaults to MTU if MTU can be known
    # via ioctl call and to full capture if not.
    #snaplen: 1518
  - interface: ix2
    # (same settings and comments as ix1)
    buffer-size: 16777216
    #bpf-filter: "tcp and port 25"
    checksum-checks: no
    threads: 8
  # Put default values here
  - interface: default
    #checksum-checks: auto
# Stream engine settings. Here the TCP stream tracking and reassembly
# engine is configured.
stream:
  memcap: 12gb
  checksum-validation: no # reject wrong csums
  inline: auto            # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 24gb
    depth: 6mb            # reassemble 6mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
    #raw: yes
    #segment-prealloc: 2048
    #check-overlap-different-data: true
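One thing I am considering is raising the pcap buffer-size shown above, in case a 16 MB buffer per thread is simply too small for the traffic on ix1. The value below is only a guess and untested on FreeBSD:

pcap:
  - interface: ix1
    buffer-size: 67108864   # 64 MB instead of 16 MB (untested guess)
    checksum-checks: no
    threads: 8
  - interface: ix2
    buffer-size: 67108864   # same untested guess
    checksum-checks: no
    threads: 8

Would that be expected to help here, or is this setting not honored the same way by the FreeBSD BPF path as by Linux mmaped capture?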
Hardware:
[1] CPU: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz (2893.09-MHz K8-class CPU)
[1] Origin="GenuineIntel" Id=0x206d7 Family=0x6 Model=0x2d Stepping=7
[1] Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
[1] Features2=0x1fbee3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX>
[1] AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
[1] AMD Features2=0x1<LAHF>
[1] XSAVE Features=0x1<XSAVEOPT>
[1] VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
[1] TSC: P-state invariant, performance statistics
[1] real memory = 274877906944 (262144 MB)
[1] avail memory = 267810459648 (255403 MB)
I think my problem is with interrupts, judging from the vmstat -i output:
interrupt total rate
irq9: acpi0 2 0
irq16: ehci0 ehci1 83670 3
cpu0:timer 21018537 790
cpu1:timer 21029511 791
cpu2:timer 21028699 790
cpu3:timer 20830840 783
cpu4:timer 20916458 786
cpu5:timer 20843299 784
cpu6:timer 20768280 781
cpu7:timer 20905575 786
cpu8:timer 23733687 892
cpu9:timer 23806865 895
cpu10:timer 23650927 889
cpu11:timer 23836563 896
cpu12:timer 23825227 896
cpu13:timer 23922185 899
cpu14:timer 23981397 901
cpu15:timer 24081839 905
irq264: ix0:rxq0 1753857 66
irq265: ix0:rxq1 5449242 205
irq266: ix0:rxq2 655583 25
irq267: ix0:rxq3 229 0
irq268: ix0:rxq4 2378 0
irq269: ix0:rxq5 9763 0
irq270: ix0:rxq6 4507093 169
irq271: ix0:rxq7 583 0
irq272: ix0:aq 2 0
irq273: ix1:rxq0 282617173 10624
irq274: ix1:rxq1 281288995 10574
irq275: ix1:rxq2 260379019 9788
irq276: ix1:rxq3 240058835 9024
irq277: ix1:rxq4 240313773 9034
irq278: ix1:rxq5 240624818 9045
irq279: ix1:rxq6 240963768 9058
irq280: ix1:rxq7 259415783 9752
irq281: ix1:aq 2 0
irq282: mfi0 126741 5
irq283: ahci0 31 0
irq284: ix2:rxq0 225230947 8467
irq285: ix2:rxq1 223607841 8406
irq286: ix2:rxq2 224104742 8424
irq287: ix2:rxq3 221739636 8335
irq288: ix2:rxq4 221696673 8334
irq289: ix2:rxq5 219087393 8236
irq290: ix2:rxq6 223915854 8417
irq291: ix2:rxq7 220911327 8304
irq292: ix2:aq 2 0
Total 4196725644 157761
Am I right?
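If so, would it help to pin the Suricata worker threads away from the cores that are servicing the ix1/ix2 queues? Something like this in suricata.yaml is what I had in mind (the CPU sets below are only a guess for this box):

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "8-15" ]   # keep workers off the cores taking ix interrupts (guess)
        mode: "exclusive"
        prio:
          default: "high"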
Regards,
C. L. Martinez