I have a pretty beefy server monitoring two SPAN ports. A lot of traffic is flowing in, mostly HTTP.
I have 40 logical CPUs (20 per SPAN port) and I am using PF_RING.

I've noticed that I occasionally get packet loss, always as a short burst of packets; after that it is fine for days.
A couple of PF_RING instances report packet loss (i.e. cat /proc/net/pf_ring/*eth* | grep "Tot Pkt Lost"; an expanded version of that check is in the P.S. below).
Here is the first stats dump after the packet loss occurred. The drops happen within a minute and then stop.

capture.kernel_packets    | RxPFReth213 | 366410186
capture.kernel_drops      | RxPFReth213 | 639312
dns.memuse                | RxPFReth213 | 3089534
dns.memcap_state          | RxPFReth213 | 0
dns.memcap_global         | RxPFReth213 | 0
decoder.pkts              | RxPFReth213 | 366410186
decoder.bytes             | RxPFReth213 | 268927212125
decoder.invalid           | RxPFReth213 | 4111244
decoder.ipv4              | RxPFReth213 | 365972036
decoder.ipv6              | RxPFReth213 | 104297
decoder.ethernet          | RxPFReth213 | 366410186
decoder.raw               | RxPFReth213 | 0
decoder.sll               | RxPFReth213 | 0
decoder.tcp               | RxPFReth213 | 267634781
decoder.udp               | RxPFReth213 | 7537800
decoder.sctp              | RxPFReth213 | 0
decoder.icmpv4            | RxPFReth213 | 325917
decoder.icmpv6            | RxPFReth213 | 0
decoder.ppp               | RxPFReth213 | 0
decoder.pppoe             | RxPFReth213 | 0
decoder.gre               | RxPFReth213 | 0
decoder.vlan              | RxPFReth213 | 0
decoder.vlan_qinq         | RxPFReth213 | 0
decoder.teredo            | RxPFReth213 | 1410
decoder.ipv4_in_ipv6      | RxPFReth213 | 0
decoder.ipv6_in_ipv6      | RxPFReth213 | 0
decoder.avg_pkt_size      | RxPFReth213 | 733
decoder.max_pkt_size      | RxPFReth213 | 1514
defrag.ipv4.fragments     | RxPFReth213 | 84459996
defrag.ipv4.reassembled   | RxPFReth213 | 180
defrag.ipv4.timeouts      | RxPFReth213 | 0
defrag.ipv6.fragments     | RxPFReth213 | 0
defrag.ipv6.reassembled   | RxPFReth213 | 0
defrag.ipv6.timeouts      | RxPFReth213 | 0
defrag.max_frag_hits      | RxPFReth213 | 0
tcp.sessions              | RxPFReth213 | 2160679
tcp.ssn_memcap_drop       | RxPFReth213 | 0
tcp.pseudo                | RxPFReth213 | 335927
tcp.invalid_checksum      | RxPFReth213 | 0
tcp.no_flow               | RxPFReth213 | 0
tcp.reused_ssn            | RxPFReth213 | 1624
tcp.memuse                | RxPFReth213 | 15770704
tcp.syn                   | RxPFReth213 | 2457006
tcp.synack                | RxPFReth213 | 2182331
tcp.rst                   | RxPFReth213 | 1386908
tcp.segment_memcap_drop   | RxPFReth213 | 0
tcp.stream_depth_reached  | RxPFReth213 | 328
tcp.reassembly_memuse     | RxPFReth213 | 40356260000
tcp.reassembly_gap        | RxPFReth213 | 766124
http.memuse               | RxPFReth213 | 85581753
http.memcap               | RxPFReth213 | 0
detect.alert              | RxPFReth213 | 6375

I just hope it is not an attempt to evade the IDS.

Any help would be appreciated.

Thanks.
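
P.S. For reference, this is roughly how I check the per-ring loss counters. It is just an expanded version of the grep above; the exact file names under /proc/net/pf_ring can differ between PF_RING versions and interface naming, so treat it as a sketch:

    # print each PF_RING proc entry next to its "Tot Pkt Lost" counter
    for f in /proc/net/pf_ring/*eth*; do
        echo "$f : $(grep 'Tot Pkt Lost' "$f")"
    done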