[Oisf-users] What does it mean?

C. L. Martinez carlopmart at gmail.com
Wed Oct 9 13:06:59 UTC 2013


On Wed, Oct 9, 2013 at 12:39 PM, Victor Julien <lists at inliniac.net> wrote:
> On 10/09/2013 02:36 PM, C. L. Martinez wrote:
>> On Wed, Oct 9, 2013 at 12:31 PM, Victor Julien <lists at inliniac.net> wrote:
>>> On 10/09/2013 02:28 PM, C. L. Martinez wrote:
>>>> Hi all,
>>>>
>>>>  Recently, I installed Suricata 1.4.6 on a FreeBSD 9.2 host, and it
>>>> reports a lot of packets dropped by the kernel:
>>>>
>>>> For example, after 2 minutes of uptime:
>>>>
>>>> Date: 10/9/2013 -- 12:19:50 (uptime: 0d, 00h 02m 58s)
>>>> -------------------------------------------------------------------
>>>> Counter                   | TM Name                   | Value
>>>> -------------------------------------------------------------------
>>>> capture.kernel_packets    | RxPcapem41                | 3137698
>>>> capture.kernel_drops      | RxPcapem41                | 2415508
>>>> capture.kernel_ifdrops    | RxPcapem41                | 0
>>>>
>>>> But tcp.ssn_memcap_drop and tcp.reassembly_gap look like this:
>>>>
>>>> decoder.avg_pkt_size      | RxPcapem42                | 828
>>>> decoder.max_pkt_size      | RxPcapem42                | 1514
>>>> defrag.ipv4.fragments     | RxPcapem42                | 90
>>>> defrag.ipv4.reassembled   | RxPcapem42                | 25
>>>> defrag.ipv4.timeouts      | RxPcapem42                | 0
>>>> defrag.ipv6.fragments     | RxPcapem42                | 0
>>>> defrag.ipv6.reassembled   | RxPcapem42                | 0
>>>> defrag.ipv6.timeouts      | RxPcapem42                | 0
>>>> defrag.max_frag_hits      | RxPcapem42                | 0
>>>> tcp.sessions              | RxPcapem42                | 308
>>>> tcp.ssn_memcap_drop       | RxPcapem42                | 0
>>>> tcp.pseudo                | RxPcapem42                | 23
>>>> tcp.invalid_checksum      | RxPcapem42                | 0
>>>> tcp.no_flow               | RxPcapem42                | 0
>>>> tcp.reused_ssn            | RxPcapem42                | 0
>>>> tcp.memuse                | RxPcapem42                | 6029312
>>>> tcp.syn                   | RxPcapem42                | 1261
>>>> tcp.synack                | RxPcapem42                | 702
>>>> tcp.rst                   | RxPcapem42                | 565
>>>> tcp.segment_memcap_drop   | RxPcapem42                | 0
>>>> tcp.stream_depth_reached  | RxPcapem42                | 0
>>>> tcp.reassembly_memuse     | RxPcapem42                | 11327048
>>>> tcp.reassembly_gap        | RxPcapem42                | 23
>>>
>>> tcp.ssn_memcap_drop and tcp.reassembly_gap are only related to memcaps, not
>>> to packet loss.
>>>
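For context, the memcaps being referred to are the stream settings in
suricata.yaml. A minimal sketch, with illustrative values only (not taken
from this setup):

    stream:
      memcap: 32mb          # session memcap; running out shows up in tcp.ssn_memcap_drop
      reassembly:
        memcap: 64mb        # reassembly memcap; running out shows up in tcp.segment_memcap_drop
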
>>>> I think the problem is with interrupts:
>>>>
>>>> interrupt                          total       rate
>>>> irq1: atkbd0                           6          0
>>>> irq10: em2 em3                   2320880       3453
>>>> irq11: em0 em1 em4+              1256951       1870
>>>> cpu0:timer                        148773        221
>>>> cpu1:timer                        148310        220
>>>> Total                            3877066       5769
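Side note: the table above looks like vmstat -i output; on FreeBSD that is
the usual way to check which NICs share an interrupt line, for example:

    # show per-device interrupt totals and rates
    vmstat -i
    # narrow the output to the em interfaces mentioned in this thread
    vmstat -i | grep em
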
>>>
>>> Not sure.
>>>
>>> What runmode are you using? Also, what's your max-pending-packets setting?
>>>
>>
>> I use the workers runmode, and max-pending-packets is set to 12288 ...
>
> By using workers with multiple interfaces you get just one thread per
> interface. There is no flow-based load balancing in plain libpcap, so I
> think runmode autofp may give you better results, as Suricata can then
> use more threads per interface.
>
> --
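To make the suggestion concrete, a minimal sketch of the two settings being
discussed; the runmode and max-pending-packets values come from this thread,
while the interface name and config path below are placeholders for
illustration:

    # suricata.yaml
    runmode: autofp
    max-pending-packets: 12288

    # or override the runmode when starting Suricata
    # (em4 and the yaml path are assumptions, not taken from this setup)
    suricata --runmode=autofp --pcap=em4 -c /usr/local/etc/suricata/suricata.yaml
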

More or less the same numbers using the autofp runmode:

-------------------------------------------------------------------
Date: 10/9/2013 -- 13:05:07 (uptime: 0d, 00h 03m 18s)
-------------------------------------------------------------------
Counter                   | TM Name                   | Value
-------------------------------------------------------------------
capture.kernel_packets    | RxPcapem41                | 2283902
capture.kernel_drops      | RxPcapem41                | 1717154
capture.kernel_ifdrops    | RxPcapem41                | 0
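
Worked out from the counters, the drop rate is indeed about the same in both
runs:

    workers: 2415508 / 3137698  ~= 0.77   (about 77% of kernel packets dropped)
    autofp:  1717154 / 2283902  ~= 0.75   (about 75% of kernel packets dropped)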


