[Oisf-users] Massive kernel drops with HTTP traffic

Konstantin Klinger konstantin.klinger at dcso.de
Tue Aug 21 07:20:09 UTC 2018


Good morning all,

I've run multiple tests with different settings; the results (packet
drops, in percent) for each run are in the attached table. We will
rewrite our filestore rules without the "filemagic" keyword and try
them in production. Furthermore, I will open a bug report.
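
For illustration, the direction of that rewrite would be roughly the
following (rule text, file type and sids are made up here, not our
production rules):

    # current style: libmagic output decides what gets stored
    alert http any any -> $HOME_NET any (msg:"FILESTORE executable via filemagic"; flow:established,to_client; filemagic:"executable"; filestore; sid:9000001; rev:1;)

    # planned style: match on the file extension instead, no libmagic call per extracted file
    alert http any any -> $HOME_NET any (msg:"FILESTORE exe download via fileext"; flow:established,to_client; fileext:"exe"; filestore; sid:9000002; rev:1;)

The fileext match only looks at the file name the peer supplied, so it
is weaker than a libmagic check, but it avoids running libmagic against
every extracted file.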

Thanks,

Konstantin

On 20.08.2018 15:43, Peter Manev wrote:
> 
> 
>> On 20 Aug 2018, at 07:39, Konstantin Klinger <konstantin.klinger at dcso.de> wrote:
>>
>>
>>
>>> On 20.08.2018 15:29, Peter Manev wrote:
>>>
>>>
>>>> On 20 Aug 2018, at 06:53, Konstantin Klinger <konstantin.klinger at dcso.de> wrote:
>>>>
>>>>
>>>>
>>>>> On 18.08.2018 15:57, Peter Manev wrote:
>>>>>
>>>>>
>>>>>> On 17 Aug 2018, at 07:35, Michael Stone <mstone at mathom.us> wrote:
>>>>>>
>>>>>> On Fri, Aug 17, 2018 at 03:24:31PM +0200, you wrote:
>>>>>>>> Do you have filemagic enabled?
>>>>>>>
>>>>>>> Yes. We currently use filestore v1. And we use the filemagic value in
>>>>>>> our rules for filestoring.
>>>>>>
>>>>>> Unless you have customized the magic file it is very likely that you won't hit your performance target this way. I'd suggest rules specific to what you're trying to save rather than relying on libmagic (which is very inefficient).
>>>>>>
>>>>>
>>>>>
>>>>> That should be easy to test, to confirm whether it is contributing to or creating the mess - Konstantin, is it possible to try it out and see?
>>>>>
>>>>>
>>>>
>>>> We made some test runs without filestore enabled, and after that with
>>>> only libmagic/filemagic disabled (but filestore on); that helped to
>>>> decrease the number of packet drops (~30% -> ~5% and ~50% -> ~10%).
>>>> Thank you. Our workaround will be to stop using filemagic rules.
>>>>
>>>
>>> If I remember correctly (please correct me if otherwise) you had a test run where you ran Suri with no rules and the drops were still bad (30%+) - what is different between that test and the tests you mentioned above? (Just having filestore switched to enabled in the yaml?)
>>
>> Yes, you are completely right. With filestore enabled in yaml and no
>> rules loaded we still had 30%+ packet drops.
> 
> If I'm not mistaken this is filestore v1, correct?
> Is this the case with filestore v2 as well?
> Can you please post a bug report describing all the findings, including the Suricata version you are using (latest git, if I'm not mistaken)?
> 
> 
>>
>>>
>>>
>>>> @Mike: Do you have further experience with workarounds that avoid libmagic?
>>>> @all: Is anyone using libmagic/filemagic on high-traffic sensors
>>>> (>5 Gbit/s) without performance issues? Is anyone already using
>>>> filestore v2 (we are still using v1) who can share experience with its
>>>> performance?
>>>>


-- 
Konstantin Klinger
Security Content Engineer
Threat Detection & Hunting (TDH)

+49 160 95476260
konstantin.klinger at dcso.de

dcso.de
blog.dcso.de

PGP: 180D C5B3 3C68 5C9A FB58 6F33 400E 5A35 3307 8D46
 
DCSO Deutsche Cyber-Sicherheitsorganisation GmbH • EUREF-Campus
22 • D-10829 Berlin
Geschäftsführer: Dr.-Ing. Gunnar Siebert, Sitz der Gesellschaft: Berlin,
Amtsgericht Charlottenburg HRB 172382
-------------- next part --------------
Suricata 4.1.0-dev (git20180620)

Runs made during working hours, 5-10 minutes each.

Filestore v1:

 - file-store:
      enabled: yes/no       # toggled per run, see table below
      log-dir: files    # directory to store the files
      force-magic: yes/no   # toggled per run, see table below
      force-hash: [md5,sha1,sha256]
      force-filestore: no # force storing of all files
      stream-depth: 0
      waldo: file.waldo # waldo file to store the file_id across runs
      include-pid: no # set to yes to include pid in file names


            |--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
            |filestore (v1) = on |filestore (v1) = on |filestore (v1) = off|filestore (v1) = off|filestore (v1) = on |filestore (v1) = on |
            |rules = loaded      |rules = no rules    |rules = loaded      |rules = no rules    |rules = loaded      |rules = no rules    |
            |force-magic = on    |force-magic = on    |force-magic = off   |force-magic = off   |force-magic = off   |force-magic = off   |
|-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
|Sensor A   |  ~34%              |  ~36%              |  ~6%               |  ~0%               |  ~20%              |  ~3%               |
|(~6 Gbit/s)|                    |                    |                    |                    |                    |                    |
|-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
|Sensor B   |  ~48%              |  ~47%              |  ~10%              |  ~0%               |  ~48%              |  ~0%               |
|(~7 Gbit/s)|                    |                    |                    |                    |                    |                    |
|-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|


Filestore v2:

 - file-store:
      version: 2
      enabled: yes/no
      dir: filestore
      max-open-files: 100
      xff:
        enabled: yes
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For


            |--------------------|--------------------|
            |filestore (v2) = on |filestore (v2) = on |
            |rules = loaded      |rules = no rules    |
|-----------|--------------------|--------------------|
|Sensor A   |  ~7%               |  ~0%               |
|(~6 Gbit/s)|                    |                    |
|-----------|--------------------|--------------------|
|Sensor B   |  ~33%              |  ~0%               |
|(~7 Gbit/s)|                    |                    |
|-----------|--------------------|--------------------|



