[Oisf-users] High CPU Load with Small Ruleset at 10Gbit/s
Eric Urban
eurban at umn.edu
Tue Aug 20 14:30:16 UTC 2019
For your question about the "high" profile setting versus custom: at
https://github.com/OISF/suricata/blob/8b87801b80f16d4b24c419221b49e43370ff6932/src/detect-engine.c#L2237
you can see that the toclient-groups and toserver-groups values are each set to
75 in this case. It is possible that the recommended value of 200 from the
docs would lower CPU usage even more, but "high" has worked well for us, so I
thought I would let you know that is what we do. Raising those values to 200
would likely increase rule load times even further, so there is no reason for
us to more than double them when our performance is already fine. I don't know
a great deal about the far-reaching effects of these settings, but my guess is
that the trick is finding the right balance between memory and CPU usage
without reaching the point where there is a lot of overhead from managing
large amounts of memory.
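
For reference, the two variants look roughly like this in suricata.yaml (just a
sketch of the relevant detect section, not our exact config, so adjust it to
your own environment):

  # what we run: the built-in "high" profile (75/75 groups internally),
  # with one MPM context per signature group
  detect:
    profile: high
    sgh-mpm-context: full

  # what the high-performance docs recommend instead: a custom profile
  # with both group values raised to 200
  detect:
    profile: custom
    custom-values:
      toclient-groups: 200
      toserver-groups: 200
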
--
Eric Urban
University Information Security | Office of Information Technology |
it.umn.edu
University of Minnesota | umn.edu
eurban at umn.edu
On Tue, Aug 20, 2019 at 12:56 AM Fabian Franz <fabfaeb at googlemail.com>
wrote:
> Hi Eric,
> Hi Peter,
>
> thanks for your feedback. It seems like I was able to resolve the issue
> with a combination of the steps you proposed:
>
> 1. I changed the detect profile from the "recommended custom settings"
> to high,
> 2. switched the sgh-mpm-context to full,
> 3. set the max-pending-packets value back to 1024 from 8192,
>    4. took a closer look at SEPTun and followed the cpu-affinity/core
>    isolation steps (see the configuration sketch after this list), and
> 5. switched up the testing setup a bit.
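>
> To make that concrete, the relevant suricata.yaml pieces for steps 1-4 now
> look roughly like this (a sketch only; the CPU lists are placeholders and
> depend on your NUMA layout and which cores you isolated):
>
>   max-pending-packets: 1024
>
>   detect:
>     profile: high
>     sgh-mpm-context: full
>
>   threading:
>     set-cpu-affinity: yes
>     cpu-affinity:
>       - management-cpu-set:
>           cpu: [ 0 ]
>       - worker-cpu-set:
>           cpu: [ "2-31" ]    # placeholder: the isolated worker cores
>           mode: "exclusive"
>           prio:
>             default: "high"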
>
> Now I am seeing a load of mostly 20-40%, with one or two cores going up to
> 60% at 9 Gbit/s of traffic, which seems fine to me.
> Could you elaborate on why the "high" detect profile should be preferred
> over the custom settings recommended in the docs (
> https://suricata.readthedocs.io/en/latest/performance/high-performance-config.html)
> and why the lower max-pending-packets could have helped?
> I still need to do some more testing on this and will come back to you if
> I have any new insights - but for now I think this is resolved.
>
> Once again, thanks a lot for your fast response!
>
> Best
> Fabian
>
> On Tue, Aug 20, 2019 at 1:04 AM Peter Manev <
> petermanev at gmail.com> wrote:
>
>>
>> On 19 Aug 2019, at 12:06, Eric Urban <eurban at umn.edu> wrote:
>>
>> We use Myricom cards with about 35K rules loaded. None of our cores run
>> near 100% load. In the last week, I saw one period of 6 Gbps of traffic on
>> one of our Suricata instances where one core hit 43% usage, but the other 8
>> were at about 12%.
>>
>> Have you looked at the Suricata Extreme Performance Tuning guide at
>> https://github.com/pevma/SEPTun? The cpu-affinity settings seem to be
>> covered more in depth there than at the link that you posted.
>>
>> Also, the section at
>> https://suricata.readthedocs.io/en/latest/performance/high-performance-config.html
>> could be of help. We don't use the custom settings recommended there, but we do
>> use "high" for the profile and "full" for the sgh-mpm-context. Note the
>> warning about significantly longer rule load times though.
>>
>>
>> +1 for “high” context.
>>
>> Fabian:
>> What is your max-pending-packets value?
>> Also, in some test/live setups it is common to see some CPUs busier than
>> others.
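>>
>> For reference, that is a single top-level setting in suricata.yaml; if it is
>> not set explicitly, the default of 1024 is used:
>>
>>   max-pending-packets: 1024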
>>
>>
>>
>> --
>> Eric Urban
>> University Information Security | Office of Information Technology |
>> it.umn.edu
>> University of Minnesota | umn.edu
>> eurban at umn.edu
>>
>>
>> On Fri, Aug 16, 2019 at 10:24 AM Fabian Franz <fabfaeb at googlemail.com>
>> wrote:
>>
>>> Hi Everyone,
>>>
>>> I am having a problem with my Suricata setup and hope that someone here
>>> has a hint for me:
>>> I run Suricata 4.1.4 together with a Myricom card on a server with 128 GB
>>> of RAM and two 16-core (+HT) Intel CPUs.
>>> The SNF settings are 30 rings and 32 GB/8 GB ring sizes.
>>>
>>> As long as I do not deploy any rules, Suricata runs smoothly with ~20%
>>> CPU load per (worker) core at 9-10 Gbit/s of network traffic. However, when I
>>> deploy even small rulesets (e.g. et-shellcode), the CPU load skyrockets to
>>> 100% on 3-6 cores, with the rest at around 50%. After a few moments, packets
>>> are dropped and the SNF drop ring full counter increases rapidly (still at
>>> 9-10 Gbit/s, as before). I use hyperscan as the mpm-algo and tried to follow
>>> the recommendations at
>>> https://home.regit.org/2012/07/suricata-to-10gbps-and-beyond/ .
>>> However, I was not able to follow the recommendations regarding IRQs,
>>> since those seemed pretty NIC-specific. Is that part also relevant for
>>> Myricom cards?
>>> Additionally, I obviously do not use AF_PACKET, but libpcap with 30
>>> threads.
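>>>
>>> For context, the capture side looks roughly like this (a sketch only;
>>> "snf0" is a placeholder for the SNF interface name, the ring count is
>>> exported through the SNF environment before starting Suricata, and the
>>> ring sizes are set via the corresponding SNF_* size variables, which I
>>> have left out here):
>>>
>>>   # shell, before starting Suricata
>>>   export SNF_NUM_RINGS=30
>>>
>>>   # suricata.yaml
>>>   mpm-algo: hs
>>>   pcap:
>>>     - interface: snf0
>>>       threads: 30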
>>>
>>> To test the bandwidth I used iperf with 30 parallel connections. Could
>>> this be the reason why only some of the cores are running at 100% load? If
>>> so, are there any other possibilities to simulate the bandwidth more
>>> realistically?
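>>>
>>> For reference, the test traffic was generated with something along these
>>> lines ("server" is a placeholder and the duration is arbitrary; -P sets
>>> the number of parallel TCP streams):
>>>
>>>   iperf -c server -P 30 -t 300
>>>
>>> Since this only produces 30 long-lived flows between a single pair of
>>> hosts, I suspect the distribution of those flows across the SNF rings
>>> (and therefore across the worker cores) could simply be uneven.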
>>>
>>> Are there any Myricom users here who could share performance hints for
>>> Myricom + Suricata? I feel that, hardware-wise, my setup should have no
>>> problem handling 10 Gbit/s with a decent ruleset, right?
>>>
>>> Thanks a lot
>>>
>>> Fabian
>>> _______________________________________________
>>> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
>>> Site: http://suricata-ids.org | Support:
>>> http://suricata-ids.org/support/
>>> List:
>>> https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>>
>>> Conference: https://suricon.net
>>> Trainings: https://suricata-ids.org/training/
>>