[Oisf-users] (no subject)

X.qing xqing.summer at gmail.com
Thu Jun 12 05:13:51 UTC 2014


Yes, I have read the article Christophe recommended. All my threads are
in use and every CPU core is receiving interrupts from the network card,
yet packets still drop heavily, so it seems my problem is not caused by a
lack of NIC queues. In any case, thank you very much, Christophe.

What can be inferred from this record?
11/6/2014 -- 16:58:29 - <Info> - Flow emergency mode over, back to
normal... unsetting FLOW_EMERGENCY bit (ts.tv_sec: 1402477082,
ts.tv_usec:696562) flow_spare_q status(): 70% flows at the queue
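(For context: that message means Suricata had earlier exhausted its
preallocated flow pool, entered emergency mode, and left it once enough
flows were freed back to the spare queue. The relevant knobs live in the
flow: section of suricata.yaml; a sketch with illustrative values, not a
recommendation:)

```yaml
flow:
  memcap: 64mb            # total memory budget for flow tracking
  hash-size: 65536
  prealloc: 10000         # flows preallocated at startup
  emergency-recovery: 30  # % of prealloc'd flows that must be free to leave emergency mode
```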

I had not disabled irqbalance before. I have now disabled it and ran
Suricata for around 50 minutes this morning. Here are the latest stats.log
and suricata.log:

https://drive.google.com/file/d/0B6V3lnZlrEKPM3JSYXpFZU5sTkE/edit?usp=sharing
https://drive.google.com/file/d/0B6V3lnZlrEKPVDBRclBrZHB4VkU/edit?usp=sharing

Thanks again.
Best wishes.



2014-06-12 2:25 GMT+08:00 Peter Manev <petermanev at gmail.com>:

> On Wed, Jun 11, 2014 at 3:32 PM, X.qing <xqing.summer at gmail.com> wrote:
> >
> > Is there anything wrong with my suricata.log?
> > I forgot to mention that this suricata.log was produced after I changed
> cluster_flow to cluster_cpu and 20 threads to 16 threads, because I found
> that this did not improve performance even though more hardware resources
> were used; this differs from the yaml file I sent before.
> > The rate of my drops is between 50%-60% under 2-4 Gbps of traffic.
> >
> > I have been really confused these days. I would greatly appreciate it if
> you could offer any suggestions.
> >
> > thank you all. (ಥ_ಥ)
>
> Did you explore any of the suggestions from Christophe?
> Have you disabled irqbalance?
>
> Besides the drops, I also see this in your suricata.log -
> 11/6/2014 -- 16:58:29 - <Info> - Flow emergency mode over, back to
> normal... unsetting FLOW_EMERGENCY bit (ts.tv_sec: 1402477082,
> ts.tv_usec:696562) flow_spare_q status(): 70% flows at the queue
>
> Can you share your stats.log? The last entries (last update/write in
> the log) - I assume it will be very long, so please use pastebin or
> something similar.
>
> thanks
>
> >
> >
> >
> >
> > 2014-06-11 17:23 GMT+08:00 X.qing <xqing.summer at gmail.com>:
> >
> >>
> >>
> >> Of course~ suricata.log is in the attachment.
> >>
> >> (It is very nice of you.( ͡° ͜ʖ ͡°) )
> >>
> >>
> >> 2014-06-11 16:33 GMT+08:00 Peter Manev <petermanev at gmail.com>:
> >>
> >>> On Sun, Jun 8, 2014 at 11:01 AM, Christophe Vandeplas
> >>> <christophe at vandeplas.com> wrote:
> >>> > Hi,
> >>> >
> >>> >
> >>> > What kind of drop do you have?
> >>> > - capture.kernel_drops
> >>> > - tcp.segment_memcap_drop
> >>> > - tcp.ssn_memcap_drop
> >>> >
> >>> > Lower the number of threads in the af-packet section to the number of
> >>> > cores your system has (cat /proc/cpuinfo | fgrep processor | wc -l).
> >>> >
> >>> > Run Suricata with no rules and tweak the configuration; you should
> >>> > have (almost) no packet drops before you activate rules.
> >>> >
> >>> > After having made changes in the yaml configuration file I usually:
> >>> > - stop suricata
> >>> > - empty the logfiles
> >>> > - start suricata
> >>> > This way there's no risk of looking at older logs and misinterpreting
> >>> > configuration changes.
> >>> >
> >>> >
> >>> > If possible, link your stats.log to a monitoring tool to create
> >>> > graphs. This way you can correlate packet drops by Suricata with
> >>> > other events on the system. I've written an article about this:
> >>> >
> http://christophe.vandeplas.com/2013/11/suricata-monitoring-with-zabbix-or-other.html
> >>> >  But also other scripts exist.
> >>> > Make sure you edit the suricata_stats.py script with the number of
> >>> > threads configured in suricata.yaml
> >>> >
> >>> >
> >>> > If your drops are capture.kernel_drops, then :
> >>> > Have you read this article?
> >>> >
> http://christophe.vandeplas.com/2013/11/suricata-capturekerneldrops-caused-by.html
> >>> > Please do the first part "Confirmation of the problem" and see if you
> >>> > also have the problem caused by the lack of NIC queues.
> >>> > In a few words:
> >>> > - start suricata
> >>> > - as root, run "top -H" and check how many AFPacketethXX threads are
> >>> > generating load.
> >>> > - if it's only one thread, then the problem has been pinpointed.
> >>> > However, working with cluster_flow should solve this problem. Make
> >>> > sure you read the rest of the article then.
> >>> >
> >>> >
> >>> > Kind regards
> >>> > Christophe
> >>> >
> >>> >
> >>>
> >>>
> >>> Can you share your suricata.log as well, please?
> >>> What is the output of "ethtool -k your_interface"?
> >>
> >>
> >>
> >
>
>
>
> --
> Regards,
> Peter Manev
>

