[Oisf-users] Getting 'nf_queue: full' messages - how to increase ?

Morgan Cox morgancoxuk at gmail.com
Fri Aug 27 08:35:20 UTC 2010


Thanks Victor.

I was going to mention that the proc entries you referenced don't exist on my
system.

I have changed max-pending-packets to 100, and that has doubled the queue
size.
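
In case it helps anyone searching the archives, the relevant part of my
suricata.yaml now looks roughly like this (a minimal sketch; only the
max-pending-packets line matters here):

    # suricata.yaml (excerpt)
    # Maximum number of packets Suricata keeps pending at any one time.
    # Per Victor's note below, Suricata derives its NFQ queue length from
    # this value at startup, so raising it also raises the
    # "setting queue length to ..." figure.
    max-pending-packets: 100

After a restart, the "setting queue length to ..." line in the startup
output (like the one Victor pasted below) should confirm the larger queue.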

Thank you




On 27 August 2010 07:55, Victor Julien <victor at inliniac.net> wrote:

> Pablo wrote:
> > I'm not sure, but maybe it's related to the value in
> > "/proc/sys/net/nf_conntrack_max" or
> > "/proc/sys/net/netfilter/nf_conntrack_buckets".
> > You can increase these values, for example:
> > echo "123456" > /proc/sys/net/nf_conntrack_max
> > If not, you can try to find where that limit of 200 comes from with:
> > find /proc/sys/net/ -name "*conntrack*" -exec echo {} \; -exec grep 200 {} \;
> > Anyway, 200 entries by default seems like a low value.
> >
> > You may also want to enable/increase the value of max-pending-packets in
> > suricata.yaml.
> > Let us know if you find a solution.
>
> Increasing the max-pending-packets setting will automagically increase
> the nfq buffer sizes Suricata sets, so that would probably be a good
> solution.
>
> Suricata gives the following info at startup about nfq buffer sizes:
>
> [4053] 27/8/2010 -- 08:54:42 - (source-nfq.c:267) <Info> (NFQInitThread) -- binding this thread to queue '0'
> [4053] 27/8/2010 -- 08:54:42 - (source-nfq.c:291) <Info> (NFQInitThread) -- setting queue length to 200
> [4053] 27/8/2010 -- 08:54:42 - (source-nfq.c:304) <Info> (NFQInitThread) -- setting nfnl bufsize to 300000
>
> I don't think the conntrack values are related to nf_queue.
>
> Cheers,
> Victor
>
> --
> ---------------------------------------------
> Victor Julien
> http://www.inliniac.net/
> PGP: http://www.inliniac.net/victorjulien.asc
> ---------------------------------------------
>
>