[Oisf-users] Receive rate dropped
Jose Vila
jovimon at gmail.com
Thu Dec 18 12:52:09 UTC 2014
Hello Giuseppe,
I've seen better behaviour, but I still see a slow decrease in the rate of
logged events. I was getting 800-1200 logs per second; after restarting
Suricata I got 1700-2500 logs per second for a few minutes, and then the rate
fell back to numbers similar to those before the restart.
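
For reference, this is how I'm counting the rate, following Peter Manev's
method (assuming pv is installed and eve.json is at its default path):

tail -f /var/log/suricata/eve.json | pv -l -i 10 > /dev/null
# -l counts lines rather than bytes; -i 10 reports every 10 seconds
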
On Wed, Dec 17, 2014 at 6:25 PM, Jose Vila <jovimon at gmail.com> wrote:
>
> Hello Giuseppe,
>
> Thank you very much for your help.
>
> My PF_RING info:
> # cat /proc/net/pf_ring/info
> PF_RING Version : 6.0.3 ($Revision: 8707$)
> Total rings : 0
>
> Standard (non DNA/ZC) Options
> Ring slots : 4096
> Slot version : 16
> Capture TX : Yes [RX+TX]
> IP Defragment : No
> Socket Mode : Standard
> Total plugins : 0
> Cluster Fragment Queue : 0
> Cluster Fragment Discard : 0
>
> I just made the changes you suggested, and ran into some issues:
>
> With 64k slots in pf_ring, Suricata doesn't start, and suricata.log shows
> errors about not finding pf_ring.
>
> I've done some testing and found 32767 to be an acceptable value for both
> the pf_ring ring slots and default-packet-size (larger values make my
> system unstable; I even had to reboot twice).
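>
> For reference, the reload sequence with the lower slot count looks like
> your earlier commands:
>
> rmmod pf_ring
> modprobe pf_ring transparent_mode=0 min_num_slots=32767
>
> and the matching value in suricata.yaml:
>
> default-packet-size: 32767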
>
> I'm leaving it as is and will see how it behaves tomorrow at peak time.
>
> Just a couple of questions:
>
> * Do both values (ring slots and packet size) have to be the same?
>
> * What is the maximum packet size pf_ring is going to pass to Suricata?
>
> * Why was I experiencing this issue? Were the 4k ring slots being flooded
> and their contents overwritten before they were processed, causing packet
> and alert loss even though no drops were detected?
>
> * Is there any document where you can "see" how these values impact RAM
> usage?
>
> Regards,
>
> Jose Vila.
>
>
> On Wed, Dec 17, 2014 at 2:33 PM, Giuseppe Longo <giuseppelng at gmail.com>
> wrote:
>>
>> Hi Jose,
>> You may need to tune your configuration.
>>
>> Let's start from PF_RING:
>> # cat /proc/net/pf_ring/info
>>
>> If the ring slot value is 4096, try to increase it:
>> rmmod pf_ring
>> modprobe pf_ring transparent_mode=0 min_num_slots=65534
>>
>> Then adjust the default-packet-size value in suricata.yaml:
>> default-packet-size: 65535
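>>
>> If you want the module options to persist across module reloads, the
>> usual convention is a modprobe.d entry (the exact path may vary per
>> distro), e.g.:
>>
>> # /etc/modprobe.d/pf_ring.conf
>> options pf_ring transparent_mode=0 min_num_slots=65534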
>>
>>
>> Cheers,
>> Giuseppe
>>
>> 2014-12-17 12:47 GMT+01:00 Jose Vila <jovimon at gmail.com>:
>> > Hello,
>> >
>> > I just updated to Suricata 2.0.3 and PF_RING 6.0.3 from SVN, and this
>> > behaviour still persists.
>> >
>> > Can someone help?
>> >
>> > Thanks.
>> >
>> > On Tue, Dec 16, 2014 at 10:28 AM, Jose Vila <jovimon at gmail.com> wrote:
>> >>
>> >> Hello list,
>> >>
>> >> I'm moving from Snort to Suricata, and I'm running into some problems.
>> >>
>> >> Before I had Snort 2.9.3.1 w/PF_RING 5.5.0, and had to pass the parameter
>> >> "--daq-var no-kernel-filters=1" to Snort because the packet receive rate
>> >> was slowly decreasing to the point of only 1/10 of the traffic being
>> >> processed by Snort.
>> >>
>> >> Now with Suricata 2.0.3 and PF_RING 5.5.0 I'm seeing the same behaviour
>> >> ...
>> >>
>> >> If I count lines of log written to eve.json as Peter Manev does (see [1]),
>> >> at Suricata's start I get 2K-5K logs per second, but after a couple of
>> >> days I only get 5-20 entries per second. Also, the drop counters in
>> >> stats.log went from less than 0.1% to around 10%.
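>> >>
>> >> For reference, I read the drop counters from stats.log like this
>> >> (assuming the default log path; the exact counter names can differ
>> >> per capture method):
>> >>
>> >> grep -E 'capture\.kernel_(packets|drops)' /var/log/suricata/stats.log | tail -n 4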
>> >>
>> >> Is there a way to pass this variable (no-kernel-filters) to PF_RING
>> >> through Suricata?
>> >>
>> >> Thanks,
>> >>
>> >> Jose Vila.
>> >>
>> >> [1]
>> >> http://pevma.blogspot.com.es/2014/05/logs-per-second-on-evejson-good-and-bad.html
>> >
>> >
>>
>