[Oisf-users] (no subject)

Travel Factory S.r.l. mc8647 at mclink.it
Fri Mar 28 14:22:24 UTC 2014



> Judging from the output - your drops are minimal, less than ~0.02%

Yes, I noticed that... ifconfig shows a drop ratio of about 0.04%. 
And as you can see, capture.kernel_drops doesn't increase.
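
For reference, this is how I am comparing the two counters (eth2 and 
the stats.log path are from my setup, adjust as needed):

# ifconfig eth2 | grep -i dropped
# grep capture.kernel_drops /var/log/suricata/stats.log | tail -n 3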

But my tests show that filestore doesn't work reliably for me.

I uploaded the startup messages to http://pastebin.com/CPye8Wjf
Please have a look at two points:
<Info> - AutoFP mode using default "Active Packets" flow load balancer
and
<Warning> - [ERRCODE: SC_ERR_AFP_READ(191)] - poll failed with retval 
0
I sometimes get zero, one, or several of these warnings...
(http://pastebin.com/HQUjSc8x for the message when stopping suricata)
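
Since the startup log shows AutoFP, one more thing I plan to try is 
the workers runmode, which is often recommended for af-packet 
capture (the config path and interface below are from my setup):

# suricata -c /etc/suricata/suricata.yaml --af-packet=eth2 --runmode workers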

Could you please have a look at the other logged counters, in case 
there is a hint of why a stored file ends up truncated? I don't know 
the meaning of some of the logged values, for example:
tcp.stream_depth_reached
detect.alert
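
If tcp.stream_depth_reached means what I suspect (reassembly hitting 
stream.reassembly.depth, after which an extracted file would be 
truncated), then maybe I need something like this in suricata.yaml 
(the values are just my guess; 0 meaning unlimited, as far as I 
understand):

stream:
  reassembly:
    depth: 0                  # no depth limit, for full file extraction

libhtp:
  default-config:
    request-body-limit: 0     # don't cap HTTP bodies either
    response-body-limit: 0

detect.alert, as far as I know, is just the number of alerts raised, 
so it is probably not related.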

Repeating my tests just now (40 wgets of the same file), I get 26 
files stored correctly; the rest are partial...
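
The test itself is just a loop along these lines (the URL is a 
placeholder for the real file I am fetching):

# for i in $(seq 1 40); do wget -q -O /tmp/out.$i http://myserver/testfile.bin; done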

# ll *.meta | wc -l
930
# grep -h STATE *.meta | sort | uniq -c
     365 STATE:             CLOSED
      46 STATE:             TRUNCATED
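
To check whether the CLOSED ones are really intact, I am thinking of 
checksumming them: every download is the same file, so complete 
copies should all share one hash (file.N is the default filestore 
naming here):

# ls file.* | grep -v '\.meta$' | xargs md5sum | awk '{print $1}' | sort | uniq -c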

Out of the 930 downloads that Suricata started to store, only 365 
were "correctly" closed (though I can't say they were all stored 
successfully) and 46 were flagged as truncated. What happened to the 
other 519 files?

Are these numbers/percentages expected?

> I do not think there is a Suricata problem.

I don't know what to think... at this point I believe the only way 
forward is to get a specialized capture card and verify that all 
packets are actually delivered to the server.
Last week I used tcpdump to save a pcap file and had Suricata read 
it offline: even then it could not create complete files.
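
One thing I want to rule out next time: if the pcap itself was 
captured with a limited snaplen, the extracted files would come out 
truncated no matter what Suricata does. So I will repeat it with 
something like this (interface and paths from my setup, -s 0 keeps 
full packets):

# tcpdump -i eth2 -s 0 -w /tmp/test.pcap
# suricata -c /etc/suricata/suricata.yaml -r /tmp/test.pcap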

What puzzles me most is that last week the server captured 
flawlessly for about two hours after I changed a setting with 
ethtool. Stopping and restarting Suricata changed something: I 
realized that the IRQ affinity was gone.
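
For the next run I will script both things so they survive a 
restart, along these lines (eth2 and the IRQ number are placeholders 
from my setup; offloads are the usual suspects, and irqbalance has 
to be stopped or it will undo the pinning):

# ethtool -K eth2 gro off lro off tso off gso off
# service irqbalance stop
# echo 2 > /proc/irq/59/smp_affinity    # pin the NIC IRQ to CPU1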

Thanks
Francesco


> Some of the drops might very well be "legal", tcp gaps and such.
>
> -- 
> Regards,
> Peter Manev



