[Oisf-users] Suricata goes wild with SURICATA STREAM alerts
Peter Manev
petermanev at gmail.com
Sun Jul 24 00:41:38 UTC 2016
On Tue, Jul 12, 2016 at 8:46 PM, Marius <wishinet at gmail.com> wrote:
> Hi,
>
> I have capture problems with a Suricata 3.0.1. I'd appreciate some ideas on
> this.
>
> I suspect it has to do with my Suricata configuration. The traffic is sent
> to the sensor via an f5 clone pool.
> ( https://support.f5.com/kb/en-us/solutions/public/8000/500/sol8573.html ) .
>
> * The traffic is copied into an IDS VLan and received on an interface. The
> MAC address is rewritten.
> A packet looks like this:
> https://drive.google.com/file/d/0BwyhoK4VyctFWWN6LVlTdjdIT00/view
>
> * With the clone pools we can get SSL offloaded traffic from the LBs. Suri
> doesn't do SSL decryption.
>
> On the sensor my interface config is:
> enp17s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
> inet 192.168.99.94 netmask 255.255.255.0 broadcast 192.168.99.255
>
> * The interface is not set in promiscuous mode, because it receives the
> traffic directly via the MAC.
>
> The rules, which indicate an error, are mostly stream engine related:
> SURICATA STREAM 3way handshake with ack in wrong dir [Classification:
> (null)]
> SURICATA STREAM ESTABLISHED packet out of window
> SURICATA STREAM ESTABLISHED invalid ack
> SURICATA STREAM Packet with invalid ack
> SURICATA STREAM FIN invalid ack
>
> * these alerts go wild
> * I also get valid alerts for TOR IPs and some XSS. However that is a
> fraction.
Some suggestions below:
During startup (suricata.log) there seems to be an error -
12/7/2016 -- 21:39:26 - <Error> - [ERRCODE: SC_ERR_PCRE_MATCH(2)] -
pcre_exec parse error, ret -1, string , type threshold, ttack by_src,
count 5, seconds 60
That would need some investigation on the side of the loaded rules / threshold configuration.
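For reference, the parse error above looks like a misspelling of "track" as "ttack" in a threshold entry. A valid threshold.config line has this shape (the gen_id/sig_id values below are placeholders, not taken from the post):

```
# threshold.config - corrected form of the entry the parser rejected;
# sig_id 2100498 is only a placeholder, substitute the real signature id
threshold gen_id 1, sig_id 2100498, type threshold, track by_src, count 5, seconds 60
```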
>
> * the stats.log:
> ------------------------------------------------------------------------------------
> Date: 7/12/2016 -- 19:22:01 (uptime: 0d, 00h 26m 35s)
> ------------------------------------------------------------------------------------
> Counter                  | TM Name | Value
> ------------------------------------------------------------------------------------
> capture.kernel_packets   | Total   | 49950854
> capture.kernel_drops     | Total   | 38149996
> decoder.pkts             | Total   | 11664420
> decoder.bytes            | Total   | 5930632578
> decoder.ipv4             | Total   | 11664416
> decoder.ethernet         | Total   | 11664420
> decoder.tcp              | Total   | 11664416
> decoder.avg_pkt_size     | Total   | 508
> decoder.max_pkt_size     | Total   | 1566
> tcp.sessions             | Total   | 568197
> tcp.pseudo               | Total   | 29014
> tcp.syn                  | Total   | 593612
> tcp.synack               | Total   | 522806
> tcp.rst                  | Total   | 9906
> tcp.segment_memcap_drop  | Total   | 36066
> tcp.stream_depth_reached | Total   | 24
> tcp.reassembly_gap       | Total   | 159155
> detect.alert             | Total   | 2041259
> flow_mgr.closed_pruned   | Total   | 104504
> flow_mgr.new_pruned      | Total   | 773441
> flow.spare               | Total   | 20560
> flow.tcp_reuse           | Total   | 2032
> tcp.memuse               | Total   | 52727808
> tcp.reassembly_memuse    | Total   | 2147483622
> http.memuse              | Total   | 222223958
> flow.memuse              | Total   | 96075664
>
>
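Two things stand out in the stats above: capture.kernel_drops is roughly three quarters of capture.kernel_packets, and tcp.reassembly_memuse sits essentially at the 2048mb reassembly memcap (2147483648 bytes), consistent with the non-zero tcp.segment_memcap_drop counter. Drops and memcap-dropped segments like these would readily produce exactly this kind of gap / out-of-window / invalid-ack stream alert. A quick sketch of the drop-rate arithmetic, using the figures from the stats.log:

```shell
# Drop ratio from the stats.log above: capture.kernel_drops / capture.kernel_packets
awk 'BEGIN { drops = 38149996; pkts = 49950854; printf "%.1f%% dropped\n", 100 * drops / pkts }'
# prints "76.4% dropped"
```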
> * My suspicion is that my config has a problem, because Suricata does not
> utilize much memory or CPU. The machine is almost idle.
> * Peak is 20 MB per second - nothing extraordinary here
>
> I use af-packet in Suri:
>
> af-packet:
> - interface: enp17s0f1
> threads: 1
> cluster-id: 99
> cluster-type: cluster_flow
> defrag: yes
> # 12 GB, machine has 32 GB
> buffer-size: 12884901888
Use ring-size (not buffer-size). Note that ring-size is a per-thread packet
count, while buffer-size is a socket buffer in bytes, so a value of
12884901888 does not do what the comment suggests.
What is the reason for using only one thread for af-packet?
> disable-promisc: yes
> use-mmap: yes
> checksum-checks: auto
>
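As a sketch, the af-packet section with those suggestions applied could look like this (the thread count and ring size below are illustrative starting points, not tuned values):

```yaml
af-packet:
  - interface: enp17s0f1
    # more capture threads; "auto" lets Suricata match the CPU count
    threads: 4
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    # ring-size is a number of packets per thread, not a byte count
    ring-size: 200000
    disable-promisc: yes
    use-mmap: yes
    checksum-checks: auto
```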
> defrag:
> max-frags: 65535
> prealloc: yes
> timeout: 120
>
> # had no effect
> vlan:
> use-for-tracking: false
>
Do you have VLANs in the traffic? (Sorry if I asked twice.)
> stream:
> memcap: 4096mb
> # had no effect
> checksum-validation: yes # reject wrong csums
For a test, try with:
checksum-validation: no
and see whether that gives the same result or not.
> inline: no # no inline mode
> reassembly:
> memcap: 2048mb
> depth: 1mb # reassemble 1mb into a stream
> toserver-chunk-size: 2560
> toclient-chunk-size: 2560
> # I am unsure about these
> # midstream: true
> # async-oneside: true
> # also no effect:
> max-synack-queued: 10
>
> * Suricata produces PCAPs and logs, and the engine is stable. But somehow the
> capture (and matching) don't work properly.
> I'm not sure where to look next.
>
>
Try capturing the traffic with tcpdump and reviewing it in Wireshark as
well - see if anything unusual stands out.
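For example (interface name taken from the config above; the snap length, packet count, and output path are arbitrary illustrative values):

```shell
# Capture a sample from the monitored interface for offline review in Wireshark
# (requires root; -s 0 captures full frames, -c limits the sample size)
tcpdump -i enp17s0f1 -s 0 -c 100000 -w /tmp/sensor-sample.pcap
```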
--
Regards,
Peter Manev