[Oisf-users] EXTERNAL: Inconsistent Alerting
Rasmor, Zachary R
zachary.r.rasmor at lmco.com
Mon Feb 22 17:07:05 UTC 2016
Yes, I have seen something similar with alerting when running against pcap files. I haven’t had a chance to dig further into it yet, but I flagged it as something to come back to.
In addition, I have also noticed this with luajit signatures. As a sanity check, I ran Suricata against a pcap with 1 signature. The alert fired a number of times. Then I modified the signature to call a luajit script that simply returned 1 (indicating a match). I received a fraction of the number of alerts (I did this a few weeks ago, so I don’t have the output handy, but I could recreate it if necessary).
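A minimal always-match script looks roughly like this (a sketch of a Suricata lua detection script; the filename and the sid in the example rule are made up):

```
-- always_match.lua: report a match on every invocation
function init(args)
    local needs = {}
    needs["payload"] = tostring(true)
    return needs
end

function match(args)
    return 1  -- 1 = match, 0 = no match
end
```

Hooked in with a rule along the lines of: alert tcp any any -> any any (msg:"TEST luajit always match"; luajit:always_match.lua; sid:9000001; rev:1;)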
My first thought is that this has to do with timing – the engine is shut down once the end of the pcap file is reached, before all pending alerts have been flushed. But, as I said, I haven’t had a chance to confirm this in the code.
________________________
Zach Rasmor
Email: zachary.r.rasmor at lmco.com
Office: 301.240.6116
From: Oisf-users [mailto:oisf-users-bounces at lists.openinfosecfoundation.org] On Behalf Of derek_smithg at yahoo.com
Sent: Monday, February 22, 2016 11:30 AM
To: oisf-users at lists.openinfosecfoundation.org
Subject: EXTERNAL: [Oisf-users] Inconsistent Alerting
Hello everyone,
I am new to the mailing list. Sorry for the long email; I just wanted to include enough information in case anyone has come across a similar problem.
I have been running Suricata against several pcaps with different yaml configurations and am seeing the total count of alerts change from one run to another, even between runs with the same yaml at different times. Has anyone come across anything similar before?
Below are details on the rules whose counts change and the yaml files I am using.
Suricata-2.0.11
CentOS 7.1.1503
Yaml Files:
- yaml1: a mostly out-of-the-box yaml, except that address-groups and other locally defined variables were moved to an include file.
- yaml2: changed max-pending-packets from 1024 to 645000, and detect-engine.profile from medium to high.
- threading1 and threading2: turned on cpu-affinity but set it to use [ 'all' ] CPUs, with detect-thread-ratio set to .5 and 1 respectively.
(all are only outputting eve.json)
I ran them against 3 pcaps of sizes roughly 100GB, 200GB, and 400GB, and tallied the alert counts, outputting any that were not the same across the board.
100 GB pcap
sid       yaml1   yaml2   th1     th2
2001805 : 139     137     137     137
12037   : 1       -       -       -
2101633 : 132     129     129     129
200 GB pcap
sid       yaml1   yaml2   th1     th2
2009375 : 91134   91127   91119   91122
2010935 : 657     657     662     657
400 GB pcap
sid       yaml1   yaml2   th1     th2
2001805 : 96      96      96      99
2010935 : 818     818     880     818
2010937 : 160     160     192     160
2101633 : 138     136     138     137
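For what it’s worth, the per-sid tallies were produced with a short script along these lines (a sketch; assumes the standard eve.json alert fields event_type and alert.signature_id, and the sample records below are made up):

```python
import json
from collections import Counter

def tally_alerts(lines):
    """Count alerts per signature id from eve.json lines."""
    counts = Counter()
    for line in lines:
        event = json.loads(line)
        if event.get("event_type") == "alert":
            counts[event["alert"]["signature_id"]] += 1
    return counts

# Hypothetical records mimicking eve.json alert output.
sample = [
    '{"event_type": "alert", "alert": {"signature_id": 2001805}}',
    '{"event_type": "alert", "alert": {"signature_id": 2001805}}',
    '{"event_type": "flow"}',
    '{"event_type": "alert", "alert": {"signature_id": 2101633}}',
]
print(tally_alerts(sample))  # -> Counter({2001805: 2, 2101633: 1})
```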
It is mostly the same rules misbehaving across the pcaps. They are listed below.
emerging-chat.rules:alert tcp $HOME_NET any <> $EXTERNAL_NET any (msg:"ET CHAT ICQ Message"; flow: established; content:"|2A02|"; depth: 2; content:"|000400060000|"; offset: 6; depth: 6; reference:url,doc.emergingthreats.net/2001805; classtype:policy-violation; sid:2001805; rev:5;)
content-replace.rules:alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"CONTENT-REPLACE AIM deny in-bound file transfer attempts"; flow:to_client,established; content:"*|02|"; depth:2; content:"|00 04 00 07|"; within:8; distance:4; content:"|09|F|13|CL|7F 11 D1 82 22|DEST|00|"; content:"DEST"; distance:-5; replace:"XXXX"; byte_test:2,=,0,-24,relative; classtype:policy-violation; sid:12037; rev:3;)
emerging-chat.rules:alert tcp $AIM_SERVERS any -> $HOME_NET any (msg:"GPL CHAT AIM receive message"; flow:to_client; content:"*|02|"; depth:2; content:"|00 04 00 07|"; depth:4; offset:6; classtype:policy-violation; sid:2101633; rev:7;)
emerging-chat.rules:alert tcp $HOME_NET any <> $EXTERNAL_NET any (msg:"ET CHAT General MSN Chat Activity"; flow: established; content:"Content-Type|3A|"; http_header; content:"application/x-msn-messenger"; http_header; reference:url,www.hypothetic.org/docs/msn/general/http_examples.php; reference:url,doc.emergingthreats.net/2009375; classtype:policy-violation; sid:2009375; rev:3;)
emerging-policy.rules:alert tcp $EXTERNAL_NET any -> $HOME_NET 1433 (msg:"ET POLICY Suspicious inbound to MSSQL port 1433"; flow:to_server; flags:S; threshold: type limit, count 5, seconds 60, track by_src; reference:url,doc.emergingthreats.net/2010935; classtype:bad-unknown; sid:2010935; rev:2;)
emerging-policy.rules:alert tcp $EXTERNAL_NET any -> $HOME_NET 3306 (msg:"ET POLICY Suspicious inbound to mySQL port 3306"; flow:to_server; flags:S; threshold: type limit, count 5, seconds 60, track by_src; reference:url,doc.emergingthreats.net/2010937; classtype:bad-unknown; sid:2010937; rev:2;)
This may be a different issue, but I have looked into 12037, which is very similar to 2101633 but with added replace and byte_test keywords, and I think it might be a false positive. After carving the IPs involved out of the pcap and running Suricata against that traffic alone, it hits that one alert about 50% of the time. I ran it once with alert-debug output, found the packet it supposedly alerted on, and cannot find a byte pattern that would match it.
Debug output regarding sid 12037:
STREAM DATA:
. . .
0030 02 00 01 00 05 00 04 47 BB 2B AC 00 0D 00 50 09 .......G .+....P.
0040 46 13 43 4C 7F 11 D1 82 22 44 45 53 54 00 00 09 F.CL.... "DEST...
. . .
0160 06 00 04 10 02 00 01 00 0D 00 50 09 46 13 43 4C ........ ..P.F.CL
0170 7F 11 D1 82 22 44 45 53 54 00 00 09 46 13 45 4C ...."DES T...F.EL
It is my understanding that byte_test is looking for 00 00 starting 24 bytes before the end of the “DEST” match (the 53 54), because of the content match and the “relative” keyword, but both occurrences show 00 04 there instead.
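That interpretation can be sanity-checked directly against the dumped bytes (a Python sketch; assumes byte_test’s negative relative offset counts back from the cursor left at the end of the last content match):

```python
# Sketch of byte_test:2,=,0,-24,relative against the first "DEST" hit in
# the stream data above.
payload = bytes.fromhex(
    "0200010005000447bb2bac000d005009"  # dump offset 0x0030
    "4613434c7f11d1822244455354000009"  # dump offset 0x0040
)

cursor = payload.index(b"DEST") + len(b"DEST")        # just past "DEST"
value = int.from_bytes(payload[cursor - 24:cursor - 22], "big")
print(hex(value))  # -> 0x4, so the "= 0" test should not match
```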
Please let me know if you have any insight as to what is going on. I would greatly appreciate it.
Thank you,
Derek