[Discussion] OSSIM and Sig Reliability
Martin Holste
mcholste at gmail.com
Sun Mar 1 18:32:54 UTC 2009
I disagree that this is out-of-scope. I think that this falls well within
the detection engine's purview, as evidenced by the fact that we've all been
using flowbits in Snort for a long time now. It's the detection engine's
job because we need to be able to write rules which have some correlative
properties, and as far as I know, we're writing the rules for the detection
engine. The extra properties that frameworks like OSSIM have allowed for
are intellectual property to be shared just like the Emerging Threats
detection rules. Sharing with the community that "pattern X is malicious"
is helpful, but it's even more helpful to share that "pattern X in
context Y is malicious."
In order to accomplish this, I think we want our engine to provide
contextual traffic immediately surrounding the incident. I find that the
Sguil-style full capture is by far the most effective, but I recognize that
it is impossible to do that in lots of environments for various technical
and legal reasons. Something like an improved version of Snort's "session"
and "tag" rule keywords would probably go a long way towards this
goal.
As an example, one of my favorite command-line switches for the venerable
grep program is "-C <number>", which prints n lines of context before and
after each match. This establishes the context of the match within the
document you are searching, so you have some idea of what it meant in that
particular part of the doc. This is exactly the same requirement we have
when grepping network traffic, but most current tools provide no way of
seeing the traffic immediately preceding the sought event. This is
obviously because you have to record everything since you don't know ahead
of time what you'll be looking for. However, if you only save a few seconds
or minutes prior to the current time, this is not as daunting as it sounds.
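To make that concrete, here is a minimal Python sketch of such a time-bounded buffer; the class and method names are invented for illustration, and there is no real capture API behind it:

```python
import time
from collections import deque

class RecentTrafficBuffer:
    """Keep only the last `window` seconds of packets -- the network
    analogue of grep's -C option (illustrative sketch, not a real API)."""

    def __init__(self, window=5.0):
        self.window = window
        self.packets = deque()  # (timestamp, packet) pairs, oldest first

    def add(self, packet, ts=None):
        ts = time.time() if ts is None else ts
        self.packets.append((ts, packet))
        # Evict anything older than the window so memory stays bounded.
        while self.packets and ts - self.packets[0][0] > self.window:
            self.packets.popleft()

    def context_before(self, ts):
        """Return the packets seen in the window preceding `ts`."""
        return [p for t, p in self.packets if ts - self.window <= t <= ts]
```

Because eviction happens on every insert, the buffer never holds more than a few seconds of traffic, which is what keeps this practical in environments where full capture is not.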
As I've previously mentioned, Bro and Timemachine do this already, and they
do it very effectively. Timemachine writes traffic to RAM/hard disk and Bro
can query it to get some context. When Bro queries Timemachine for traffic
still in RAM, it gets the results in milliseconds. The queries are built
right into the Bro signatures. Our engine could have rules that specify
prior events right in the rule, which opens the door to a much more flexible
system and rules that can specify a dynamic event hierarchy. For instance:
Look for event X
If found, look for event Y in the traffic up to five seconds before event
X.
If found, begin looking for event Z for the hosts referred to in event X for
the next five minutes.
A real-life example would be something like this:
Look for the possible Trojan check-in URI pattern "x=digits&y=digits"
If found, check if the source host downloaded any EXE or PDF files in the
last five seconds.
If found, check to see if the source host makes any POSTs or requests any
EXE's for the next ten seconds.
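As a rough sketch of how an engine might walk that chain (the event-tuple layout, regexes, and window lengths below are illustrative assumptions, not actual engine syntax):

```python
import re

# Hypothetical event records: (timestamp, src_host, description).
CHECKIN = re.compile(r"x=\d+&y=\d+")  # the Trojan check-in URI pattern

def correlate(events, prior_window=5.0, subsequent_window=10.0):
    """Three-step correlation sketched above: trigger match, look-back
    query, look-forward watch. Layout and patterns are illustrative."""
    alerts = []
    for ts, host, desc in events:
        if not CHECKIN.search(desc):
            continue
        # Step 2: did this host fetch an EXE or PDF in the prior window?
        prior = any(h == host and ts - prior_window <= t < ts
                    and re.search(r"\.(exe|pdf)\b", d, re.I)
                    for t, h, d in events)
        if not prior:
            continue
        # Step 3: does the host POST or request an EXE afterwards?
        subsequent = any(h == host and ts < t <= ts + subsequent_window
                         and ("POST" in d or d.lower().endswith(".exe"))
                         for t, h, d in events)
        if subsequent:
            alerts.append((ts, host))
    return alerts
```

The same structure generalizes to the abstract X/Y/Z hierarchy: one trigger match, one look-back query into buffered traffic, one look-forward watch on the hosts involved.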
Bro already allows for this, and if we're going to be creating a "next-gen"
detection engine, we need to at least match this capability. So from a
signature structure perspective, we're looking at a basic content-match sig
which contains an array of signature IDs that should match prior traffic
and an array of signature IDs that should match subsequent traffic. So,
it's a lot like Snort's flowbits, except you can specify boolean operators
on arrays of flowbits instead of just one flowbit. Depending on how we make
the signature syntax, maybe you could even include the other sigs as partial
sigs inlined into the main sig so that everything is contained in just one
signature. Something like:
content:"this"; content:"that"; prior: 5 minutes; content:"the other
thing"; subsequent: 30 seconds;
Or as references to other sigs:
content:"this"; sig_sid:1001; prior: 5 minutes; sig_sid:1002; subsequent: 30
seconds;
If the match for "that" is extremely common, then you wouldn't want to
search for it all the time; you'd only want to search for it retroactively,
once you know something interesting has occurred. Additionally, instead of
black-and-white matching, maybe these could auto-adjust the signature's
fidelity rating:
content:"this"; (sig_sid:1001; prior: 5 minutes; fidelity: +5) (sig_sid:1002;
subsequent: 30 seconds; fidelity: +10)
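To show how such fidelity modifiers might be evaluated, here is a hedged Python sketch; the dict layout and field names are invented for illustration, not a proposal for actual rule syntax:

```python
# Hypothetical in-memory form of the rule above: a base signature plus
# context clauses that raise fidelity when their referenced sig matches.
rule = {
    "sid": 2000,
    "base_fidelity": 50,
    "context": [
        {"sig_sid": 1001, "direction": "prior", "window_s": 300, "fidelity": 5},
        {"sig_sid": 1002, "direction": "subsequent", "window_s": 30, "fidelity": 10},
    ],
}

def adjusted_fidelity(rule, matched_sids):
    """Add each clause's fidelity bonus if its context signature matched
    within its window (window checking elided in this sketch)."""
    score = rule["base_fidelity"]
    for clause in rule["context"]:
        if clause["sig_sid"] in matched_sids:
            score += clause["fidelity"]
    return score
```

An alert that fires alone keeps its base fidelity; one that fires with both context signatures is reported at a visibly higher confidence, which is exactly the property a community can tune collaboratively.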
I think that it's important to put the fidelity modifiers right into the
signature instead of forcing everyone to figure that out on their own. If
you leave that information out of the signature, then it's much harder for
the community to contribute improvements to those properties. Naturally,
there will be org-specific modifications necessary like any rule, but if the
signature creator has the chance to say up front that the signature is a
better match in a given context, I think that's a major leap forward.
--Martin
On Sun, Mar 1, 2009 at 9:43 AM, Matt Jonkman <jonkman at jonkmans.com> wrote:
> I definitely like this too. But it's way into post processing. Out of
> our immediate scope.
>
> But what information could the engine provide to help the event manager
> make these decisions? Surely there's something it could help with?
>
> Maybe on the behavioral note the engine could start full captures or
> something when suspicious things happen so the analyst would immediately
> have more context?
>
> Matt
>
>
> > First of Kevin's ideas:
> >
> > Ossim (http://www.ossim.net/) has an interesting use of the reliability
> > of the sig, the priority of the host and some other things to assign a
> > risk to the attack. Using a similar system, individual signatures can be
> > given a reliability, which could mean sure-fire attacks are flagged
> > immediately while unreliable signatures are not flagged
> > until other factors are met. For instance, under OSSIM you can basically
> > say (in an XML directive): if these Snort SIDs appear, it is a
> > reliability of 3; if these other Snort sigs appear, +2; if it is
> > persistent (for a set time), +1; if a web page error message appears,
> > +1; and so on. Using such a system, false positives can automatically
> > be lowered, while more reliable attacks against priority resources,
> > and the events related to that attack, are made available to the
> > analyst (being able to define the priority of an asset such as a
> > server farm in comparison to the secretary's desktop would be useful).
> > Also, if the attack was blocked by an IPS, or even by a firewall if
> > logs are available showing that the attack was mitigated, the risk
> > level can be reduced.
> >
> >
>
> --
> --------------------------------------------
> Matthew Jonkman
> Emerging Threats
> Phone 765-429-0398
> Fax 312-264-0205
> http://www.emergingthreats.net
> --------------------------------------------
>
> PGP: http://www.jonkmans.com/mattjonkman.asc
>
>
> _______________________________________________
> Discussion mailing list
> Discussion at openinfosecfoundation.org
> http://lists.openinfosecfoundation.org/mailman/listinfo/discussion
>