[Discussion] Multi-tiered detection, and some feature suggestions

David J. Bianco david at vorant.com
Wed Oct 29 18:39:25 UTC 2008


In my organization, we're collecting lots of network forensic information to
support our intrusion analysis process.  In addition to IDS alerts, we
routinely collect network session information, full packet captures, network
asset info (OS and services), as well as a few other things.

At first, we really did just use these things to answer questions when
we were investigating IDS alerts, but over time we have turned these
additional sources of information into detection tools in their own right.

For example, our network session database is really good at detecting scans
and sweeps.  It can tell us when two hosts communicate that have never
communicated before, or when a box starts talking on a new port.  It can
even tell us when someone's using BitTorrent or similar P2P applications, or
when unusual amounts of data are being downloaded to or exfiltrated from our
network.  We look for these things by running reports, so even if our IDS
can't catch them in real time, we still have a good chance of detecting them
via offline analysis.
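
As a rough illustration, the "never communicated before / new port" check can
be sketched like this.  The session-record format here (simple tuples) is a
hypothetical simplification, not our actual session database schema:

```python
# Flag session keys seen today that have never been seen before.
# A "key" is a hypothetical (src_ip, dst_ip, dst_port) tuple.

def find_new_pairs(history, todays_sessions):
    """Return session keys present today but absent from history.

    history         -- set of (src_ip, dst_ip, dst_port) tuples seen previously
    todays_sessions -- iterable of (src_ip, dst_ip, dst_port) tuples
    """
    new = []
    for key in todays_sessions:
        if key not in history:
            new.append(key)
            history.add(key)   # remember it for the next report
    return new

history = {("10.0.0.5", "10.0.0.9", 22)}
today = [("10.0.0.5", "10.0.0.9", 22),    # seen before: ignored
         ("10.0.0.5", "10.0.0.9", 3389)]  # new port: flagged
print(find_new_pairs(history, today))     # -> [('10.0.0.5', '10.0.0.9', 3389)]
```

The real report obviously runs against a database rather than in-memory sets,
but the logic is the same: a set-difference between today's traffic and
everything seen before.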

Similarly, our pcap data is a really good source to turn to when we need to
extract copies of transferred files.  The thing I'm most excited about for
purposes of this project, though, is that it allows us to look back in time
and apply what we know *now* to what happened *then*.  For example, if I
hear of a new vulnerability, and I write a Snort rule to detect it, I often
go through and run that rule retroactively against the stored traffic for
the last few days or weeks.  This is very effective in closing the window
between vulnerability discovery and signature availability.
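
In miniature, the retroactive check looks like this.  We actually do it by
replaying Snort rules against saved pcaps; the archive layout and the
substring match below are simplified stand-ins for that:

```python
# Toy retroactive detection: when a new signature arrives, replay it
# against stored payloads from past days and report old traffic that
# matches the rule we only just learned to write.

def retro_match(signature, archived_payloads):
    """Return (timestamp, payload) records containing the new signature."""
    return [(ts, data) for ts, data in archived_payloads if signature in data]

archive = [
    ("2008-10-27", b"GET /index.html"),
    ("2008-10-28", b"GET /cgi-bin/exploit?cmd=id"),
]
hits = retro_match(b"/cgi-bin/exploit", archive)
print([ts for ts, _ in hits])   # -> ['2008-10-28']
```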

Given these, I would like to suggest that a design goal of this new system
we're working on would be to allow for both online & offline detection of
security events.  I would also like to suggest that as new signatures become
known, the system should be able to retroactively check them against things
which have gone before.

In fact, I would like to go a step further.  One thing that Sguil, Bro and
pretty much every other analysis tool I've ever used fails at is helping
analysts detect patterns that might link multiple attacks to the same source.
Way back in February 2007, I wrote a blog post about how I'd like to see
a Web 2.0-style tagging feature implemented in Sguil:

		http://blog.vorant.com/2007/02/ip-tagging.html

I would like to put that forward here for discussion, but I'd also like to
extend it to give analysts the capability to flag "indicators" that might
tie an attacker's various attempts together.  For example, two attacks
that originate from the same IP might reasonably be considered related,
even if they are separated by days, weeks or months.  If we find two pieces
of malware exfiltrating data to the same DNS name, it's a good bet they're
under the same control.  Email addresses, attachment names, URL patterns...
there are probably a lot of different types of indicators I'm not even
considering.

I'd like to see a system that could keep track of indicators, tie them to
specific "incidents", and then flag additional uses of those same indicators,
whether in future events or in historical ones, including cases where the
historical events weren't flagged as security alerts at the time.
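
The core of that idea is small enough to sketch.  The class and field names
below are hypothetical, just to show the shape of it: record indicators per
incident, then ask whether a new observation reuses one:

```python
# Minimal sketch of an indicator store: map each indicator (IP, DNS
# name, attachment name, ...) to the incidents it has appeared in, and
# flag any later event that reuses a known indicator.

from collections import defaultdict

class IndicatorStore:
    def __init__(self):
        self.incidents = defaultdict(set)   # indicator -> set of incident ids

    def tag(self, incident_id, indicator):
        """Associate an indicator with an incident."""
        self.incidents[indicator].add(incident_id)

    def check(self, indicator):
        """Return incidents already linked to this indicator, if any."""
        return sorted(self.incidents.get(indicator, ()))

store = IndicatorStore()
store.tag("INC-001", "198.51.100.7")      # attacker IP from one incident
store.tag("INC-002", "evil.example.com")  # C2 DNS name from another

# Months later, a new event reuses the same IP: the link surfaces.
print(store.check("198.51.100.7"))   # -> ['INC-001']
```

The interesting part isn't the lookup, which is trivial, but wiring it into
the event stream so the check happens automatically, both against new events
and against the historical record.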

Sound crazy?  Well, it's not.  We're doing this manually now, and only for
a few cases where we think it's worth the time.  However, the fact that we
*can* do it manually implies that we can probably do it automatically, and
in my opinion, that's when things really start to get interesting!

	David




