[Oisf-users] [Emerging-Sigs] OISF Brainstorming Session Summary / Phase Three Draft Dev Roadmap
Martin Holste
mcholste at gmail.com
Sat Sep 24 15:14:07 UTC 2011
A few follow-up comments of my own on the roadmap that came out of
the conference:
> Bro
> The Bro team was present and extremely helpful, thanks to all! We learned a lot about our similarities and differences, and have identified a number of places where code, event data, and even reputation data could be shared. We are resolved to pursue a much closer relationship with the Bro team and Bro itself, including exploring how Suricata and Bro can work together in realtime to share data and events. They are very complementary tools.
>
Indeed! As the only (remote) conference attendee that I know of who
runs Snort, Suricata, and Bro in production on a very large network, I
have a few thoughts about how Bro affects the roadmap, inline:
> Remote Attendance
Thanks again for that!
> SSL Analyzer: High Priority / Medium Resources required
> This module will be implemented in two phases. The first phase will do the following:
>
I strongly discourage including this feature in the immediate
roadmap, because it is already thoroughly covered by Bro, and because
the performance penalty for SSL processing is enormous in modern
enterprise networks. It would be a complete reinvention of a feature
that is very well provided by a different federally grant-funded IDS
project. I realize that Suricata, unlike Bro, would provide a way to
do inline filtering of SSL traffic, but the development effort it
would take to implement this feature properly will far outweigh the
benefits and come at great opportunity cost to other feature
development that is lower-hanging fruit.
If inline SSL blocking really is a priority, then I encourage
developing a full-featured ICAP proxy interface instead. That way
squid, or some other proxy, would intercept and handle the SSL
negotiation and defer to Suricata for allow/deny decisions. That
would allow sophisticated decisions to be made without the
performance penalty, and it would provide a universal interface for
other types of blocking decisions for basic HTTP as well.
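To make the pattern concrete, here is a rough Python sketch of the
defer-to-a-helper idea using squid's external ACL helper interface, a
much simpler cousin of ICAP: squid writes one request per line to the
helper's stdin and expects "OK" (allow) or "ERR" (deny) back. The
blocklist path and verdict logic are hypothetical stand-ins for
whatever Suricata-derived intelligence would actually be consulted:

    #!/usr/bin/env python
    # Minimal squid external ACL helper. Each stdin line carries the
    # fields named in the external_acl_type format (assumed here to
    # be just %DST, the destination host); each stdout line is a
    # verdict for that request.
    import sys

    # Hypothetical verdict source: a flat file of hosts to block.
    BLOCKED = set(line.strip() for line in open('/etc/squid/blocked_hosts'))

    def main():
        for line in sys.stdin:
            fields = line.split()
            host = fields[0] if fields else ''
            sys.stdout.write('ERR\n' if host in BLOCKED else 'OK\n')
            sys.stdout.flush()   # squid expects one reply per request

    if __name__ == '__main__':
        main()

On the squid side this gets wired up with an external_acl_type
directive plus an acl/http_access pair; a full ICAP service would
play the same role with a richer protocol.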
> Phase two will include the ability to decrypt sessions where keys are interceptable, and the ability to provide private keys for local SSL relationships for decryption and analysis. We will consider the use of commodity crypto acceleration cards for this phase, especially considering their reasonable cost.
>
This is well outside the current scope, and, as above, it is why I
recommend focusing on what Suricata does well: efficient pattern
matching and flow analysis for decision making. Let the many freely
available traffic interceptors handle this work. I would hate for
this feature to get in the way of improved performance. For
instance, while many solutions deal with SSL, no other solution uses
CUDA for pattern matching. I would rather see resources focused on
doing what no one else does than on reinventing features already
available to the community.
> IP and DNS Reputation Distribution: High Priority / High Resources Required
This is one of the features that is largely unique to Suricata (at
least for live traffic), so I really encourage this one.
Specifically, Suricata has the fastest IP matcher that I know of;
it's one of its greatest strengths, and reputation lets Suricata cash
in on it.
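For anyone curious, the core operation is just a longest-prefix match
of an address against scored CIDR blocks. Suricata's matcher is a
far faster C implementation; this Python sketch, with made-up feed
entries, only illustrates the semantics:

    import socket
    import struct

    def ip2int(addr):
        # dotted quad -> 32-bit int (IPv4 only, for brevity)
        return struct.unpack('!I', socket.inet_aton(addr))[0]

    # Hypothetical reputation feed: (CIDR block, score).
    FEED = [
        ('198.51.100.0/24', 90),
        ('203.0.113.0/24', 40),
        ('203.0.113.128/25', 95),  # more specific than the /24 above
    ]

    # Bucket networks by prefix length so lookups can mask-and-probe
    # from most to least specific, i.e. a longest-prefix match.
    tables = {}
    for cidr, score in FEED:
        net, plen = cidr.split('/')
        tables.setdefault(int(plen), {})[ip2int(net)] = score

    def reputation(addr):
        a = ip2int(addr)
        for plen in sorted(tables, reverse=True):  # most specific first
            mask = 0xffffffff ^ ((1 << (32 - plen)) - 1)
            if (a & mask) in tables[plen]:
                return tables[plen][a & mask]
        return None

    print(reputation('203.0.113.200'))  # 95 (the /25 wins over the /24)
    print(reputation('192.0.2.1'))      # None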
> We also need to look into using something like the Collective Intelligence Framework (CIF), and other similar projects, for transport of data.
More info on the excellent Collective Intelligence Framework can be
found here: http://code.google.com/p/collective-intelligence-framework/.
It will allow Suricata to query many blacklists with a single query,
and it will greatly augment any incident response team's capability,
with or without IDS integration.
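The exact REST interface differs between CIF versions, so treat the
endpoint, parameter names, and response shape below as placeholders,
but the win looks like this: one call that returns aggregated hits
from every feed the server ingests:

    import json
    import urllib.request

    CIF_SERVER = 'https://cif.example.com/api'  # hypothetical endpoint
    API_KEY = 'changeme'                        # hypothetical key

    def cif_query(indicator):
        url = '%s?apikey=%s&q=%s' % (CIF_SERVER, API_KEY, indicator)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # e.g. vet a suspicious domain seen in Suricata's DNS logs:
    for hit in cif_query('bad.example.net'):
        print(hit.get('feed'), hit.get('severity'))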
> DNS Preprocessor and Anomaly Detection: High Priority / Medium Resources
> Hosts with significantly more frequent lookups than peers in their network.
> Hosts with lookups resulting in frequent low-TTL responses.
> Domains that resolve to different IP addresses frequently.
> Possible analysis of variance in DNS queries for the same domain (potential covert channels).
> Very regularly timed queries for the same domain name.
>
I really think the group has overestimated the value of the above
"anomalies," which are already exhibited by nearly half of the hosts
contacted on a normal network. Lookups for Amazon, Google, and
Akamai, to name a few, routinely return low-TTL responses for domains
that change IPs frequently, with great variance in the queries
themselves. Likewise, plenty of legitimate software (Symantec, for
example) makes very regularly timed DNS queries and does odd things
with TXT records. In short, this isn't the low-hanging fruit you're
looking for in a Suricata feature. Compared with IP reputation, it
will require a lot of work to implement, hurt performance, and
provide little value.
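To illustrate the false-positive problem, here is a sketch of the
"very regularly timed queries" heuristic as a coefficient-of-variation
test on inter-query intervals; the timestamps are made up, but note
that a benign updater polling on a fixed timer scores exactly like a
beaconing bot:

    def mean(xs):
        return sum(xs) / float(len(xs))

    def stdev(xs):
        m = mean(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    def is_periodic(timestamps, cv_threshold=0.1):
        # Flag a name when its inter-query intervals barely vary.
        intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
        if len(intervals) < 2:
            return False
        return stdev(intervals) / mean(intervals) < cv_threshold

    # Hypothetical query times (seconds) for an AV update domain on a
    # ~5-minute timer, and for a human browsing a news site:
    av_updater = [0, 300, 600, 901, 1200, 1499]
    human = [0, 13, 340, 342, 1190, 2400]

    print(is_periodic(av_updater))  # True -- benign, but flagged
    print(is_periodic(human))       # False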
> GEO IP: High Priority / Low Resources
> This module will use a geo-ip database such as Maxmind to allow geolocation of IP addresses.
>
This would be awesome. While GeoIP is only about 80% accurate, it's
still helpful, and because MaxMind ships such an easy-to-use C
library, this can be implemented very easily without hurting
performance. This would also be a feature (mostly) unique to
Suricata, extending the community's IDS capabilities.
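To show how cheap the lookup is: with the country-level database it
is a single call. This sketch uses the pygeoip module, a Python port
of the same C API (GeoIP_country_code_by_addr() on the C side); the
database path is the conventional one and may differ per system:

    import pygeoip

    gi = pygeoip.GeoIP('/usr/share/GeoIP/GeoIP.dat')  # country-level DB

    def country(addr):
        # Two-letter ISO country code, or None if unmapped -- roughly
        # the "about 80% accurate" caveat above.
        return gi.country_code_by_addr(addr)

    print(country('8.8.8.8'))  # e.g. 'US'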
> Live Ruleset Swapping: High Priority / Medium Resources
This would be handy, but even a SIGHUP-triggered reload (without
live swapping) would be a nice addition.
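The standard flag-from-a-signal-handler pattern would do; this toy
Python sketch (not Suricata code, and the rule path is hypothetical)
shows the shape of it:

    import signal
    import time

    RULE_FILE = '/etc/ids/rules.txt'  # hypothetical path
    reload_requested = False
    rules = []

    def on_hup(signum, frame):
        # Keep the handler minimal: set a flag, reload in the main loop.
        global reload_requested
        reload_requested = True

    def load_rules():
        global rules, reload_requested
        with open(RULE_FILE) as f:
            rules = [line.strip() for line in f if line.strip()]
        reload_requested = False

    signal.signal(signal.SIGHUP, on_hup)
    load_rules()
    while True:              # stand-in for the packet loop
        if reload_requested:
            load_rules()     # `kill -HUP <pid>` lands here
        time.sleep(0.1)      # ...process packets against `rules`...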
Thanks again for having the remote attendance available and the great
discussion at the conference.