[Oisf-users] New Post by OISF Board Member Randy Caldejon
Cooper F. Nelson
cnelson at ucsd.edu
Fri Oct 2 23:12:57 UTC 2015
On 10/2/2015 12:54 PM, Andreas Moe wrote:
> All those suggestions seem good, but they are moving fairly close to
> what SIEM solutions and data analysis / aggregation / correlation
> systems should (in my mind) be handling. Threat intel such as domain /
> IP rep can change suddenly (say it's lunch time and you receive some new
> IOCs), and you don't want to restart Suricata (if the implementation
> implied that was needed), and you would (as you often do) want to look
> retrospectively for previous incidents involving previously unknown IOCs.
Suricata can replace a SIEM if you deploy it correctly. We don't even
have one (a SIEM) and I don't personally feel compelled to build/buy
one. Suricata + ETPRO together already provide more actionable alerts
than my team can triage effectively.
Additionally, a SIEM is not a good option if you operate like an ISP
(which we do) and do not have visibility into the majority (well over
99%) of devices on your network. Deep packet inspection is all we've got.
Re: dynamically loading new rules, suricata already supports that:
kill -USR2 $(pgrep Suricata-Main)
More specifically, what I'm looking for is a way to dynamically query an
external service for domain reputation data, cache the results, and
allow rules to be written against them. I particularly
want something that will alert on domains with no reputation data
(positive or negative).
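To make that concrete, here is a minimal sketch of the cached-lookup idea in Python. The class name, the `lookup` callable, and the TTL are all my own illustrative choices, not part of any Suricata API:

```python
import time

class ReputationCache:
    """Cache domain-reputation lookups so rules can match on them
    without hitting the external service for every flow."""

    def __init__(self, lookup, ttl=3600):
        self._lookup = lookup      # callable: domain -> score, or None if no data
        self._ttl = ttl            # seconds before a cached answer goes stale
        self._cache = {}           # domain -> (score, fetched_at)

    def score(self, domain):
        entry = self._cache.get(domain)
        if entry is None or time.time() - entry[1] > self._ttl:
            entry = (self._lookup(domain), time.time())
            self._cache[domain] = entry
        return entry[0]

    def is_unknown(self, domain):
        # The alert-worthy case: no reputation data at all, positive or negative.
        return self.score(domain) is None
```

A rule keyword could then fire whenever `is_unknown()` is true for an observed domain.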
> Keywords for time of day would be nice, but there would have to be a lot
> of time put into how this was implemented, seeing that many users are
> multinational and spread over many timezones, but using one central
> signature management solution. Maybe if you had "alert-on:!work-hours"
> and work-hours was defined as a variable (in the same way as network
> variables) in the configuration.
That's pretty much what I'm already doing, and it shouldn't be hard to implement.
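For illustration, an "alert-on:!work-hours" check could look roughly like this, with work-hours defined per sensor the way network variables are. The variable names and the timezone handling are my own assumptions:

```python
from datetime import datetime, time, timezone, timedelta

# Per-sensor configuration, analogous to network variables (names hypothetical).
WORK_HOURS = (time(8, 0), time(18, 0))     # local 08:00-18:00
SENSOR_TZ = timezone(timedelta(hours=-7))  # each sensor declares its own offset

def in_work_hours(ts, tz=SENSOR_TZ, hours=WORK_HOURS):
    """Return True if a UTC timestamp falls inside local work hours."""
    local = ts.astimezone(tz).time()
    return hours[0] <= local < hours[1]
```

A central signature repository could then ship one rule while each sensor evaluates it against its own timezone.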
> Tracking user agents would maybe be a bit difficult; they change so
> much and are so easily / rapidly modified, and length pattern matching
> hurts performance. PCRE matches against elements found in this
> list: http://www.useragentstring.com/pages/Browserlist/ would be very
> performance-draining on a high-speed link.
Bro can already do this (and it predates Snort).
And you absolutely don't want to implement this with PCRE. You would
either want to use Aho-Corasick to do a simple fixed-string match
against an array of user agents, or, preferably, keep a globally
available hash table of user agents and store only the keys for each
host. The latter is preferable because you could also detect new user
agents across all hosts, and it would use less memory.
Ideally for all this stuff suricata should have the ability to maintain
state across multiple sessions, so it's not spamming alerts every time
it starts up.
> Sorry for sounding so negative, but I really like where this talk is
> going: the potential future + innovative ideas =)
I've been in the business twenty years and I don't think anything
described here is particularly difficult to implement. In fact, Bro
already provides behavioral analysis; it just doesn't scale as easily as
Suricata and can't leverage layer-7 rulesets from vendors like ETPRO.
> Things I'm thinking of (and some of these are in play) are:
> - Integration with queue elements for output (say kafka) as well as
> input (maybe for domain/ip rep, as you mentioned, so that you can just
> "push" it out to the sensors)
Not sure what you mean here; the de facto standard way to do this now is
JSON output into ELK.
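For reference, a typical suricata.yaml fragment for the JSON (eve) output that an ELK pipeline (Logstash/Filebeat into Elasticsearch) would consume; the event types shown are common defaults, and paths should be adjusted per deployment:

```yaml
outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: eve.json
      types:
        - alert
        - dns
        - http
```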
> - Multi tenancy (vlan / interface, separate output logs)
Was going to disagree and suggest you would be better off doing this via
virtualization, but I can see some use cases where this would be preferable.
> - Periods of feature locks with focus on security testing and optimization
OISF already does this with their dev/prod release cycle. Personally
I've found the dev release has always performed significantly better, as
the high-level algorithmic optimizations show up there first.
> - Integration with databases for direct output of, say, passive DNS logging
See above, I think we've standardized on ELK for this.
> - Configuration profiling vs. observed stats reporting and Suricata
> performance (not really sure how, just an idea I had; just as we have
> packet profiling, say "user starts Suricata with config-profiling, lots
> of defragmented packets are found causing the memcap to be reached,
> plus similar events, packet drops and so on, and this would be reported
> back in a report").
They already have stats.log, which you can monitor with tail and fgrep.
That's how I've been tuning it.
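If you want to go beyond eyeballing tail/fgrep output, the three-column "counter | thread | value" layout of stats.log is simple enough to parse. A small sketch (the drop threshold and function names are mine; the counter names shown are typical Suricata counters):

```python
def parse_stats_line(line):
    """Parse one 'name | thread | value' stats.log line, or return None
    for headers, separators, and anything else that doesn't fit."""
    parts = [p.strip() for p in line.split("|")]
    if len(parts) != 3 or not parts[2].isdigit():
        return None
    name, thread, value = parts
    return name, thread, int(value)

def drop_counters(lines):
    """Collect every counter whose name mentions 'drop'."""
    out = {}
    for line in lines:
        entry = parse_stats_line(line)
        if entry and "drop" in entry[0]:
            out[entry[0]] = entry[2]
    return out
```

Feeding successive snapshots through this makes it easy to alert when a drop counter starts climbing, which is essentially what the tail/fgrep loop does by hand.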
Network Security Analyst
UCSD ACT Security Team
cnelson at ucsd.edu x41042