[Oisf-users] the challenge of IDS rules and your own db of maliciousness

Christophe Vandeplas christophe at vandeplas.com
Wed Jul 30 08:11:38 EDT 2014


On Wed, Jul 30, 2014 at 1:03 PM, Victor Julien <lists at inliniac.net> wrote:
> On 07/30/2014 12:55 PM, Christophe Vandeplas wrote:
>> Thanks to MISP (notice the hidden advertisement) and the information
>> sharing (about APTs) that happens more and more we are now slowly
>> sitting on a bigger and bigger repository of IOCs related to APTs.
>> Many of these are shared in private communities, but also many come
>> from OSINT reports published by various companies. (APT1 as an
>> example).
>>
>> Sitting on data is not enough, so that's why MISP generates exports in
>> various formats: from text to CSV, but also Suricata (and Snort)
>> rulesets.
>>
>> Now the problem I'm having is that this generates just too many rules
>> for Suricata to handle. As an example: yesterday morning I had a rule
>> file that contained 200 000 NIDS rules. This takes a huge amount of
>> time to load into Suricata and is not very efficient.
>>
>> This huge number of rules is caused by the variety of places where the
>> data can be used. As an example, a hostname generates 3 rules: http, dns
>> tcp, dns udp. Thanks to the Suricata protocol keywords this is reduced
>> to 2 rules: http, dns (this reduced the rules to 140 000). But that is
>> still too much for a NIDS.
>>
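(To make that concrete: for one hostname, the three old-style rules versus
the two using the protocol keywords look roughly like this. The hostname and
sids are made up, and this is a sketch rather than literally what MISP
exports:

  alert udp $HOME_NET any -> any 53 (msg:"DNS query evil.example.com (udp)"; content:"|04|evil|07|example|03|com|00|"; nocase; sid:1000001; rev:1;)
  alert tcp $HOME_NET any -> any 53 (msg:"DNS query evil.example.com (tcp)"; content:"|04|evil|07|example|03|com|00|"; nocase; sid:1000002; rev:1;)
  alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP to evil.example.com"; content:"evil.example.com"; http_header; nocase; sid:1000003; rev:1;)

  alert dns $HOME_NET any -> any any (msg:"DNS query evil.example.com"; dns_query; content:"evil.example.com"; nocase; sid:1000004; rev:1;)
  alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP to evil.example.com"; content:"evil.example.com"; http_header; nocase; sid:1000005; rev:1;)

The dns app-layer keyword covers both udp and tcp, so a single dns rule
replaces the two port-53 rules.)
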
>> As for the content of these rules, a quick grep | sed | sort | count
>> magic gives me these counts:
>>   39 498 Domain
>>   88 198 Hostname
>>   11 573 IP
>>    2 816 URL
>>    (less) ... other
>>
>> There are multiple ways to reduce the number of rules loaded by the NIDS:
>> - expiration of IOCs: easy in theory, difficult in practice, but we're
>> working on this
>>
>> - splitting detection over multiple "IDS" sources: a log-based IDS
>> (email, proxy, dns), and loading into the NIDS only what the log/SIEM
>> does not see. (we're doing this, but very inefficiently). But then
>> again, you miss things that did not use your proxy/relay server.
>
> Splitting traffic may defeat Suricata's protocol detection, as you're
> probably splitting on port (and probably on IPs).

I'm not really sure you understood what I meant by 'splitting
detection'. I meant: counting on automated log analysis to detect the
evil stuff, so it has nothing to do with the network traffic that
Suricata sees. Everything that's covered by the proxies should in
theory not need to be re-applied at the NIDS/Suricata level, as in
theory no device can communicate directly with the internet without
passing through your proxy/mail relay/...  Well, that's the theory of
course.

>> - applying different concepts within the IDS: like the IP
>> reputation/md5 list that lets you load a file containing IOCs. However,
>> importing hostnames and domainnames in a similar way is not supported.
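
(For the IP part this already works: the iprep keyword plus a reputation
file let a single rule cover the whole list. Roughly, with a made-up
category name and scores:

  # reputation categories file: <id>,<short name>,<description>
  1,APT,IOCs coming out of MISP APT events

  # reputation file: <ip>,<category id>,<reputation score>
  203.0.113.45,1,100
  198.51.100.7,1,100

  alert ip $HOME_NET any -> any any (msg:"Outbound traffic to IOC-listed IP"; flow:to_server; iprep:dst,APT,>,50; sid:1000010; rev:1;)

The missing piece is something equivalent for hostnames and domains.)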
>
> Yeah, I would like to support this. In your case I think you have almost
> 100% exact matches. For this, hash lookups would be fine. The advanced
> rule logic isn't necessary. Then a single rule can be used, and using
> the json output we could add _what_ we matched on. Not supported
> currently, but I think this is the way forward.

That feature would be great. One small correction about the match: for
hostnames it's a 100% exact match. What we call domains covers both
foo.com and *.foo.com.

What I'm doing now is this (for dns, but the same applies to http):

- dns_query; content:"malicioushostname.com"; nocase; pcre:
"/(^|[^A-Za-z0-9-\.])malicioushostname\.com$/i";
- dns_query; content:"maliciousdomain.com"; nocase; pcre:
"/(^|[^A-Za-z0-9-])maliciousdomain\.com$/i";
(the second regex could also be (^|\s|\.)foo.com$, but I'm not sure
which one is best)
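
Written out as full rules that would be something along these lines
(placeholder sids, same content/pcre as above):

  alert dns any any -> any any (msg:"DNS query for malicioushostname.com"; dns_query; content:"malicioushostname.com"; nocase; pcre:"/(^|[^A-Za-z0-9-\.])malicioushostname\.com$/i"; sid:1000020; rev:1;)
  alert dns any any -> any any (msg:"DNS query for maliciousdomain.com or a subdomain"; dns_query; content:"maliciousdomain.com"; nocase; pcre:"/(^|[^A-Za-z0-9-])maliciousdomain\.com$/i"; sid:1000021; rev:1;)

The only difference is the character class: the hostname rule refuses a
leading dot (exact match), while the domain rule allows it, so subdomains
match too.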

I'll add a feature request on redmine then.

>> - bragging everywhere that you have a very valuable database, but not
>> using it in detection, which is kinda sad  ;-)
>
> Indeed.
>

