[Oisf-users] Suricata Logs

Brandon Lattin latt0050 at umn.edu
Mon Jul 27 21:04:23 UTC 2015


This completely depends on the type of data going over the wire, as well as
the total bandwidth.

If you're just running something like an ET Pro ruleset with full-packet eve
JSON alerts, and no HTTP/DNS/etc. logging, you're looking at a trivial amount
of data to index in Splunk. If you want the rest (HTTP/DNS/flows) as well, why
not use a separate Bro cluster?
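
For a sense of scale, here's a rough Python sketch of what "alerts only"
means in eve.json terms. The file path and field names are my assumptions
from a default eve.json layout, not anything specific to your sensors:

import json

# Rough sketch of the "alerts only" case: read eve.json and keep just
# the alert events. Path and field names are assumed defaults.
def alert_events(path="/var/log/suricata/eve.json"):
    with open(path) as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except ValueError:
                continue  # skip truncated/corrupt lines
            if event.get("event_type") == "alert":
                yield event

if __name__ == "__main__":
    for ev in alert_events():
        # A real setup would hand these to a forwarder/indexer; printing
        # the signature and source IP just shows the shape of the data.
        print(ev["alert"].get("signature"), ev.get("src_ip"))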

> But think of all the correlation, aggregation and data-enrichment that
> these solutions are providing... None?
>

I'm not quite sure what you mean by 'none'. Isn't that the whole point of
using something like ELK or Splunk: to have a flexible environment in which
to build that functionality?
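
As a trivial example of the kind of enrichment I mean: with 20 sensors at
different sites, you at least want every event tagged with which sensor and
site it came from before it hits the central index. A hypothetical sketch
(the "sensor"/"site" field names, SITE value and path are placeholders of
mine, not anything Suricata defines):

import json
import socket

# Toy illustration of per-sensor enrichment before central indexing:
# tag each eve.json event with the sensor hostname and a site label so
# 20 sensors stay distinguishable in one store.
SITE = "site-01"  # hypothetical per-sensor setting

def enriched_events(path="/var/log/suricata/eve.json"):
    host = socket.gethostname()
    with open(path) as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except ValueError:
                continue
            event["sensor"] = host
            event["site"] = SITE
            yield event

if __name__ == "__main__":
    for ev in enriched_events():
        print(json.dumps(ev))  # stdout here; a shipper would pick this up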

On Mon, Jul 27, 2015 at 3:49 PM, Andreas Moe <moe.andreas at gmail.com> wrote:

> Well, 20 Suricata engines (at different locations) will generate a lot of
> logs. Sure, ELK (Elasticsearch, Logstash, Kibana) is free, and Splunk can
> index a lot of stuff (as long as you pay). But I think the problem here
> (seeing that you are talking about so many log locations) is not which
> technical solution to use to gather the logs. But think of all the
> correlation, aggregation and data-enrichment that these solutions are
> providing... None? To further demonstrate my point: how are you going to
> handle thousands and tens of thousands of events (HTTP, DNS, SMTP, alerts)
> per second? What is relevant, and what is not?
>
> TL;DR version: From one to a handful of Suricata instances with low
> bandwidth, you can manage this through a simple ELK/Splunk setup. Beyond
> that, in my professional opinion, something more has to be done, e.g. an
> MSSP or an SSP (kind of new, but a SIEM Solutions Provider).
>
> 2015-07-27 22:39 GMT+02:00 Leonard Jacobs <ljacobs at netsecuris.com>:
>
>>
>> https://redmine.openinfosecfoundation.org/projects/suricata/wiki/_Logstash_Kibana_and_Suricata_JSON_output
>>
>>
>>
>> https://github.com/pevma/Suricata-Logstash-Templates
>>
>>
>>
>> Or, if you program and want a customized application, you can write
>> code to load fast.log into a database and then write a front end to the
>> database to display the data.
>>
>>
>>
>> *From:* oisf-users-bounces at lists.openinfosecfoundation.org [mailto:
>> oisf-users-bounces at lists.openinfosecfoundation.org] *On Behalf Of *Saxena,
>> Samiksha
>> *Sent:* Monday, July 27, 2015 12:54 PM
>> *To:* oisf-users
>> *Subject:* [Oisf-users] Suricata Logs
>>
>>
>>
>> Hi,
>>
>>
>>
>> I will have more than 20 Suricata engines, and each engine will generate
>> logs based on its rules. I want to collect all the logs from every engine
>> in one common place. How should I achieve this?
>>
>> Also, what is the value of the log files, and how often are the logs
>> generated?
>>
>>
>>
>>
>>
>> Thanks
>>
>> _______________________________________________
>> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
>> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
>> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>> Suricata User Conference November 4 & 5 in Barcelona:
>> http://oisfevents.net
>>
>
>
> _______________________________________________
> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> Suricata User Conference November 4 & 5 in Barcelona:
> http://oisfevents.net
>
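
Re: Leonard's suggestion above of loading fast.log into a database: here is
a rough, untested Python sketch of the loading half. The regex assumes the
usual one-line fast.log format, and the paths and table schema are
placeholders, not a recommendation:

import re
import sqlite3

# Rough sketch: parse fast.log alert lines and insert them into SQLite.
# Lines that don't match the expected format are silently skipped.
LINE = re.compile(
    r"^(?P<ts>\S+)\s+\[\*\*\]\s+\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+"
    r"(?P<msg>.*?)\s+\[\*\*\].*\{(?P<proto>\w+)\}\s+"
    r"(?P<src>\S+)\s+->\s+(?P<dst>\S+)"
)

def load_fast_log(path="/var/log/suricata/fast.log", db="alerts.db"):
    con = sqlite3.connect(db)
    con.execute(
        "CREATE TABLE IF NOT EXISTS alerts "
        "(ts TEXT, sid INTEGER, msg TEXT, proto TEXT, src TEXT, dst TEXT)"
    )
    with open(path) as fh:
        for line in fh:
            m = LINE.match(line)
            if not m:
                continue
            con.execute(
                "INSERT INTO alerts VALUES (?, ?, ?, ?, ?, ?)",
                (m["ts"], int(m["sid"]), m["msg"], m["proto"], m["src"], m["dst"]),
            )
    con.commit()
    con.close()

if __name__ == "__main__":
    load_fast_log()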



-- 
Brandon Lattin
Security Analyst
University of Minnesota - University Information Security
Office: 612-626-6672

