<div dir="ltr"><div><div><div><div><div><div><div>On the DNS front, if you haven't already, I would also suggest using this:<br><br><a href="https://github.com/gamelinux/passivedns">https://github.com/gamelinux/passivedns</a><br>
<span name="Edward Fjellskål"><a href="http://www.alienvault.com/open-threat-exchange/blog/identifying-suspicious-domains-using-dns-records" target="_blank">www.alienvault.com/open-threat-exchange/blog/identifying-suspicious-domains-using-dns-records</a></span><br>
<span name="Edward Fjellskål"><span name="Edward Fjellskål"><a href="http://www.net-security.org/article.php?id=1844&p=2" target="_blank">http://www.net-security.org/article.php?id=1844&p=2</a>.<br><br></span></span></div>
<span name="Edward Fjellskål"><span name="Edward Fjellskål">I use it, and I find it useful to be able to query it quickly via the web interface, or to run other queries and try out ideas against the MySQL database (newly seen suspicious domains, etc.). It also supports blacklists and regexes, and, as described in the last link above, I use it in combination with a SIEM to identify domain generation algorithms; this has proven reliable against Zeus and other DGA malware.<br>
<br></span></span></div><span name="Edward Fjellskål"><span name="Edward Fjellskål">While it may not be as detailed for tracing where queries come from (unless you can get a sensor in between your clients and your DNS servers to capture the source), I find it excellent for large-scale analysis and queries. In the web interface I have VirusTotal (which is excellent for passive DNS, as it links an address to malware served from it or talking to it), BFK, and my local PDNS as Snorby lookup sources, and I find the local one exceptionally useful: when you look up an address you can see all the domain names actually queried for it inside your network, and quickly determine other linked IPs, domain names, etc. seen in your organisation. The database also stays relatively small; in a large network with 30,000 users, nearly 2 months of data comes to about 500MB, so keeping large sets of historic data to identify later-confirmed malicious domains is practical. <br>
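As a rough illustration, a "newly seen domains" query against the passive DNS database could look like the sketch below. It uses an in-memory SQLite table with a simplified, hypothetical version of the passivedns schema (the real database tool stores more columns, e.g. RR type, TTL and counts), so treat the column names as assumptions:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical, simplified passive-DNS schema; column names are
# illustrative, not the real passivedns database layout.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pdns (
    query      TEXT,   -- domain name queried
    answer     TEXT,   -- resolved address
    first_seen TEXT,   -- first time this pair was observed
    last_seen  TEXT)""")

now = datetime(2013, 11, 26, 9, 0, 0)
rows = [
    ("www.example.com", "93.184.216.34", now - timedelta(days=40), now),
    ("qx7f3kzt.info",   "198.51.100.7",  now - timedelta(hours=2), now),
]
conn.executemany(
    "INSERT INTO pdns VALUES (?, ?, ?, ?)",
    [(q, a, f.isoformat(" "), l.isoformat(" ")) for q, a, f, l in rows])

# "Newly seen" here means first observed within the last 24 hours;
# ISO-formatted timestamps compare correctly as strings.
cutoff = (now - timedelta(hours=24)).isoformat(" ")
new_domains = [r[0] for r in conn.execute(
    "SELECT DISTINCT query FROM pdns WHERE first_seen > ?", (cutoff,))]
print(new_domains)  # → ['qx7f3kzt.info']
```

The same idea works directly in MySQL; the 24-hour window is just an example cutoff.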
<br></span></span></div><span name="Edward Fjellskål"><span name="Edward Fjellskål">So far I have used this data to look into reliably detecting:<br></span></span></div><span name="Edward Fjellskål"><span name="Edward Fjellskål">- DGA malware<br>
</span></span></div><span name="Edward Fjellskål"><span name="Edward Fjellskål">- Other malware domains<br></span></span></div><span name="Edward Fjellskål"><span name="Edward Fjellskål">- Malicious domains (exploit kits, fake AVs, etc.)<br>
</span></span></div><div><span name="Edward Fjellskål"><span name="Edward Fjellskål">- Now looking into fast-flux detection, and other ways of detecting malicious hosting infrastructure. <br></span></span></div><div><span name="Edward Fjellskål"><span name="Edward Fjellskål"><br>
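For the DGA case, one simple first-pass heuristic (a sketch of the kind of check a SIEM rule can approximate, not the exact logic I run) is to score the entropy of queried names, since algorithmically generated labels tend to look more random than human-chosen ones:

```python
import math
from collections import Counter

def label_entropy(domain: str) -> float:
    """Shannon entropy of the leftmost label, in bits per character."""
    label = domain.split(".")[0]  # crude: ignores public-suffix handling
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Generated names tend toward higher entropy than human-chosen ones;
# the threshold here is an arbitrary illustration, not a tuned value.
THRESHOLD = 3.5  # bits per character
for d in ["google.com", "x4kqzj7wpm2ghv9f.biz"]:
    print(d, round(label_entropy(d), 2), label_entropy(d) > THRESHOLD)
```

In practice you would combine this with newly-seen status and NXDOMAIN rates, since short or dictionary-based DGAs defeat a plain entropy threshold.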
</span></span></div><div><span name="Edward Fjellskål"><span name="Edward Fjellskål">Once I have some decent data reduction from the basic queries on the data already available, I am hoping to export that data from the database and run further automatic analysis on it, reducing the set with other features (such as geoIP, creation dates, etc.).<br>
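A hypothetical second pass might look like the sketch below. The field names, sample records, watchlist, and the 30-day cutoff are all made up for illustration; in practice the features would come from geoIP and WHOIS lookups joined onto the passive DNS output:

```python
from datetime import date

# Hypothetical enriched records; in reality these would be built by
# joining geoIP and WHOIS creation dates onto the passive DNS data.
candidates = [
    {"domain": "qx7f3kzt.info", "country": "XX", "created": date(2013, 11, 20)},
    {"domain": "example.org",   "country": "US", "created": date(1995, 8, 31)},
]

def suspicious(rec, today=date(2013, 11, 26)):
    # Very young registrations are a classic malicious-hosting signal;
    # the 30-day cutoff and country watchlist are illustrative only.
    young = (today - rec["created"]).days < 30
    odd_geo = rec["country"] in {"XX"}
    return young or odd_geo

reduced = [r["domain"] for r in candidates if suspicious(r)]
print(reduced)  # → ['qx7f3kzt.info']
```

Each extra feature prunes the candidate set further, so the analyst only reviews what survives every filter.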
<br></span></span></div><div><span name="Edward Fjellskål"><span name="Edward Fjellskål">Oh, and reading the academic papers from Damballa <a href="https://www.damballa.com/damballa-labs/publications.php">https://www.damballa.com/damballa-labs/publications.php</a> and OpenDNS/Umbrella Labs <a href="http://labs.umbrella.com/blog/">http://labs.umbrella.com/blog/</a> may give you other ideas for using your DNS data to detect badness, however you choose to collect it.<br>
<br></span></span></div><div><span name="Edward Fjellskål"><span name="Edward Fjellskål">Hope that helps you a bit,<br></span></span></div><div><span name="Edward Fjellskål"><span name="Edward Fjellskål">Kevin<br></span></span></div>
<span name="Edward Fjellskål"><span name="Edward Fjellskål"><br></span></span></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 26 November 2013 08:49, Christophe Vandeplas <span dir="ltr"><<a href="mailto:christophe@vandeplas.com" target="_blank">christophe@vandeplas.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi list,<br>
<br>
<br>
In the past I've been using another tool to do DNS logging, and now<br>
I'd like to use Suricata for this. The format of the file is<br>
completely different, and also a part of the interpretation (Suricata<br>
is a LOT more verbose and complete).<br>
<br>
Suricata's DNS logging uses multiple lines per DNS request (and<br>
response), so searching for things requires multiple greps and<br>
filtering out duplicate ids.<br>
<br>
I'm wondering how others use this DNS logging.<br>
All stories (on or off-list) and practical use-cases are welcome.<br>
I'll do my best to document these on the wiki so that others can<br>
benefit from this info.<br>
<br>
As far as I understand there seem to be plans to transform the logging<br>
into JSON; is there already an idea of when that's to be expected?<br>
<br>
<br>
Thanks<br>
Kind regards<br>
Christophe<br>
_______________________________________________<br>
Suricata IDS Users mailing list: <a href="mailto:oisf-users@openinfosecfoundation.org">oisf-users@openinfosecfoundation.org</a><br>
Site: <a href="http://suricata-ids.org" target="_blank">http://suricata-ids.org</a> | Support: <a href="http://suricata-ids.org/support/" target="_blank">http://suricata-ids.org/support/</a><br>
List: <a href="https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
OISF: <a href="http://www.openinfosecfoundation.org/" target="_blank">http://www.openinfosecfoundation.org/</a><br>
</blockquote></div><br></div>