<br><br><div class="gmail_quote">On Thu, Dec 6, 2012 at 1:18 PM, Christophe Vandeplas <span dir="ltr"><<a href="mailto:christophe@vandeplas.com" target="_blank">christophe@vandeplas.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5">On Thu, Dec 6, 2012 at 12:58 PM, Peter Manev <<a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a>> wrote:<br>
><br>
><br>
> On Thu, Dec 6, 2012 at 12:26 PM, Christophe Vandeplas<br>
> <<a href="mailto:christophe@vandeplas.com">christophe@vandeplas.com</a>> wrote:<br>
>><br>
>> Trying to reply to all the questions, including Anoop's.<br>
>><br>
>> On Thu, Dec 6, 2012 at 11:55 AM, Peter Manev <<a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a>> wrote:<br>
>> > Hi Christophe,<br>
>> ><br>
>> > sorry - i missed the info from you.<br>
>> > Ok HW is definitely enough for that traffic.<br>
>> ><br>
>> > Do you use af_packet?<br>
>><br>
>> no, I'll activate it on this IDS by using the eth2 interface only.<br>
>> Fortunately that's an IDS where the bond0 was not really necessary,<br>
>> but we prefer to keep every IDS as identical as possible. I'll have to<br>
>> dig into the AF_PACKET documentation to understand how I should<br>
>> configure it to receive on two physical interfaces.<br>
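For what it's worth, a minimal sketch of receiving on two physical interfaces with AF_PACKET: each interface gets its own entry in the af-packet section of suricata.yaml, with its own cluster-id. Note that eth3, the thread counts, and the cluster-id values here are illustrative assumptions, not taken from this thread.

```yaml
# Hypothetical sketch: one af-packet entry per physical interface.
# eth3, the thread counts and the cluster-ids are illustrative only.
af-packet:
  - interface: eth2
    threads: 4
    cluster-id: 99          # must differ per interface
    cluster-type: cluster_flow
    defrag: yes
  - interface: eth3
    threads: 4
    cluster-id: 98
    cluster-type: cluster_flow
    defrag: yes
```

If I remember correctly, starting Suricata with --af-packet (no interface argument) then captures on every interface listed in the yaml.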
>><br>
>> > Is Suricata running on all 8 cores?<br>
>><br>
>> yep, on every machine it uses CPU from all cores.<br>
>><br>
>> > bond0 interface - is that bridged by any chance?<br>
>><br>
>> nope, that is/was not bridged. Since I just switched to direct interface<br>
>> usage with AF_PACKET on eth2, this is not relevant anymore.<br>
>><br>
>> /etc/network/interfaces is<br>
>> auto eth2<br>
>> iface eth2 inet manual<br>
>> pre-up ifconfig $IFACE up promisc<br>
>> post-down ifconfig $IFACE down<br>
>> bond-master bond0<br>
>><br>
>> # bonding interfaces for easier sniffing<br>
>> auto bond0<br>
>> iface bond0 inet manual<br>
>> pre-up ifconfig $IFACE up promisc<br>
>> post-down ifconfig $IFACE down<br>
>> bond-mode balance-rr<br>
>> bond-miimon 100<br>
>> bond-slaves none<br>
>><br>
>><br>
>> > Do you have checksums enabled or disabled?<br>
>><br>
>> enabled (as shown below)<br>
>><br>
>> > FlowTimeout values - you should try to lower them.<br>
>><br>
>> ok,<br>
>><br>
>> > Can you describe the ruleset you're using?<br>
>><br>
>> 44538 signatures processed. 711 are IP-only rules, 43495 are<br>
>> inspecting packet payload, 13901 inspect application layer, 0 are<br>
>> decoder event only<br>
><br>
> Do I read this correctly - 44K rules? :)<br>
<br>
</div></div>yep :-) we mainly use privately-shared lists of dns names/ips, ...<br>
of targeted attacks.<br>
Hostnames generate http and dns (udp/tcp) rules, so it grows quite fast.<br>
<div class="im"><br>
> But more importantly - which Suricata version are you using?<br>
<br>
</div>Suricata 1.3.4 from the ubuntu ppa repo.<br></blockquote><div>I am glad to hear that you are using the PPA :)<br>You could also give 1.4rc1 a try; it is located in our suricata-beta repository.<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
New config is reloaded with suricata -D --pidfile<br>
/var/run/suricata.pid -c /etc/suricata/suricata.yaml --af-packet=eth2<br>
--runmode=workers<br>
raised flow.memcap to 3gb (was 2gb)<br>
raised stream.memcap to 3gb (was 2gb)<br>
lowered stream.reassembly.memcap to 1mb (default)<br>
<br>
So that is 6 GB of the 8 GB, which leaves 2 GB for the rest of suri and the<br>
OS. Is that a correct interpretation?<br>
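For reference, a sketch of how those changes would look as suricata.yaml fragments, with the values taken from this thread (note that the startup log further down reports the reassembly memcap as 1073741824 bytes, i.e. 1 GB, not 1 MB):

```yaml
# Sketch of the memcap changes described above, as suricata.yaml fragments.
flow:
  memcap: 3gb        # raised from 2gb
stream:
  memcap: 3gb        # raised from 2gb
  reassembly:
    memcap: 1gb      # the startup log below reports 1073741824 bytes (1 GB)
```

That is indeed 3 GB + 3 GB = 6 GB of the 8 GB reserved by the two big memcaps, leaving roughly 2 GB for everything else.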
<br>
<br>
From the new stats (<a href="http://pastebin.com/HNWRUBEM" target="_blank">http://pastebin.com/HNWRUBEM</a>) I see that the<br>
tcp.reassembly_gap is rising quickly (997).<br>
There are no drops:<br>
capture.kernel_drops | AFPacketeth21 | 0<br>
tcp.ssn_memcap_drop | AFPacketeth21 | 0<br>
tcp.segment_memcap_drop | AFPacketeth21 | 0<br>
<br>
But this seems weird: the decoder sees more packets (272 more) than the<br>
kernel counter, while the kernel reports no drops. Or is that perhaps<br>
because the counters are read at slightly different moments?<br>
capture.kernel_packets | AFPacketeth21 | 948484<br>
capture.kernel_drops | AFPacketeth21 | 0<br>
decoder.pkts | AFPacketeth21 | 948756<br>
<br>
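The arithmetic behind that 272-packet gap can be checked with a quick awk one-liner over a stats.log-style dump (columns: counter | thread | value). This is a hypothetical helper for illustration, not a Suricata tool; the numbers are the ones quoted above.

```shell
# Compute the gap between the kernel and decoder packet counters from a
# stats.log-style dump. Values are the ones quoted in this mail.
stats='capture.kernel_packets | AFPacketeth21 | 948484
capture.kernel_drops | AFPacketeth21 | 0
decoder.pkts | AFPacketeth21 | 948756'

# Split on '|' and strip spaces from the value column.
kernel=$(printf '%s\n' "$stats" | awk -F'|' '/kernel_packets/ {gsub(/ /, "", $3); print $3}')
decoder=$(printf '%s\n' "$stats" | awk -F'|' '/decoder\.pkts/ {gsub(/ /, "", $3); print $3}')
echo "decoder.pkts leads capture.kernel_packets by $((decoder - kernel)) packets"
```

Since the counters are snapshots taken at slightly different moments, a small lead in either direction is plausible without any actual drops.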
<br>
At startup suri now says:<br>
6/12/2012 -- 13:04:11 - <Info> - 11 rule files processed. 43201 rules<br>
succesfully loaded, 60 rules failed<br>
6/12/2012 -- 13:05:49 - <Info> - 44538 signatures processed. 711 are<br>
<div class="im">IP-only rules, 43495 are inspecting packet payload, 13901 inspect<br>
application layer, 0 are decoder event only<br>
</div>6/12/2012 -- 13:05:49 - <Info> - building signature grouping<br>
structure, stage 1: adding signatures to signature source addresses...<br>
complete<br>
6/12/2012 -- 13:05:50 - <Info> - building signature grouping<br>
structure, stage 2: building source address list... complete<br>
6/12/2012 -- 13:06:16 - <Info> - building signature grouping<br>
structure, stage 3: building destination address lists... complete<br>
6/12/2012 -- 13:06:29 - <Info> - Threshold config parsed: 0 rule(s) found<br>
6/12/2012 -- 13:06:29 - <Info> - Core dump size set to unlimited.<br>
6/12/2012 -- 13:06:29 - <Info> - fast output device (regular)<br>
initialized: fast.log<br>
6/12/2012 -- 13:06:29 - <Info> - Unified2-alert initialized: filename<br>
unified2.alert, limit 32 MB<br>
6/12/2012 -- 13:06:29 - <Info> - http-log output device (regular)<br>
initialized: http.log<br>
6/12/2012 -- 13:06:29 - <Info> - Using round-robin cluster mode for<br>
AF_PACKET (iface eth2)<br>
6/12/2012 -- 13:06:29 - <Info> - Enabling mmaped capture on iface eth2<br>
6/12/2012 -- 13:06:29 - <Info> - Going to use 1 thread(s)<br>
6/12/2012 -- 13:06:29 - <Info> - RunModeIdsAFPSingle initialised<br>
6/12/2012 -- 13:06:29 - <Info> - stream "max-sessions": 262144<br>
6/12/2012 -- 13:06:29 - <Info> - stream "prealloc-sessions": 32768<br>
6/12/2012 -- 13:06:29 - <Info> - stream "memcap": 3221225472<br>
6/12/2012 -- 13:06:29 - <Info> - stream "midstream" session pickups: disabled<br>
6/12/2012 -- 13:06:29 - <Info> - stream "async-oneside": disabled<br>
6/12/2012 -- 13:06:29 - <Info> - stream "checksum-validation": enabled<br>
6/12/2012 -- 13:06:29 - <Info> - stream."inline": disabled<br>
6/12/2012 -- 13:06:29 - <Info> - Enabling zero copy mode<br>
6/12/2012 -- 13:06:29 - <Info> - stream.reassembly "memcap": 1073741824<br>
6/12/2012 -- 13:06:29 - <Info> - stream.reassembly "depth": 1048576<br>
6/12/2012 -- 13:06:29 - <Info> - stream.reassembly "toserver-chunk-size": 2560<br>
6/12/2012 -- 13:06:29 - <Info> - stream.reassembly "toclient-chunk-size": 2560<br>
6/12/2012 -- 13:06:29 - <Info> - AF_PACKET RX Ring params:<br>
block_size=32768 block_nr=52 frame_size=1584 frame_nr=1040<br>
6/12/2012 -- 13:06:30 - <Info> - all 1 packet processing threads, 3<br>
management threads initialized, engine started.<br></blockquote><div>You mention you have 8 cores. Did you configure 8 threads for AF_PACKET in the yaml section?<br><pre>af-packet:
  - interface: eth0
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 1
</pre>You could try changing threads: 1 to threads: 8.<br>That, however, should not be an issue in this case, since your traffic is only 15 Mbps.<br><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5"><br>
<br>
<br>
>><br>
>><br>
>> the ruleset is very simple with tcp, http and udp filters. Nothing<br>
>> really spectacular.<br>
>> I wouldn't expect the ruleset to be a problem because CPU load is very<br>
>> low (even on the 130 Mbps IDS it's only at 150-180% of the 800%<br>
>> available).<br>
>><br>
>><br>
>> I'll re-read what Victor said and will continue hunting for the cause.<br>
>> Thanks for all these fast replies!<br>
>><br>
>> Christophe<br>
>><br>
>> ><br>
>> > thank you<br>
>> ><br>
>> ><br>
>> > On Thu, Dec 6, 2012 at 11:40 AM, Christophe Vandeplas<br>
>> > <<a href="mailto:christophe@vandeplas.com">christophe@vandeplas.com</a>> wrote:<br>
>> >><br>
>> >> On Thu, Dec 6, 2012 at 11:21 AM, Peter Manev <<a href="mailto:petermanev@gmail.com">petermanev@gmail.com</a>><br>
>> >> wrote:<br>
>> >> > Hi,<br>
>> >> ><br>
>> >> > what (how much) traffic do you average?<br>
>> >><br>
>> >> Hello Peter,<br>
>> >><br>
>> >> That was written in my mail: one of the IDSes sees only 15 Mbps during<br>
>> >> the day on average, with spikes up to 40 Mbps (but very short ones, 4<br>
>> >> times a day). That should certainly be feasible with such a system.<br>
>> >><br>
>> >> Once I get that IDS working fine I'll fine-tune the settings of the<br>
>> >> others (150 Mbps and 80 Mbps on average during the day).<br>
>> >><br>
>> >><br>
>> >> > On Thu, Dec 6, 2012 at 11:17 AM, Christophe Vandeplas<br>
>> >> > <<a href="mailto:christophe@vandeplas.com">christophe@vandeplas.com</a>> wrote:<br>
>> >> >><br>
>> >> >> Hello,<br>
>> >> >><br>
>> >> >><br>
>> >> >> Almost all my IDSes are showing<br>
>> >> >> tcp.segment_memcap_drop<br>
>> >> >> tcp.reassembly_gap<br>
>> >> >><br>
>> >> >> And some of them have<br>
>> >> >> tcp.ssn_memcap_drop<br>
>> >> >><br>
>> >> >> I have been playing around with the memory settings in Suricata, but I<br>
>> >> >> must admit it still looks very unclear to me; any help would really be<br>
>> >> >> appreciated.<br>
>> >> >><br>
>> >> >> To attack this problem I'm now concentrating my efforts on the IDS<br>
>> >> >> dealing with the least traffic: during the day average of 15 Mbps.<br>
>> >> >> The IDS has 8 virtual cores (4 cores + HT = 8) and 8 GB of RAM, and<br>
>> >> >> is sniffing using -i on a bond0 interface.<br>
>> >> >><br>
>> >> >> The stats file is here: <a href="http://pastebin.com/kSVFDHRM" target="_blank">http://pastebin.com/kSVFDHRM</a><br>
>> >> >><br>
>> >> >><br>
>> >> >> Outputs that are on: fast, unified2, http, stats, syslog.<br>
>> >> >> I did not change anything in the threading section.<br>
>> >> >> Defrag is also default:<br>
>> >> >> defrag:<br>
>> >> >> max-frags: 65535<br>
>> >> >> prealloc: yes<br>
>> >> >> timeout: 60<br>
>> >> >><br>
>> >> >> Raised flow:<br>
>> >> >> flow:<br>
>> >> >> memcap: 2gb<br>
>> >> >> hash-size: 65536<br>
>> >> >> prealloc: 10000<br>
>> >> >> emergency-recovery: 30<br>
>> >> >> prune-flows: 5<br>
>> >> >><br>
>> >> >> Flow-timeouts are default, and I raised stream memcaps:<br>
>> >> >> stream:<br>
>> >> >> memcap: 2gb<br>
>> >> >> checksum-validation: yes # reject wrong csums<br>
>> >> >> inline: no # no inline mode<br>
>> >> >> reassembly:<br>
>> >> >> memcap: 1gb<br>
>> >> >> depth: 8mb # reassemble 8mb into a stream<br>
>> >> >> toserver-chunk-size: 2560<br>
>> >> >> toclient-chunk-size: 2560<br>
>> >> >><br>
>> >> >><br>
>> >> >> Any advice to further fine-tune is welcome!<br>
>> >> >><br>
>> >> >> Thanks a lot<br>
>> >> >> Christophe<br>
>> >> >> _______________________________________________<br>
>> >> >> Suricata IDS Users mailing list:<br>
>> >> >> <a href="mailto:oisf-users@openinfosecfoundation.org">oisf-users@openinfosecfoundation.org</a><br>
>> >> >> Site: <a href="http://suricata-ids.org" target="_blank">http://suricata-ids.org</a> | Support:<br>
>> >> >> <a href="http://suricata-ids.org/support/" target="_blank">http://suricata-ids.org/support/</a><br>
>> >> >> List:<br>
>> >> >> <a href="https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
>> >> >> OISF: <a href="http://www.openinfosecfoundation.org/" target="_blank">http://www.openinfosecfoundation.org/</a><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > --<br>
>> >> > Regards,<br>
>> >> > Peter Manev<br>
>> >> ><br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> > --<br>
>> > Regards,<br>
>> > Peter Manev<br>
>> ><br>
><br>
><br>
><br>
><br>
> --<br>
> Regards,<br>
> Peter Manev<br>
><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div>Regards,</div>
<div>Peter Manev</div><br>