<div dir="ltr">I may be overthinking this, so let me know if there is a better way. I tried connecting gdb to the thread and running 'print *l', but the binary had no debug symbols. Therefore, I rebuilt my suricata .deb with --enable-debug (from Victor's blog: <a href="http://blog.inliniac.net/2010/01/04/suricata-debugging/">http://blog.inliniac.net/2010/01/04/suricata-debugging/</a>) and installed the package. I then copied suricata-src/src/.libs/suricata & libhtp/htp/.libs/libhtp-0.5.11.so.1.0.0 (the unstripped files) over the installed binary and started suricata. I run with:<div>
<br></div><div>SC_LOG_LEVEL=None SC_LOG_OP_FILTER="stream" suricata --user sguil --group sguil -c /etc/nsm/hera-na0-eth1/suricata.yaml --pfring=eth1 -F /etc/nsm/hera-na0-eth1/bpf-ids.conf -l /nsm/sensor_data/hera-na0-eth1 > /dev/null 2>&1<br>
</div><div><br></div><div>(without the redirect to /dev/null I get a deluge of htp* output as it inspects my traffic). Testing out gdb on the process now, before any Detect thread is pegged:</div><div><br></div><div># gdb suricata 9812<br>
  GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
  ...
  Reading symbols from /usr/bin/suricata...done.
  Attaching to program: /usr/bin/suricata, process 9812

  warning: process 9812 is a cloned process
  Reading symbols from /usr/lib/libhtp-0.5.11.so.1...done.
  Loaded symbols for /usr/lib/libhtp-0.5.11.so.1
  Reading symbols from /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2...(no debugging symbols found)...done.
  Loaded symbols for /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2
  ... (more libraries with no debug symbols) ...
  Reading symbols from /lib/x86_64-linux-gnu/libnss_files.so.2...(no debugging symbols found)...done.
  Loaded symbols for /lib/x86_64-linux-gnu/libnss_files.so.2
  0x00007f58bd8fed84 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
  (gdb) print *l
  No symbol "l" in current context.   <--- What am I doing wrong here?
  (gdb) bt
  #0  0x00007f58bd8fed84 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
  #1  0x00000000005bb58c in TmqhInputFlow (tv=<optimized out>) at tmqh-flow.c:93
  #2  0x00000000005c290f in TmThreadsSlotVar (td=0x64186d0) at tm-threads.c:810
  #3  0x00007f58bd8fae9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
  #4  0x00007f58bd1c93fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
  #5  0x0000000000000000 in ?? ()

So, what am I missing with the 'print *l'?

-dave
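P.S. Thinking about this some more: in the backtrace above, the thread I
attached to is parked in pthread_cond_wait, and 'l' is presumably a
local variable in the libhtp list frame, so no frame in this stack has
it in scope. Once a Detect thread pegs again I plan to try something
like the following (the TID is a placeholder):

  # find the TID of the Detect thread spinning at 100%
  top -H -p $(pidof suricata)

  # attach gdb to that specific thread
  gdb /usr/bin/suricata <TID>
  (gdb) bt          # htp_list_array_get should be on this stack
  (gdb) frame 0     # select the frame where 'l' is in scope
  (gdb) print *l

Does that sound right?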
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dave,<br>
<br>
Can you stick gdb into the cycle gobbling Detect thread(which is stuck<br>
inside the libhtp list call), and do a "print *l" and post back the<br>
results here. That should clear things up.<br>
<div class="HOEnZb"><div class="h5"><br>
On Fri, Jun 20, 2014 at 6:29 PM, David Vasil <<a href="mailto:davidvasil@gmail.com">davidvasil@gmail.com</a>> wrote:<br>
> It is not always the Detect1 thread, though it was in that case. As I
> write this, my Detect6 thread has been running at 100% CPU utilization
> for about 30 minutes while the other 7 threads are between 0% and 5% --
> no dropped packets reported, though. I decreased my
> stream.reassembly.depth from 4mb to 2mb and have seen a drop in the
> frequency of Detect threads hitting 100% utilization (though my
> tcp.stream_depth_reached counter has increased, as expected, I guess).
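>
> The relevant bit of my suricata.yaml now looks roughly like this (only
> the depth setting changed):
>
>   stream:
>     reassembly:
>       memcap: 2gb
>       depth: 2mb    # was 4mb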
>
> Now that I have decreased the stream.reassembly.depth, I have seen
> instances of my FlowManagerThread hit 100% CPU utilization, during which
> my flow_mgr.*pruned counters stopped increasing; when the
> FlowManagerThread finished doing whatever it was doing, there was a
> large spike in those counters. No other counters on the system seemed
> affected. I had increased flow.hash-size to 131072, but have since
> reverted it to the default, as that did not seem to decrease the
> dropped packets.
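>
> For reference, the flow tuning I tried and then reverted looked roughly
> like this (65536 is the default hash-size, if I remember right):
>
>   flow:
>     hash-size: 131072   # since reverted to the default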
>
> Using perf top, I still see libhtp (htp_list_array_get) consuming a
> majority of the event cycles on my system -- close to 20% overall and
> about 80% of the cycles in the Suricata-Main thread. Is this something
> specific to my configuration, or are others seeing similar libhtp
> utilization? Maybe Anoop is on to something?
>
> A couple of perf top screenshots are attached. Thanks!
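>
> For the record, the screenshots come from something like:
>
>   perf top -p $(pidof suricata)
>
> (and plain 'perf top' for the system-wide numbers).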
>
> stats.log: http://pastebin.com/13j0DV7E
> suricata.yaml: http://pastebin.com/QAib5dYZ
>
> -dave
>
> On Thu, Jun 19, 2014 at 5:31 AM, Victor Julien <lists@inliniac.net> wrote:
>>
>> On 06/18/2014 04:54 PM, David Vasil wrote:
>> > I have been trying to track down an issue I am having with Suricata
>> > dropping packets (seems to be a theme on this list), requiring a
>> > restart of the daemon to clear the condition. My environment is not
>> > large (average 40-80Mbps traffic, mostly user/http traffic) and I have
>> > Suricata 2.0.1 running on a base installation of Security Onion
>> > 12.04.4 on a Dell R610 (12GB RAM, dual Intel X5570, Broadcom BCM5709
>> > sniffing interface).
>> >
>> > About once a day, Zabbix shows that I am starting to see a large
>> > number of capture.kernel_drops and some corresponding
>> > tcp.reassembly_gap. Looking at htop, I can see that one of the Detect
>> > threads (Detect1 in this screenshot) is pegged at 100% utilization.
>> > If I use 'perf top' to look at the perf events on the system, I see
>> > libhtp consuming a large number of the cycles (attached). Restarting
>> > suricata using 'nsm_sensor_stop --only-snort-alert' results in the
>> > child threads exiting, but the main suricata process itself never
>> > stops (requiring a kill -9). Starting suricata again with
>> > 'nsm_sensor_start --only-snort-alert' starts up Suricata and shows
>> > that we are able to inspect traffic with no drops.
>> >
>> > In the attached screenshots, I am only inspecting ~2k packets/sec and
>> > ~16Mbit/s when Suricata started dropping packets. As I write this,
>> > Suricata is processing ~7k packets/sec and ~40Mbit/s with no drops. I
>> > could not see anything that I can directly correlate to the drops,
>> > and the various tuning steps I have taken have not helped alleviate
>> > the issue, so I was hoping to leverage the community's wisdom.
>> >
>> > Some observations I had:
>> >
>> > - Bro (running on the same system, on the same interface) drops 0% of
>> > packets without issue all day
>> > - When I start seeing capture.kernel_drops, I also begin seeing an
>> > uptick in flow_mgr.new_pruned and tcp.reassembly_gap; changing the
>> > associated memcaps of each has not seemed to help
>> > - tcp.reassembly_memuse jumps to a peak of around 2.66G even though
>> > my reassembly memcap is set to 2gb
>> > - http.memcap is set to 256mb in my config and logfile, but stats.log
>> > shows http.memcap = 0 (bug?)
>>
>> When this happens, do you see a peak in syn/synack and flow manager
>> pruned stats each time?
>>
>> The current flow timeout code has a weakness. When it injects fake
>> packets into the engine to do some final processing, it currently only
>> injects into Detect1. You might be seeing this here.
>>
>> --
>> ---------------------------------------------
>> Victor Julien
>> http://www.inliniac.net/
>> PGP: http://www.inliniac.net/victorjulien.asc
>> ---------------------------------------------
>>
</div></div><div class="HOEnZb"><div class="h5">--<br>
-------------------------------<br>
Anoop Saldanha<br>
<a href="http://www.poona.me" target="_blank">http://www.poona.me</a><br>
-------------------------------<br>
</div></div></blockquote></div><br></div>