Hello,

I am testing out Suricata on a high-bandwidth link. Here are my stats.
I have a 10 Gb fiber SPAN port (HP NIC) feeding my Suricata box. At peak there is almost 1 Gbps of throughput at 150k packets per second.
The Suricata box has 16 logical CPUs (I think it is 2 x quad-core), 132 GB of RAM, and the rest doesn't matter; it has enough space to hold alerts or anything else.

I am using CentOS 6 64-bit. I've compiled Suricata 2.0 with PF_RING, GeoIP and profiling support.

My problem is packet drops. I monitor the stats.log file for drop information, and I start seeing capture.kernel_drops after about 1 minute of running Suricata.
I have run pfcount to check PF_RING and had 0 packet loss over a longer period of time, so that tells me PF_RING is configured correctly.
I've increased the PF_RING slots to the maximum:

  rmmod pf_ring
  modprobe pf_ring transparent_mode=0 min_num_slots=65534

It doesn't seem to have any effect.

Now about my config file. I went through multiple guides on setting up Suricata and collected various settings. I will try to list as much as possible.

max-pending-packets: 65534 (I've played with this setting with almost no difference in the end; only when the number is very low do packet drops start to occur faster.)

runmode: autofp (I've tried auto and workers as well; this one has the best results.)

default-packet-size: 1522 (I increased this to handle VLAN tagging; read that somewhere online.)

For outputs I have unified2-alert configured with default settings and without XFF. Syslog and stats outputs are also configured with default settings.

Detect engine:

  detect-engine:
    - profile: custom
    - custom-values:
        toclient-src-groups: 200
        toclient-dst-groups: 200
        toclient-sp-groups: 200
        toclient-dp-groups: 300
        toserver-src-groups: 200
        toserver-dst-groups: 400
        toserver-sp-groups: 200
        toserver-dp-groups: 250
    - sgh-mpm-context: auto
    - inspection-recursion-limit: 3000

If I set sgh-mpm-context to full, Suricata loads forever and eventually consumes 100% of RAM without ever starting. I have the ac mpm algorithm selected, with 22k rules.

Threading: I've tried playing with CPU affinity, with no luck. Basically I tried to separate the detect threads from everything else so that the detect threads have most of the CPUs to themselves. My problem was that Suricata failed to set the affinity/priority on the threads; I think my ulimit priority settings are messed up, so the OS doesn't allow it. Otherwise the threads do start on the correct CPUs. But in the end drops still occur, even sooner than after 1 minute, so I have this disabled.
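For context, what I was aiming for in the threading section of suricata.yaml was roughly the sketch below; the CPU numbers are illustrative placeholders, not necessarily the exact values I used:

  threading:
    set-cpu-affinity: yes
    cpu-affinity:
      - management-cpu-set:
          cpu: [ 0 ]
      - receive-cpu-set:
          cpu: [ 0, 1 ]
      - decode-cpu-set:
          cpu: [ 0, 1 ]
          mode: "balanced"
      - detect-cpu-set:
          cpu: [ "4-15" ]     # give the detect threads most of the CPUs to themselves
          mode: "exclusive"
          prio:
            default: "high"   # raising thread priority is where I suspect my ulimit settings get in the way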
detect-thread-ratio is at 1.0. I've tried changing it to 0.5, 0.125, 1.5, 2.0, 10.0 and 20.0; 1.0 and 1.5 seem to give the best results as far as packet drops go.

Not using CUDA since I don't have an NVIDIA card.

mpm-algo: ac (I have not touched this value at all; it seems to be the best one if you have enough RAM.)

defrag: memcap 512mb and max-frags 65535. Judging from stats.log there are no issues with defrag:

  defrag.ipv4.fragments    | RxPFR2 | 2795
  defrag.ipv4.reassembled  | RxPFR2 | 1391
  defrag.ipv4.timeouts     | RxPFR2 | 0
  defrag.ipv6.fragments    | RxPFR2 | 1
  defrag.ipv6.reassembled  | RxPFR2 | 0
  defrag.ipv6.timeouts     | RxPFR2 | 0
  defrag.max_frag_hits     | RxPFR2 | 0

  flow:
    memcap: 10gb
    hash-size: 65536
    prealloc: 10000
    emergency-recovery: 30

This one doesn't seem to be a problem either (via stats.log):

  flow_mgr.closed_pruned   | FlowManagerThread | 82348
  flow_mgr.new_pruned      | FlowManagerThread | 10841
  flow_mgr.est_pruned      | FlowManagerThread | 5375
  flow.memuse              | FlowManagerThread | 26879800
  flow.spare               | FlowManagerThread | 10023
  flow.emerg_mode_entered  | FlowManagerThread | 0
  flow.emerg_mode_over     | FlowManagerThread | 0

  vlan:
    use-for-tracking: true

Not sure what this does, but I didn't touch it.
Flow timeouts have been reduced to really small numbers; I think this should do more good than harm.

Now stream, which is where I think the problem is:

  stream:
    memcap: 30gb
    checksum-validation: no    # reject wrong csums
    inline: no                 # auto will use inline mode in IPS mode, yes or no set it statically
    prealloc-sessions: 10000000
    midstream: false
    async-oneside: true
    reassembly:
      memcap: 40gb
      depth: 24mb              # reassemble 1mb into a stream
      toserver-chunk-size: 2560
      toclient-chunk-size: 2560
      randomize-chunk-size: yes
      #randomize-chunk-range: 10
      #raw: yes
      chunk-prealloc: 100000
      segments:
        - size: 4
          prealloc: 15000
        - size: 16
          prealloc: 20000
        - size: 112
          prealloc: 60000
        - size: 248
          prealloc: 60000
        - size: 512
          prealloc: 50000
        - size: 768
          prealloc: 40000
        - size: 1448
          prealloc: 300000
        - size: 65535
          prealloc: 25000

I adjusted the segment sizes and prealloc counts until Suricata stopped reporting that there were too many segments of a specific size.

  host:
    hash-size: 4096
    prealloc: 1000
    memcap: 16777216

I think I might have increased the memcap on this one.

For pfring:

  pfring:
    - interface: bond0
      threads: 16    # tried other values; no real difference, except that very low values (e.g. 2) make drops occur faster
      cluster-id: 99
      cluster-type: cluster_flow
    - interface: default

In the app-layer section I've increased the memcap for http to 20gb. Didn't touch anything else.

The ruleset is ET Pro plus some of our own rules, 22k in total.

Everything else is left at the defaults.

When Suricata starts it consumes about 4% of RAM.

If I run top -H, I notice something odd: as soon as one of the detect threads hits 100% CPU utilization, I start to see packet drops in capture.kernel_drops, first for some and then for all of the PF_RING threads.

I've also disabled all of the offloading features on the NIC with ethtool -K.
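For reference, the offloads I turned off were along these lines; eth2 here is just a placeholder for the capture NIC, and not every feature name below is necessarily supported by this driver/kernel combination:

  ethtool -K eth2 rx off tx off sg off tso off gso off gro off lro off
  ethtool -K eth2 rxvlan off txvlan off
  ethtool -k eth2    # lower-case -k prints the current offload state, to verify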
Interesting observation: I have created a bond with just one NIC (my SPAN feed). If I point the pfring config at that physical interface directly, Suricata doesn't see any packets; if I use the bond, it works just fine. iptraf shows the same behavior. Not sure what that means.

Sorry for the long email; I figured this would reduce the number of follow-up questions.

Thank you all.

--Yasha