<div dir="ltr">Hi ZK, here it is (thanks for your help!):<br><br>HW: Dell R715 - 2 x AMD Opteron(tm) Processor 6284 SE - 192 GB RAM - 2 x dual-port Intel X520 NICs (10 Gbps SFP+); only one port of each NIC is in use (eth5 & eth7)<br>
<br>OS: Linux suricata 3.2.0-45-generic #70-Ubuntu SMP Wed May 29 20:12:06 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux (with various sysctl tweaks)<br><br>Suricata: 1.4.2 (from repository)<br><br>suricata.yaml<br><br>idsuser@suricata:~$ cat /etc/suricata/suricata.yaml<br>
%YAML 1.1<br>---<br><br># Suricata configuration file. In addition to the comments describing all<br># options in this file, full documentation can be found at:<br># <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricatayaml">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricatayaml</a><br>
<br><br># Number of packets allowed to be processed simultaneously. Default is a<br># conservative 1024. A higher number will make sure CPU's/CPU cores will be<br># more easily kept busy, but may negatively impact caching.<br>
#<br># If you are using the CUDA pattern matcher (b2g_cuda below), different rules<br># apply. In that case try something like 4000 or more. This is because the CUDA<br># pattern matcher scans many packets in parallel.<br>
max-pending-packets: 2048<br><br># Runmode the engine should use. Please check --list-runmodes to get the available<br># runmodes for each packet acquisition method. Defaults to "autofp" (auto flow pinned<br># load balancing).<br>
# runmode: workers<br>runmode: workers<br><br># Specifies the kind of flow load balancer used by the flow pinned autofp mode.<br>#<br># Supported schedulers are:<br>#<br># round-robin - Flows assigned to threads in a round robin fashion.<br>
# active-packets - Flows assigned to threads that have the lowest number of<br># unprocessed packets (default).<br># hash - Flow allotted using the address hash. More of a random<br># technique. Was the default in Suricata 1.2.1 and older.<br>
#<br>autofp-scheduler: active-packets<br># autofp-scheduler: round-robin<br><br># Run suricata as user and group.<br>#run-as:<br># user: suri<br># group: suri<br><br># Default pid file.<br># Will use this file if no --pidfile in command options.<br>
pid-file: /var/run/suricata.pid<br><br># Daemon working directory<br># Suricata will change directory to this one if provided<br># Default: "/"<br>#daemon-directory: "/"<br><br># Preallocated size for packet. Default is 1514 which is the classical<br>
# size for pcap on ethernet. You should adjust this value to the highest<br># packet size (MTU + hardware header) on your system.<br>#default-packet-size: 1514<br><br># The default logging directory. Any log or output file will be<br>
# placed here if its not specified with a full path name. This can be<br># overridden with the -l command line parameter.<br>default-log-dir: /var/log/suricata<br><br># Configure the type of alert (and other) logging you would like.<br>
outputs:<br><br> # a line based alerts log similar to Snort's fast.log<br> - fast:<br> enabled: yes<br> filename: fast.log<br> append: yes<br> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'<br>
<br> # alert output for use with Barnyard2<br> - unified2-alert:<br> enabled: yes<br> filename: unified2.alert<br><br> # File size limit. Can be specified in kb, mb, gb. Just a number<br> # is parsed as bytes.<br>
#limit: 32mb<br><br> # a line based log of HTTP requests (no alerts)<br> - http-log:<br> enabled: no<br> filename: http.log<br> append: yes<br> extended: yes # enable this for extended logging information<br>
#custom: yes # enabled the custom logging format (defined by customformat)<br> #customformat: "%{%D-%H:%M:%S}t.%z %{X-Forwarded-For}i %H %m %h %u %s %B %a:%p -> %A:%P"<br> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'<br>
<br> # a line based log of TLS handshake parameters (no alerts)<br> - tls-log:<br> enabled: no # Log TLS connections.<br> filename: tls.log # File to store TLS logs.<br> #extended: yes # Log extended information like fingerprint<br>
certs-log-dir: certs # directory to store the certificate files<br><br> # a line based log to be used with pcap file study.<br> # this module is dedicated to offline pcap parsing (empty output<br> # if used with another kind of input). It can interoperate with<br>
# pcap parser like wireshark via the suriwire plugin.<br> - pcap-info:<br> enabled: no<br><br> # Packet log... log packets in pcap format. 2 modes of operation: "normal"<br> # and "sguil".<br>
#<br> # In normal mode a pcap file "filename" is created in the default-log-dir,<br> # or as specified by "dir". In Sguil mode "dir" indicates the base directory.<br> # In this base dir the pcaps are created in the directory structure Sguil expects:<br>
#<br> # $sguil-base-dir/YYYY-MM-DD/$filename.<timestamp><br> #<br> # By default all packets are logged except:<br> # - TCP streams beyond stream.reassembly.depth<br> # - encrypted streams after the key exchange<br>
#<br> - pcap-log:<br> enabled: no<br> filename: log.pcap<br><br> # File size limit. Can be specified in kb, mb, gb. Just a number<br> # is parsed as bytes.<br> limit: 1000mb<br><br> # If set to a value will enable ring buffer mode. Will keep Maximum of "max-files" of size "limit"<br>
max-files: 2000<br><br> mode: normal # normal or sguil.<br> #sguil-base-dir: /nsm_data/<br> #ts-format: usec # sec or usec second format (default) is filename.sec usec is filename.sec.usec<br> use-stream-depth: no #If set to "yes" packets seen after reaching stream inspection depth are ignored. "no" logs all packets<br>
<br> # a full alerts log containing much information for signature writers<br> # or for investigating suspected false positives.<br> - alert-debug:<br> enabled: no<br> filename: alert-debug.log<br> append: yes<br>
#filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'<br><br> # alert output to prelude (<a href="http://www.prelude-technologies.com/">http://www.prelude-technologies.com/</a>) only<br>
# available if Suricata has been compiled with --enable-prelude<br> - alert-prelude:<br> enabled: no<br> profile: suricata<br> log-packet-content: no<br> log-packet-header: yes<br><br> # Stats.log contains data from various counters of the suricata engine.<br>
# The interval field (in seconds) tells after how long output will be written<br> # on the log file.<br> - stats:<br> enabled: yes<br> filename: stats.log<br> interval: 10<br><br> # a line based alerts log similar to fast.log into syslog<br>
- syslog:<br> enabled: no<br> # reported identity to syslog. If omitted the program name (usually<br> # suricata) will be used.<br> #identity: "suricata"<br> facility: local5<br> #level: Info ## possible levels: Emergency, Alert, Critical,<br>
## Error, Warning, Notice, Info, Debug<br><br> # a line based information for dropped packets in IPS mode<br> - drop:<br> enabled: no<br> filename: drop.log<br> append: yes<br> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'<br>
<br> # output module to store extracted files to disk<br> #<br> # The files are stored to the log-dir in a format "file.<id>" where <id> is<br> # an incrementing number starting at 1. For each file "file.<id>" a meta<br>
# file "file.<id>.meta" is created.<br> #<br> # File extraction depends on a lot of things to be fully done:<br> # - stream reassembly depth. For optimal results, set this to 0 (unlimited)<br> # - http request / response body sizes. Again set to 0 for optimal results.<br>
# - rules that contain the "filestore" keyword.<br> - file-store:<br> enabled: no # set to yes to enable<br> log-dir: files # directory to store the files<br> force-magic: no # force logging magic on all stored files<br>
force-md5: no # force logging of md5 checksums<br> #waldo: file.waldo # waldo file to store the file_id across runs<br><br> # output module to log files tracked in an easily parsable json format<br> - file-log:<br>
enabled: no<br> filename: files-json.log<br> append: yes<br> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'<br><br> force-magic: no # force logging magic on all logged files<br>
force-md5: no # force logging of md5 checksums<br><br># Magic file. The extension .mgc is added to the value here.<br>#magic-file: /usr/share/file/magic<br>magic-file: /usr/share/file/magic<br><br># When running in NFQ inline mode, it is possible to use a simulated<br>
# non-terminal NFQUEUE verdict.<br># This permits sending all needed packets to suricata via a rule like:<br># iptables -I FORWARD -m mark ! --mark $MARK/$MASK -j NFQUEUE<br># And below, you can have your standard filtering ruleset. To activate<br># this mode, you need to set mode to 'repeat'.<br># If you want packets to be sent to another queue after an ACCEPT decision<br># set mode to 'route' and set the next-queue value.<br># On Linux >= 3.6, you can set the fail-open option to yes to have the kernel<br>
# accept the packet if suricata is not able to keep pace.<br>nfq:<br># mode: accept<br># repeat-mark: 1<br># repeat-mask: 1<br># route-queue: 2<br># fail-open: yes<br><br># af-packet support<br># Set threads to > 1 to use PACKET_FANOUT support<br>
af-packet:<br># - interface: eth4<br> # Number of receive threads (>1 will enable experimental flow pinned<br> # runmode)<br> # threads: 4<br> # Default clusterid. AF_PACKET will load balance packets based on flow.<br>
# All threads/processes that will participate need to have the same<br> # clusterid.<br># cluster-id: 99<br> # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.<br> # This is only supported for Linux kernel > 3.1<br>
# possible values are:<br> # * cluster_round_robin: round robin load balancing<br> # * cluster_flow: all packets of a given flow are sent to the same socket<br> # * cluster_cpu: all packets treated in kernel by a CPU are sent to the same socket<br>
# cluster-type: cluster_flow<br> # In some fragmentation cases, the hash cannot be computed. If "defrag" is set<br> # to yes, the kernel will do the needed defragmentation before sending the packets.<br>
# defrag: yes<br> # To use the ring feature of AF_PACKET, set 'use-mmap' to yes<br># use-mmap: yes<br> # Ring size will be computed with respect to max_pending_packets and number<br> # of threads. You can set manually the ring size in number of packets by setting<br>
# the following value. If you are using the flow cluster-type and have a really<br> # network-intensive single flow you may want to set the ring-size independently of the number<br> # of threads:<br># ring-size: 65534<br> # On a busy system, setting this to yes could help recover from a packet drop<br> # phase. This will result in some packets (at most a ring flush) not being treated.<br> #use-emergency-flush: yes<br> # recv buffer size; increasing the value could improve performance<br>
# buffer-size: 32mb<br> # Set to yes to disable promiscuous mode<br> # disable-promisc: no<br> # Choose checksum verification mode for the interface. At the moment<br> # of the capture, some packets may be with an invalid checksum due to<br>
# offloading to the network card of the checksum computation.<br> # Possible values are:<br> # - kernel: use indication sent by kernel for each packet (default)<br> # - yes: checksum validation is forced<br>
# - no: checksum validation is disabled<br> # - auto: suricata uses a statistical approach to detect when<br> # checksum off-loading is used.<br> # Warning: 'checksum-validation' must be set to yes to have any validation<br>
#checksum-checks: kernel<br> # BPF filter to apply to this interface. The pcap filter syntax applies here.<br> #bpf-filter: port 80 or udp<br> # You can use the following variables to activate AF_PACKET tap or IPS mode.<br>
# If copy-mode is set to ips or tap, the traffic coming to the current<br> # interface will be copied to the copy-iface interface. If 'tap' is set, the<br> # copy is complete. If 'ips' is set, the packet matching a 'drop' action<br>
# will not be copied.<br> #copy-mode: ips<br> #copy-iface: eth1<br> - interface: eth5<br> threads: 16<br> cluster-id: 98<br> # FER: per a suggestion on the list, cluster-type: cluster_cpu, but the load goes to a single CPU<br>
cluster-type: cluster_flow<br> defrag: yes<br> use-mmap: yes<br> ring-size: 300000<br> buffer-size: 512mb<br> # use-emergency-flush: yes<br> # disable-promisc: no<br><br># - interface: eth6<br># threads: 1<br>
# cluster-id: 97<br># cluster-type: cluster_flow<br># defrag: yes<br># use-mmap: yes<br># ring-size: 65534<br># buffer-size: 32mb<br><br> - interface: eth7<br> threads: 16<br> cluster-id: 96<br> # FER: per a suggestion on the list, cluster-type: cluster_cpu<br>
cluster-type: cluster_flow<br> defrag: yes<br> use-mmap: yes<br> ring-size: 300000<br> buffer-size: 512mb<br> # use-emergency-flush: yes<br><br># You can specify a threshold config file by setting "threshold-file"<br>
# to the path of the threshold config file:<br># threshold-file: /etc/suricata/threshold.config<br><br># The detection engine builds internal groups of signatures. The engine<br># allows us to specify the profile to use for them, to manage memory in an<br># efficient way while keeping good performance. For the profile keyword you<br># can use the words "low", "medium", "high" or "custom". If you use custom<br># make sure to define the values at "- custom-values" at your convenience.<br>
# Usually you would prefer medium/high/low.<br>#<br># "sgh mpm-context", indicates how the staging should allot mpm contexts for<br># the signature groups. "single" indicates the use of a single context for<br>
# all the signature group heads. "full" indicates a mpm-context for each<br># group head. "auto" lets the engine decide the distribution of contexts<br># based on the information the engine gathers on the patterns from each<br>
# group head.<br>#<br># The option inspection-recursion-limit is used to limit the recursive calls<br># in the content inspection code. For certain payload-sig combinations, we<br># might end up taking too much time in the content inspection code.<br>
# If the argument specified is 0, the engine uses an internally defined<br># default limit. If no value is specified, no limit is applied to the recursion.<br><br>detect-engine:<br> - profile: high<br> - custom-values:<br>
toclient-src-groups: 2<br> toclient-dst-groups: 2<br> toclient-sp-groups: 2<br> toclient-dp-groups: 3<br> toserver-src-groups: 2<br> toserver-dst-groups: 4<br> toserver-sp-groups: 2<br>
toserver-dp-groups: 25<br> - sgh-mpm-context: auto<br> - inspection-recursion-limit: 3000<br><br> # When rule-reload is enabled, sending a USR2 signal to the Suricata process<br> # will trigger a live rule reload. Experimental feature, use with care.<br>
# - rule-reload: true<br> # If set to yes, the loading of signatures will be made after the capture<br> # is started. This will limit the downtime in IPS mode.<br> # FER - delayed-detect: yes<br> # - delayed-detect: yes<br>
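A quick sanity check on the AF_PACKET settings above: with use-mmap enabled, each capture thread maps a ring of ring-size slots, so 300000-slot rings across 16 threads on two interfaces add up quickly. A rough sketch of the footprint (the ~1600-byte per-slot frame size is an assumption; the kernel aligns and rounds the real value):

```python
# Back-of-the-envelope AF_PACKET mmap memory for the eth5/eth7 settings.
# FRAME_SIZE is an assumption (~1600 bytes is typical for a 1514-byte
# default-packet-size after kernel alignment); the other numbers come
# straight from this config.
FRAME_SIZE = 1600          # bytes per ring slot (assumption)
RING_SIZE = 300000         # ring-size per thread, from the config
THREADS = 16               # threads per interface, from the config
INTERFACES = 2             # eth5 and eth7

per_iface = RING_SIZE * FRAME_SIZE * THREADS
total = per_iface * INTERFACES
print(f"per interface: {per_iface / 2**30:.1f} GiB, total: {total / 2**30:.1f} GiB")
```

Under these assumptions the rings alone map roughly 7 GiB per interface, which is fine on 192 GB of RAM but worth keeping in mind next to the memcaps below.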
<br># Suricata is multi-threaded. Here the threading can be influenced.<br>threading:<br> # On some cpu's/architectures it is beneficial to tie individual threads<br> # to specific CPU's/CPU cores. In this case all threads are tied to CPU0,<br>
# and each extra CPU/core has one "detect" thread.<br> #<br> # On Intel Core2 and Nehalem CPU's enabling this will degrade performance.<br> #<br> set-cpu-affinity: yes<br> # Tune cpu affinity of suricata threads. Each family of threads can be bound<br>
# on specific CPUs.<br> cpu-affinity:<br> - management-cpu-set:<br> cpu: [ "all" ] # include only these cpus in affinity settings<br> mode: "balanced"<br> prio:<br> default: "low"<br>
- receive-cpu-set:<br> cpu: [ 0 ] # include only these cpus in affinity settings<br> - decode-cpu-set:<br> cpu: [ 0, 1 ]<br> mode: "balanced"<br> - stream-cpu-set:<br> cpu: [ "0-1" ]<br>
- detect-cpu-set:<br> cpu: [ "all" ]<br> mode: "exclusive" # run detect threads in these cpus<br> # Use explicitly 3 threads and don't compute the number using the<br> # detect-thread-ratio variable:<br>
# threads: 3<br> prio:<br> # low: [ 0 ]<br> # medium: [ "1-2" ]<br> # high: [ 3 ]<br> default: "high"<br> - verdict-cpu-set:<br> cpu: [ 0 ]<br>
prio:<br> default: "high"<br> - reject-cpu-set:<br> cpu: [ 0 ]<br> prio:<br> default: "low"<br> - output-cpu-set:<br> cpu: [ "all" ]<br> prio:<br>
default: "medium"<br> <br> #<br> # By default Suricata creates one "detect" thread per available CPU/CPU core.<br> # This setting allows controlling this behaviour. A ratio setting of 2 will<br>
# create 2 detect threads for each CPU/CPU core. So for a dual core CPU this<br> # will result in 4 detect threads. If values below 1 are used, fewer threads<br> # are created. So on a dual core CPU a setting of 0.5 results in 1 detect<br>
# thread being created. Regardless of the setting at a minimum 1 detect<br> # thread will always be created.<br> #<br> detect-thread-ratio: 1.5<br><br># Cuda configuration.<br>cuda:<br> # The "mpm" profile. On not specifying any of these parameters, the engine's<br>
# internal default values are used, which are the same as the ones specified here.<br> - mpm:<br> # Threshold limit for no of packets buffered to the GPU. Once we hit this<br> # limit, we pass the buffer to the gpu.<br>
packet-buffer-limit: 2400<br> # The maximum length for a packet that we would buffer to the gpu.<br> # Anything over this is MPM'ed on the CPU. All entries > 0 are valid.<br> # Can be specified in kb, mb, gb. Just a number indicates it's in bytes.<br>
packet-size-limit: 1500<br> # No of packet buffers we initialize. All entries > 0 are valid.<br> packet-buffers: 10<br> # The timeout limit for batching of packets in secs. If we don't fill the<br>
# buffer within this timeout limit, we pass the currently filled buffer to the gpu.<br> # All entries > 0 are valid.<br> batching-timeout: 1<br> # Specifies whether to use page-locked memory wherever possible. Accepted values<br>
# are "enabled" and "disabled".<br> page-locked: enabled<br> # The device to use for the mpm. Currently we don't support load balancing<br> # on multiple gpus. In case you have multiple devices on your system, you<br>
# can specify the device to use, using this conf. By default we hold 0, to<br> # specify the first device cuda sees. To find out device-id associated with<br> # the card(s) on the system run "suricata --list-cuda-cards".<br>
device-id: 0<br> # No of Cuda streams used for asynchronous processing. All values > 0 are valid.<br> # For this option you need a device with Compute Capability > 1.0 and<br> # page-locked enabled to have any effect.<br>
cuda-streams: 2<br><br># Select the multi pattern algorithm you want to run for scan/search<br># in the engine. The supported algorithms are b2g, b2gc, b2gm, b3g, wumanber,<br># ac and ac-gfbs.<br>#<br># The mpm you choose also decides the distribution of mpm contexts for<br>
# signature groups, specified by the conf - "detect-engine.sgh-mpm-context".<br># Selecting "ac" as the mpm would require "detect-engine.sgh-mpm-context"<br># to be set to "single", because of ac's memory requirements, unless the<br>
# ruleset is small enough to fit in memory, in which case one can<br># use "full" with "ac". The rest of the mpms can be run in "full" mode.<br>#<br># There is also a CUDA pattern matcher (only available if Suricata was<br>
# compiled with --enable-cuda: b2g_cuda. Make sure to update your<br># max-pending-packets setting above as well if you use b2g_cuda.<br><br>mpm-algo: ac<br># mpm-algo: wumanber<br><br># The memory settings for hash size of these algorithms can vary from lowest<br>
# (2048) - low (4096) - medium (8192) - high (16384) - higher (32768) - max<br># (65536). The bloomfilter sizes of these algorithms can vary from low (512) -<br># medium (1024) - high (2048).<br>#<br># For B2g/B3g algorithms, there is a support for two different scan/search<br>
# algorithms. For B2g the scan algorithms are B2gScan & B2gScanBNDMq, and<br># search algorithms are B2gSearch & B2gSearchBNDMq. For B3g scan algorithms<br># are B3gScan & B3gScanBNDMq, and search algorithms are B3gSearch &<br>
# B3gSearchBNDMq.<br>#<br># For B2g you can set the different scan/search algorithms plus the hash and bloom<br># filter sizes. For B3g likewise the scan/search algorithms plus the hash<br># and bloom filter sizes. For wumanber only the hash and bloom filter size<br>
# settings.<br><br>pattern-matcher:<br> - b2gc:<br> search-algo: B2gSearchBNDMq<br> hash-size: high # FER2 low<br> bf-size: medium<br> - b2gm:<br> search-algo: B2gSearchBNDMq<br> hash-size: high # FER2 low<br>
bf-size: medium<br> - b2g:<br> search-algo: B2gSearchBNDMq<br> hash-size: high # FER2 low<br> bf-size: medium<br> - b3g:<br> search-algo: B3gSearchBNDMq<br> hash-size: high # FER2 low<br> bf-size: medium<br>
- wumanber:<br> hash-size: high # FER2 low<br> bf-size: medium<br><br># Defrag settings:<br><br>defrag:<br> memcap: 256mb<br> hash-size: 65536<br> trackers: 65536 # number of defragmented flows to follow<br>
max-frags: 65536 # number of fragments to keep (higher than trackers)<br> prealloc: yes<br> timeout: 10 # FER 60<br><br># Flow settings:<br># By default, the reserved memory (memcap) for flows is 32MB. This is the limit<br>
# for flow allocation inside the engine. You can change this value to allow<br># more memory usage for flows.<br># The hash-size determines the size of the hash used to identify flows inside<br># the engine, and by default the value is 65536.<br># At startup, the engine can preallocate a number of flows, to get better<br># performance. The number of flows preallocated is 10000 by default.<br># emergency-recovery is the percentage of flows that the engine needs to<br># prune before unsetting the emergency state. The emergency state is activated<br># when the memcap limit is reached, allowing new flows to be created, but<br># pruning them with the emergency timeouts (defined below).<br># If the memcap is reached, the engine will try to prune flows<br># with the default timeouts. If it doesn't find a flow to prune, it will set<br># the emergency bit and it will try again with more aggressive timeouts.<br># If that doesn't work, it will try to kill the least recently seen flows<br># not in use.<br># The memcap can be specified in kb, mb, gb. Just a number indicates it's<br># in bytes.<br><br>flow:<br> memcap: 3gb<br>
hash-size: 1048576 # FER 131072<br> prealloc: 1048576 # FER error? 16gb<br> emergency-recovery: 30<br><br># Specific timeouts for flows. Here you can specify the timeouts that the<br># active flows will wait to transit from the current state to another, on each<br>
# protocol. The value of "new" determines the seconds to wait after a handshake or<br># stream startup before the engine frees the data of that flow if it doesn't<br># change state to established (usually if we don't receive more packets<br># of that flow). The value of "established" is the amount of<br># seconds that the engine will wait to free the flow if it spends that amount<br># without receiving new packets or closing the connection. "closed" is the<br># amount of time to wait after a flow is closed (usually zero).<br>#<br># There's an emergency mode that will become active under attack circumstances,<br># making the engine check flow status faster. These configuration variables<br># use the prefix "emergency-" and work similarly to the normal ones.<br># Some timeouts don't apply to all the protocols, like "closed" for udp and<br># icmp.<br>flow-timeouts:<br><br> default:<br>
new: 2 # 30<br> established: 4 # 300<br> closed: 0<br> emergency-new: 1 # 10<br> emergency-established: 1 # 100<br> emergency-closed: 0<br> tcp:<br> new: 3 # 60<br> established: 5 # 3600<br> closed: 0 # 120<br>
emergency-new: 1 # 10<br> emergency-established: 1 # 300<br> emergency-closed: 0 # 20<br> udp:<br> new: 2 # 30<br> established: 3 # 300<br> emergency-new: 1 # 10<br> emergency-established: 1 # 100<br>
icmp:<br> new: 1 # 30<br> established: 2 # 300<br> emergency-new: 1 # 10<br> emergency-established: 1 # 100<br><br># Stream engine settings. Here the TCP stream tracking and reassembly<br># engine is configured.<br>
#<br># stream:<br># memcap: 32mb # Can be specified in kb, mb, gb. Just a<br># # number indicates it's in bytes.<br># checksum-validation: yes # To validate the checksum of received<br>
# # packet. If csum validation is specified as<br># # "yes", then packet with invalid csum will not<br># # be processed by the engine stream/app layer.<br>
# # Warning: locally generated traffic can be<br># # generated without checksum due to hardware offload<br># # of checksum. You can control the handling of checksum<br>
# # on a per-interface basis via the 'checksum-checks'<br># # option<br># max-sessions: 262144 # 256k concurrent sessions<br># prealloc-sessions: 32768 # 32k sessions prealloc'd<br>
# midstream: false # don't allow midstream session pickups<br># async-oneside: false # don't enable async stream handling<br># inline: no # stream inline mode<br>#<br># reassembly:<br>
# memcap: 64mb # Can be specified in kb, mb, gb. Just a number<br># # indicates it's in bytes.<br># depth: 1mb # Can be specified in kb, mb, gb. Just a number<br>
# # indicates it's in bytes.<br># toserver-chunk-size: 2560 # inspect raw stream in chunks of at least<br># # this size. Can be specified in kb, mb,<br>
# # gb. Just a number indicates it's in bytes.<br># toclient-chunk-size: 2560 # inspect raw stream in chunks of at least<br># # this size. Can be specified in kb, mb,<br>
# # gb. Just a number indicates it's in bytes.<br>stream:<br> memcap: 16gb<br> checksum-validation: no # wrong csums are not rejected<br> inline: no # auto will use inline mode in IPS mode, yes or no set it statically<br> max-sessions: 20000000<br> prealloc-sessions: 10000000<br> reassembly:<br> memcap: 32gb<br> depth: 6mb # reassemble 6mb into a stream<br> toserver-chunk-size: 2560<br> toclient-chunk-size: 2560<br>
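For the memory budget, it can help to total the memcaps configured above against the 192 GB of RAM in the box. Note this is only a lower bound on actual usage, since capture rings, the detect engine, and per-packet overhead are not covered by these memcaps. A minimal sketch, with values copied from this config:

```python
# Sum the main configured memcaps. parse_size mirrors the kb/mb/gb
# suffixes Suricata accepts (a bare number is bytes).
def parse_size(s):
    units = {"kb": 2**10, "mb": 2**20, "gb": 2**30}
    for suffix, mult in units.items():
        if s.endswith(suffix):
            return int(s[:-2]) * mult
    return int(s)

memcaps = {
    "defrag": "256mb",
    "flow": "3gb",
    "stream": "16gb",
    "stream.reassembly": "32gb",
    "host": "64mb",
}
total = sum(parse_size(v) for v in memcaps.values())
print(f"configured memcaps: {total / 2**30:.2f} GiB of 192 GB RAM")
```

This comes to roughly 51 GiB of memcaps, leaving plenty of headroom, though the very large stream/reassembly values mean the engine is allowed to grow substantially under load.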
<br># Host table:<br>#<br># Host table is used by tagging and per host thresholding subsystems.<br>#<br>host:<br> hash-size: 4096<br> prealloc: 10000<br> memcap: 64mb<br><br># Logging configuration. This is not about logging IDS alerts, but<br>
# IDS output about what its doing, errors, etc.<br>logging:<br><br> # The default log level, can be overridden in an output section.<br> # Note that debug level logging will only be emitted if Suricata was<br> # compiled with the --enable-debug configure option.<br>
#<br> # This value is overridden by the SC_LOG_LEVEL env var.<br> default-log-level: info<br><br> # The default output format. Optional parameter, should default to<br> # something reasonable if not provided. Can be overridden in an<br> # output section. You can leave this out to get the default.<br> #<br> # This value is overridden by the SC_LOG_FORMAT env var.<br> #default-log-format: "[%i] %t - (%f:%l) <%d> (%n) -- "<br><br> # A regex to filter output. Can be overridden in an output section.<br> # Defaults to empty (no filter).<br> #<br> # This value is overridden by the SC_LOG_OP_FILTER env var.<br> default-output-filter:<br><br> # Define your logging outputs. If none are defined, or they are all<br> # disabled you will get the default - console output.<br>
outputs:<br> - console:<br> enabled: yes<br> - file:<br> enabled: yes<br> filename: /var/log/suricata/suricata.log<br> - syslog:<br> enabled: no<br> facility: local5<br> format: "[%i] <%d> -- "<br>
<br># PF_RING configuration, for use with native PF_RING support.<br># For more info see <a href="http://www.ntop.org/PF_RING.html">http://www.ntop.org/PF_RING.html</a><br>pfring:<br> - interface: eth0<br> # Number of receive threads (>1 will enable experimental flow pinned<br>
# runmode)<br> threads: 1<br><br> # Default clusterid. PF_RING will load balance packets based on flow.<br> # All threads/processes that will participate need to have the same<br> # clusterid.<br> cluster-id: 99<br>
<br> # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.<br> # This is only supported in versions of PF_RING > 4.1.1.<br> cluster-type: cluster_flow<br> # bpf filter for this interface<br>
#bpf-filter: tcp<br> # Choose checksum verification mode for the interface. At the moment<br> # of the capture, some packets may be with an invalid checksum due to<br> # offloading to the network card of the checksum computation.<br>
# Possible values are:<br> # - rxonly: only compute checksum for packets received by network card.<br> # - yes: checksum validation is forced<br> # - no: checksum validation is disabled<br> # - auto: suricata uses a statistical approach to detect when<br>
# checksum off-loading is used. (default)<br> # Warning: 'checksum-validation' must be set to yes to have any validation<br> #checksum-checks: auto<br> # Second interface<br> #- interface: eth1<br> # threads: 3<br>
# cluster-id: 93<br> # cluster-type: cluster_flow<br><br>pcap:<br> - interface: eth4<br> buffer-size: 1gb<br> checksum-checks: no<br> threads: 8<br> - interface: eth5<br> buffer-size: 1gb<br> checksum-checks: no<br>
threads: 8<br> - interface: eth6<br> buffer-size: 1gb<br> checksum-checks: no<br> threads: 8<br> - interface: eth7<br> buffer-size: 1gb<br> checksum-checks: no<br> threads: 8<br><br> #bpf-filter: "tcp and port 25"<br>
# Choose checksum verification mode for the interface. At the moment<br> # of the capture, some packets may be with an invalid checksum due to<br> # offloading to the network card of the checksum computation.<br>
# Possible values are:<br> # - yes: checksum validation is forced<br> # - no: checksum validation is disabled<br> # - auto: suricata uses a statistical approach to detect when<br> # checksum off-loading is used. (default)<br>
# Warning: 'checksum-validation' must be set to yes to have any validation<br> #checksum-checks: auto<br> # With some accelerator cards using a modified libpcap (like myricom), you<br> # may want to have the same number of capture threads as the number of capture<br>
# rings. In this case, set up the threads variable to N to start N threads<br> # listening on the same interface.<br> #threads: 16<br><br># For FreeBSD ipfw(8) divert(4) support.<br># Please make sure you have ipfw_load="YES" and ipdivert_load="YES"<br>
# in /etc/loader.conf or kldload the appropriate kernel modules.<br># Additionally, you need to have an ipfw rule for the engine to see<br># the packets from ipfw. For example:<br>#<br># ipfw add 100 divert 8000 ip from any to any<br>
#<br># The 8000 above should be the same number you passed on the command<br># line, i.e. -d 8000<br>#<br>ipfw:<br><br> # Reinject packets at the specified ipfw rule number. This config<br> # option is the ipfw rule number AT WHICH rule processing continues<br>
# in the ipfw processing system after the engine has finished<br> # inspecting the packet for acceptance. If no rule number is specified,<br> # accepted packets are reinjected at the divert rule which they entered<br>
# and IPFW rule processing continues. No check is done to verify<br> # this rule makes sense, so care must be taken to avoid loops in ipfw.<br> #<br> ## The following example tells the engine to reinject packets<br>
# back into the ipfw firewall AT rule number 5500:<br> #<br> # ipfw-reinjection-rule-number: 5500<br><br># Set the default rule path here to search for the files.<br># if not set, it will look at the current working dir<br>
default-rule-path: /etc/suricata/rules<br>rule-files:<br> - botcc.rules<br> - ciarmy.rules<br> - compromised.rules<br> - drop.rules<br> - dshield.rules<br> - emerging-activex.rules<br> - emerging-attack_response.rules<br>
# - emerging-chat.rules<br> - emerging-current_events.rules<br> - emerging-dns.rules<br> - emerging-dos.rules<br> - emerging-exploit.rules<br> - emerging-ftp.rules<br> - emerging-games.rules<br> - emerging-icmp_info.rules<br>
- emerging-icmp.rules<br> - emerging-imap.rules<br> - emerging-inappropriate.rules<br> - emerging-malware.rules<br> - emerging-misc.rules<br> - emerging-mobile_malware.rules<br> - emerging-netbios.rules<br># - emerging-p2p.rules<br>
- emerging-policy.rules<br> - emerging-pop3.rules<br> - emerging-rpc.rules<br> - emerging-scada.rules<br> - emerging-scan.rules<br> - emerging-shellcode.rules<br> - emerging-smtp.rules<br> - emerging-snmp.rules<br> - emerging-sql.rules<br>
- emerging-telnet.rules<br> - emerging-tftp.rules<br> - emerging-trojan.rules<br> - emerging-user_agents.rules<br> - emerging-virus.rules<br> - emerging-voip.rules<br> - emerging-web_client.rules<br> - emerging-web_server.rules<br>
- emerging-web_specific_apps.rules<br> - emerging-worm.rules<br> - rbn-malvertisers.rules<br> - rbn.rules<br> - tor.rules<br> - decoder-events.rules # available in suricata sources under rules dir<br># - stream-events.rules # available in suricata sources under rules dir<br>
- http-events.rules # available in suricata sources under rules dir<br> - smtp-events.rules # available in suricata sources under rules dir<br><br>classification-file: /etc/suricata/classification.config<br>reference-config-file: /etc/suricata/reference.config<br>
<br># Holds variables that would be used by the engine.<br>vars:<br><br> # Holds the address group vars that would be passed in a Signature.<br> # These would be retrieved during the Signature address parsing stage.<br>
address-groups:<br><br> HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"<br><br> EXTERNAL_NET: "!$HOME_NET"<br><br> HTTP_SERVERS: "$HOME_NET"<br>
<br> SMTP_SERVERS: "$HOME_NET"<br><br> SQL_SERVERS: "$HOME_NET"<br><br> DNS_SERVERS: "$HOME_NET"<br><br> TELNET_SERVERS: "$HOME_NET"<br><br> AIM_SERVERS: "$EXTERNAL_NET"<br>
<br> DNP3_SERVER: "$HOME_NET"<br><br> DNP3_CLIENT: "$HOME_NET"<br><br> MODBUS_CLIENT: "$HOME_NET"<br><br> MODBUS_SERVER: "$HOME_NET"<br><br> ENIP_CLIENT: "$HOME_NET"<br>
<br> ENIP_SERVER: "$HOME_NET"<br><br> # Holds the port group vars that would be passed in a Signature.<br> # These would be retrieved during the Signature port parsing stage.<br> port-groups:<br><br> HTTP_PORTS: "80"<br>
<br> SHELLCODE_PORTS: "!80"<br><br> ORACLE_PORTS: 1521<br><br> SSH_PORTS: 22<br><br> DNP3_PORTS: 20000<br><br># Set the order of alerts based on actions<br># The default order is pass, drop, reject, alert<br>
action-order:<br> - pass<br> - drop<br> - reject<br> - alert<br><br><br># Host specific policies for defragmentation and TCP stream<br># reassembly. The host OS lookup is done using a radix tree, just<br># like a routing table so the most specific entry matches.<br>
host-os-policy:<br> # Make the default policy windows.<br> windows: [0.0.0.0/0]<br> bsd: []<br> bsd-right: []<br> old-linux: []<br> linux: [10.0.0.0/8, 192.168.1.100, "8762:2352:6241:7245:E000:0000:0000:0000"]<br>
old-solaris: []<br> solaris: ["::1"]<br> hpux10: []<br> hpux11: []<br> irix: []<br> macos: []<br> vista: []<br> windows2k3: []<br><br><br># Limit for the maximum number of asn1 frames to decode (default 256)<br>
asn1-max-frames: 256<br><br># When run with the option --engine-analysis, the engine will read each of<br># the parameters below, and print reports for each of the enabled sections<br># and exit. The reports are printed to a file in the default log dir<br>
# given by the parameter "default-log-dir", with each enabled subsection below<br># printing its report to its own file.<br>engine-analysis:<br> # enables printing reports for fast-pattern for every rule.<br>
rules-fast-pattern: yes<br> # enables printing reports for each rule<br> rules: yes<br><br>#recursion and match limits for PCRE where supported<br>pcre:<br> match-limit: 3500<br> match-limit-recursion: 1500<br><br>###########################################################################<br>
# Configure libhtp.<br>#<br>#<br># default-config: Used when no server-config matches<br># personality: List of personalities used by default<br># request-body-limit: Limit reassembly of request body for inspection<br>
# by http_client_body & pcre /P option.<br># response-body-limit: Limit reassembly of response body for inspection<br># by file_data, http_server_body & pcre /Q option.<br>
# double-decode-path: Double decode path section of the URI<br># double-decode-query: Double decode query section of the URI<br>#<br># server-config: List of server configurations to use if address matches<br>
# address: List of ip addresses or networks for this block<br># personality: List of personalities used by this block<br># request-body-limit: Limit reassembly of request body for inspection<br>
# by http_client_body & pcre /P option.<br># response-body-limit: Limit reassembly of response body for inspection<br># by file_data, http_server_body & pcre /Q option.<br>
# double-decode-path: Double decode path section of the URI<br># double-decode-query: Double decode query section of the URI<br>#<br># Currently Available Personalities:<br># Minimal<br># Generic<br># IDS (default)<br>
# IIS_4_0<br># IIS_5_0<br># IIS_5_1<br># IIS_6_0<br># IIS_7_0<br># IIS_7_5<br># Apache<br># Apache_2_2<br>###########################################################################<br>libhtp:<br><br> default-config:<br>
personality: IDS<br><br> # Can be specified in kb, mb, gb. Just a number indicates<br> # it's in bytes.<br> request-body-limit: 16kb<br> response-body-limit: 16kb<br><br> # inspection limits<br>
request-body-minimal-inspect-size: 16kb<br> request-body-inspect-window: 16kb<br> response-body-minimal-inspect-size: 16kb<br> response-body-inspect-window: 16kb<br><br> # decoding<br> double-decode-path: no<br>
double-decode-query: no<br><br> server-config:<br><br> - apache:<br> address: [192.168.0.0/16, 127.0.0.0/8, "::1"]<br> personality: Apache_2_2<br>
# Can be specified in kb, mb, gb. Just a number indicates<br> # it's in bytes.<br> request-body-limit: 16kb<br> response-body-limit: 16kb<br> double-decode-path: no<br> double-decode-query: no<br>
<br> - iis7:<br> address:<br> - 192.168.0.0/16<br> # - 192.168.10.0/24<br> personality: IIS_7_0<br> # Can be specified in kb, mb, gb. Just a number indicates<br>
# it's in bytes.<br> request-body-limit: 16kb<br> response-body-limit: 16kb<br> double-decode-path: no<br> double-decode-query: no<br><br># Profiling settings. Only effective if Suricata has been built with the<br>
# --enable-profiling configure flag.<br>#<br>profiling:<br><br> # rule profiling<br> rules:<br><br> # Profiling can be disabled here, but it will still have a<br> # performance impact if compiled in.<br> enabled: yes<br>
filename: rule_perf.log<br> append: yes<br><br> # Sort options: ticks, avgticks, checks, matches, maxticks<br> sort: avgticks<br><br> # Limit the number of items printed at exit.<br> limit: 100<br><br> # packet profiling<br>
packets:<br><br> # Profiling can be disabled here, but it will still have a<br> # performance impact if compiled in.<br> enabled: yes<br> filename: packet_stats.log<br> append: yes<br><br> # per packet csv output<br>
csv:<br><br> # Output can be disabled here, but it will still have a<br> # performance impact if compiled in.<br> enabled: no<br> filename: packet_stats.csv<br><br> # profiling of locking. Only available when Suricata was built with<br>
# --enable-profiling-locks.<br> locks:<br> enabled: no<br> filename: lock_stats.log<br> append: yes<br><br># Suricata core dump configuration. Limits the size of the core dump file to<br># approximately max-dump. The actual core dump size will be a multiple of the<br>
# page size. Core dumps that would be larger than max-dump are truncated. On<br># Linux, the actual core dump size may be a few pages larger than max-dump.<br># Setting max-dump to 0 disables core dumping.<br># Setting max-dump to 'unlimited' will give the full core dump file.<br>
# On 32-bit Linux, a max-dump value >= ULONG_MAX may cause the core dump size<br># to be 'unlimited'.<br><br>coredump:<br> max-dump: unlimited<br><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">
2013/6/7 Listman <span dir="ltr"><<a href="mailto:list.man@bluejeantime.com" target="_blank">list.man@bluejeantime.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word">Can you post your configuration? Are you using a 64bit system?<div><br></div><div><br></div><div>ZK</div><div><div class="h5"><div><br></div><div><br><div><div>On Jun 7, 2013, at 8:48 AM, Fernando Sclavo <<a href="mailto:fsclavo@gmail.com" target="_blank">fsclavo@gmail.com</a>> wrote:</div>
<br><blockquote type="cite"><div dir="ltr"><div>Victor, threads are 16 in the af-packet settings. Nevertheless, based on your second comment, we will move back to workers mode.<br></div>Thanks<br></div><div class="gmail_extra">
<br><br><div class="gmail_quote">
2013/6/7 Victor Julien <span dir="ltr"><<a href="mailto:lists@inliniac.net" target="_blank">lists@inliniac.net</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><div>On 06/07/2013 02:24 PM, Fernando Sclavo wrote:<br>
> Hi all.<br>
> Trying to balance the load on all CPUs (and finally, reduce kernel<br>
> dropped packets) we set Suricata from workers to auto mode. In this mode<br>
> CPU consumption is about 1/3 of that in workers mode, but stats no longer<br>
> show packet drops, so we can't tell whether packets are being dropped or not.<br>
> How can we see packet drops in af-packet auto mode?<br>
> And another question: we see one Receive thread per NIC, and sometimes<br>
> these threads go to 100% CPU. Is there any way to split them across more<br>
> than one thread, as we can with detect threads?<br>
<br>
</div></div>You should be able to use the 'threads' option in the af-packet per-NIC<br>
settings for this.<br>
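<br>As an illustrative sketch only (the interface name, thread count and cluster-id below are invented for the example, not taken from the config above), a per-interface af-packet entry using 'threads' might look like:<br><br>af-packet:<br> - interface: eth5<br> threads: 8<br> cluster-id: 99<br> cluster-type: cluster_flow<br> defrag: yes<br><br>With 'runmode: workers', each of those threads then handles its own share of flows end to end, so receive load is spread across more than one thread per NIC.<br>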
<br>
I don't recommend 'auto' mode. Autofp or workers is the way to go.<br>
<span><font color="#888888"><br>
--<br>
---------------------------------------------<br>
Victor Julien<br>
<a href="http://www.inliniac.net/" target="_blank">http://www.inliniac.net/</a><br>
PGP: <a href="http://www.inliniac.net/victorjulien.asc" target="_blank">http://www.inliniac.net/victorjulien.asc</a><br>
---------------------------------------------<br>
<br>
_______________________________________________<br>
Suricata IDS Users mailing list: <a href="mailto:oisf-users@openinfosecfoundation.org" target="_blank">oisf-users@openinfosecfoundation.org</a><br>
Site: <a href="http://suricata-ids.org/" target="_blank">http://suricata-ids.org</a> | Support: <a href="http://suricata-ids.org/support/" target="_blank">http://suricata-ids.org/support/</a><br>
List: <a href="https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users" target="_blank">https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users</a><br>
OISF: <a href="http://www.openinfosecfoundation.org/" target="_blank">http://www.openinfosecfoundation.org/</a><br>
</font></span></blockquote></div><br></div>
</blockquote>
</div><br></div></div></div></div>
</blockquote></div><br></div>