[Oisf-users] modding config to make IPS faster

Chris Boley ilgtech75 at gmail.com
Sun Apr 10 15:35:58 UTC 2016

Greetings to the OISF group. Apologies in advance; this is long winded.
I have been reading great info from this list for quite some time.
Most importantly, thanks for that!

I'm tuning an IPS that is monitoring an 802.1Q trunk link.
The link runs between a Cisco Catalyst 3750G and a Cisco 2821.
The router operates in a 'router on a stick' architecture, with HSRP
between the VLAN interfaces on the switch and the dot1q subinterfaces
on the router for redundancy.

I've read a lot of Eric Leblond's and Peter Manev's blog posts, and some
of the ideas in my configs come from them. The overall config package is
actually from a 3rd party, but performance is not what I need it to be so
far. I'm very 'hands on' and want to effect as much positive change to the
performance of the system as I can.

My objective is to ignore intra-site traffic completely while scanning all
traffic between the WAN and the local LAN. I'm using a somewhat
underpowered server.

It's a 4-core Atom with 2.4 GHz cores and 8 GB of RAM. It has 4 Intel
NICs running the igb driver. (RAM can be upgraded if you recommend it,
no problem!) I'm planning to upgrade to an 8-core Atom.
The software platform is Ubuntu.

First, I tried to divert LAN-to-LAN traffic around Suricata completely,
since I don't want to scan intra-LAN traffic.
My experience with iptables is quite limited, so I muddled through it.
I cobbled together the chains you see below to pass the intra-LAN traffic
and send the other traffic to the scanning engine. This seems to work,
but I'm not sure I wrote the rules in the most efficient or correct way.

Second, I tried adding -q 0 -q 1 -q 2 -q 3 to the startup command, and
--queue-balance 0:3 to the NFQUEUE iptables rule.
Is that buying me any performance?
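For what it's worth, my understanding (an assumption, please correct me if
I have it wrong) is that --queue-balance spreads flows across the queue
range, and each -q on the Suricata side attaches one verdict thread to one
of those queues, so the two ranges have to match. A minimal sketch of that
pairing, leaving out the source/destination matches from my real rules:

```shell
# Fan flows out across NFQUEUE queues 0-3 (per-flow load balancing) ...
iptables -I FORWARD -j NFQUEUE --queue-balance 0:3

# ... and have Suricata pick up all four queues, one thread per queue.
suricata -q 0 -q 1 -q 2 -q 3 -c /etc/suricata/suricata.yaml -D -v
```

If only -q 0 were given, queues 1-3 would have no consumer and those
packets would stall, so the ranges matching seems important.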

Other than rule tuning, I'm still looking for ways to tune the IPS that
will speed things up. It's working, but traffic is only being processed at
a maximum of 4 megabit on a 40 megabit internet connection. I'm sure
out-of-order packet reassembly is a big player here, and I'm curious how
to optimize it.

It seems as if I'm going to have to add more RAM for stream reassembly and
change some key values. I'm trying to tune the config to sustain scanning
speeds of at least 37-40 megabit.
I have to assume the 3rd party setup is fairly "vanilla", especially
seeing it only handle 4 megabit inline.

I read some good documentation from Peter, but I'm not exactly sure how to
apply those ideas to my link and hardware. I'm looking for words of wisdom
there.

Can anyone recommend a place or URL that would help me understand the key
values to pass on my startup command?
I plan to use --set options in the startup script, since those are easy to
preserve across an upgrade. I'm trying hard to avoid modifying the 3rd
party suricata.yaml; I'm assuming it'll be overwritten if we upgrade the
appliance via their canned process.

I'm sure I need to adjust my memcaps and reassembly values.
Also, I don't understand threading very well, or how it relates to the
-q 0 -q 1 -q 2 -q 3 settings on the Suricata start command. I'll shut up
now and ask for advice. You'll find most of the pertinent settings listed
below, along with some of my proposed changes.
Any questions, suggestions and feedback are welcome!
Thank you!
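To make it concrete, the kind of --set overrides I have in mind look like
this. The specific values are guesses for an 8 GB box, not numbers I've
validated, so please treat them as placeholders:

```shell
# Override selected suricata.yaml values from the command line so the
# 3rd-party yaml stays untouched. All values below are illustrative
# guesses, not tested recommendations.
suricata -q 0 -q 1 -q 2 -q 3 -c /etc/suricata/suricata.yaml -D -v \
    --set stream.memcap=256mb \
    --set stream.reassembly.memcap=512mb \
    --set flow.memcap=128mb \
    --set max-pending-packets=2048
```

The appeal is that the whole tuning lives in the startup script, which is
trivial to back up before an appliance upgrade.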

3rd party "suricata --dump-config" output. This is what's currently in there:

default-log-dir = /var/log/suricata/
outputs = (null)
outputs.0 = unified2-alert
outputs.0.unified2-alert = (null)
outputs.0.unified2-alert.enabled = yes
outputs.0.unified2-alert.filename = unified2.alert
outputs.1 = file-store
outputs.1.file-store = (null)
outputs.1.file-store.enabled = yes
outputs.1.file-store.log-dir = /root/filestore
outputs.1.file-store.force-magic = no
outputs.1.file-store.force-md5 = no
detect-engine = (null)
detect-engine.0 = profile
detect-engine.0.profile = medium
detect-engine.1 = rule-reload
detect-engine.1.rule-reload = true
detect-engine.2 = delayed-detect
detect-engine.2.delayed-detect = yes
vlan = (null)
vlan.use-for-tracking = true
app-layer = (null)
app-layer.protocols = (null)
app-layer.protocols.tls = (null)
app-layer.protocols.tls.enabled = yes
app-layer.protocols.tls.detection-ports = (null)
app-layer.protocols.tls.detection-ports.dp = 443
app-layer.protocols.dcerpc = (null)
app-layer.protocols.dcerpc.enabled = yes
app-layer.protocols.ftp = (null)
app-layer.protocols.ftp.enabled = yes
app-layer.protocols.ssh = (null)
app-layer.protocols.ssh.enabled = yes
app-layer.protocols.smtp = (null)
app-layer.protocols.smtp.enabled = yes
app-layer.protocols.imap = (null)
app-layer.protocols.imap.enabled = detection-only
app-layer.protocols.msn = (null)
app-layer.protocols.msn.enabled = detection-only
app-layer.protocols.smb = (null)
app-layer.protocols.smb.enabled = yes
app-layer.protocols.smb.detection-ports = (null)
app-layer.protocols.smb.detection-ports.dp = 139
app-layer.protocols.dns = (null)
app-layer.protocols.dns.tcp = (null)
app-layer.protocols.dns.tcp.enabled = yes
app-layer.protocols.dns.tcp.detection-ports = (null)
app-layer.protocols.dns.tcp.detection-ports.dp = 53
app-layer.protocols.dns.udp = (null)
app-layer.protocols.dns.udp.enabled = yes
app-layer.protocols.dns.udp.detection-ports = (null)
app-layer.protocols.dns.udp.detection-ports.dp = 53
app-layer.protocols.http = (null)
app-layer.protocols.http.enabled = yes
magic-file = /usr/share/file/magic
nfq = (null)
nfq.mode = repeat
nfq.repeat-mark = 1
nfq.repeat-mask = 1
threading = (null)
threading.detect-thread-ratio = 1
logging = (null)
logging.default-log-level = info
logging.default-output-filter =
logging.outputs = (null)
logging.outputs.0 = console
logging.outputs.0.console = (null)
logging.outputs.0.console.enabled = yes
logging.outputs.1 = file
logging.outputs.1.file = (null)
logging.outputs.1.file.enabled = yes
logging.outputs.1.file.filename = /var/log/suricata.log
default-rule-path = /var/lib/cs-apd
rule-files = (null)
rule-files.0 = suricata.rules
classification-file = /var/lib/cs-apd/classification.config
reference-config-file = /var/lib/cs-apd/reference.config
vars = (null)
vars.address-groups = (null)
vars.address-groups.HOME_NET =,,
vars.address-groups.ENIP_SERVER = $HOME_NET
vars.address-groups.MODBUS_CLIENT = $HOME_NET
vars.address-groups.TELNET_SERVERS = $HOME_NET
vars.address-groups.MODBUS_SERVER = $HOME_NET
vars.address-groups.DNP3_CLIENT = $HOME_NET
vars.address-groups.FTP_SERVERS = $HOME_NET
vars.address-groups.DNS_SERVERS = $HOME_NET
vars.address-groups.SNMP_SERVERS = $HOME_NET
vars.address-groups.SQL_SERVERS = $HOME_NET
vars.address-groups.ENIP_CLIENT = $HOME_NET
vars.address-groups.HTTP_SERVERS = $HOME_NET
vars.address-groups.SMTP_SERVERS = $HOME_NET
vars.address-groups.EXTERNAL_NET = any
vars.address-groups.DNP3_SERVER = $HOME_NET
vars.port-groups = (null)
vars.port-groups.ORACLE_PORTS = 1521
vars.port-groups.SHELLCODE_PORTS = !80
vars.port-groups.DNP3_PORTS = 20000
vars.port-groups.HTTP_PORTS = [80,8080]
vars.port-groups.SSH_PORTS = 22
vars.port-groups.FTP_PORTS = 21
action-order = (null)
action-order.0 = pass
action-order.1 = drop
action-order.2 = reject
action-order.3 = alert
** Note: I also have interface tuning scripts that run on the bridge
interfaces to disable the NIC offloading.
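The offload-disabling part of those scripts is roughly this (a sketch;
exact flag support varies by NIC and driver, and ethtool just warns on
unsupported ones):

```shell
# Disable NIC offloads on the bridge members so the engine sees packets
# as they appear on the wire, rather than coalesced super-packets that
# confuse stream reassembly.
for i in eth2 eth3; do
    ethtool -K $i gro off lro off tso off gso off rx off tx off sg off
done
```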

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address x.x.x.x
        netmask x.x.x.x
        gateway x.x.x.x
        dns-nameservers x.x.x.x x.x.x.x
        dns-search x

auto eth2
iface eth2 inet manual
        pre-up modprobe 8021q
        post-up ifconfig $IFACE up
        pre-down ifconfig $IFACE down

auto eth3
iface eth3 inet manual
        post-up ifconfig $IFACE up
        pre-down ifconfig $IFACE down

auto br0
iface br0 inet static
        bridge_ports eth2 eth3
        bridge_stp on
        up /sbin/ifconfig $IFACE up || /sbin/true
        post-up ifconfig eth2 mtu 1500
        post-up ifconfig eth3 mtu 1500
        post-up ethtool -s eth2 autoneg off speed 1000 duplex full
        post-up ethtool -s eth3 autoneg off speed 1000 duplex full

iptables/netfilter rules. Suggestions here would be great if I'm botching
something up:
iptables -I FORWARD -s ! -d -j NFQUEUE
--queue-balance 0:3
iptables -A FORWARD -m physdev --physdev-in eth2 -j ACCEPT
iptables -A FORWARD -m physdev --physdev-in eth3 -j ACCEPT

iptables -I INPUT -i lo -j ACCEPT
iptables -I INPUT -i eth0 -j ACCEPT
iptables -I INPUT ! -s -j NFQUEUE --queue-balance 0:3

iptables -A OUTPUT -m physdev --physdev-in eth2 -j ACCEPT
iptables -A OUTPUT -m physdev --physdev-in eth3 -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -o eth0 -j ACCEPT
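To make the intent of those rules easier to discuss, here's the same chain
logic with a hypothetical placeholder subnet filled in; 192.168.10.0/24 is
made up for illustration, since the real addresses were scrubbed above:

```shell
# Hypothetical placeholder for the local LAN range (illustration only).
LAN=192.168.10.0/24

# Anything that is not LAN-to-LAN goes to Suricata's queues 0-3:
# traffic arriving from outside the LAN, or leaving the LAN for the WAN.
iptables -I FORWARD ! -s $LAN -j NFQUEUE --queue-balance 0:3
iptables -I FORWARD -s $LAN ! -d $LAN -j NFQUEUE --queue-balance 0:3

# Everything else bridged between eth2/eth3 (intra-LAN) bypasses the
# scanning engine entirely.
iptables -A FORWARD -m physdev --physdev-in eth2 -j ACCEPT
iptables -A FORWARD -m physdev --physdev-in eth3 -j ACCEPT
```

That is at least what I'm trying to express: queue the LAN<->WAN traffic,
accept the intra-LAN traffic untouched.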
current startup:
suricata -q 0 -q 1 -q 2 -q 3 -c /etc/suricata/suricata.yaml -D -v
Here are some things I was considering changing:

Possible changes that would buy me more filter speed by designating
specific traffic to scan:
* Add in Berkeley packet filtering (BPF).

bpf_file would contain:

(ip and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 53)
or (ip and tcp dst port 80)
or (ip and tcp src port 80 and
    (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0
     or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))
or (vlan and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 53)
or (vlan and tcp dst port 80)
or (vlan and tcp src port 80 and
    (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0
     or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))
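Before handing a filter like this to Suricata, I figured I could
sanity-check it by letting tcpdump compile it; a sketch, assuming the
bpf_file path above:

```shell
# Compile the candidate BPF expression without capturing anything.
# -d dumps the compiled program, so a syntax or parenthesis error in
# the filter fails loudly here instead of at IPS startup.
tcpdump -i br0 -d "$(cat /home/ipsadmin/netfilt/bpf_file)" | head -5
```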
Considering starting suricata like this:
suricata -q 0 -q 1 -q 2 -q 3 -c /etc/suricata/suricata.yaml --af-packet=br0
-D -v -F /home/ipsadmin/netfilt/bpf_file

Thanks again. Any key values I can tune, or a pointer to a place where I
can learn more about tuning them, would be most appreciated!!
