[Oisf-users] Suricata under libvirt

Chris Boley ilgtech75 at gmail.com
Tue May 31 13:02:46 UTC 2016


Andreas, sorry this is lengthy. I tried to include useful detail, but
unfortunately I ramble on a lot:

I had this idea because I wanted two or three working Suricata test OSes
for experimenting with things in the YAML and other configs. This lets me
set up multiple VMs with varying configs and find out what works and what
doesn't. That's why I did it.

My basic setup:
I had four Ethernet adapters installed in a Dell 2970 with the latest
BIOS, two 6-core Opterons running at 2.6 GHz, and 32 GB of RAM.

Why four NICs?
1. Host interface
2. VM management interface
3. Bridge side A
4. Bridge side B


If your physical host interfaces are named em1, em2, or the like in
Ubuntu, edit /etc/default/grub

find these two lines:
GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"
GRUB_CMDLINE_LINUX=""


Modify them to look like:

GRUB_CMDLINE_LINUX_DEFAULT="splash quiet biosdevname=0"
GRUB_CMDLINE_LINUX="biosdevname=0"

sudo update-grub

Modify your operational host interface in the /etc/network/interfaces
file to use eth0 instead of whatever it was named before.
If you don't, you'll find yourself without a working interface.
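For example, a minimal post-rename stanza might look like this (a sketch
assuming DHCP; substitute your real addressing):

auto eth0
iface eth0 inet dhcp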

Then reboot.

---------------------------------------------------------------------------------------------

** macvtap in passthrough mode must have a dedicated physical interface to
tie itself to.


You've got to create a specialized network configuration within the libvirt
setup.
** Caveats ** You need a hardware and BIOS platform that supports SR-IOV.
How do you figure that out? Do the homework, or simply experiment. I just
tried building the VM initially via virt-manager, and it let me do it;
meaning, the toolkit detected the right resources to build with the
parameters I gave it.

When I looked up SR-IOV capabilities online through Dell, I didn't see
where it was supported, but I tried it anyway and it worked.
If it doesn't, libvirt will just bark at you and tell you that you can't do
that. Here's a good link explaining SR-IOV:


http://blog.scottlowe.org/2009/12/02/what-is-sr-iov/
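If you'd rather check before experimenting, one quick test (a sketch; it
assumes your NIC exposes the standard PCI capability) is to grep lspci for
the SR-IOV capability string:

sudo lspci -vvv | grep -i 'single root i/o virtualization'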

---A note about GUI versus CLI installs
For those of you who are adamant about staying on SSH CLI access only with
the host: the script below will start a virtual machine net install of
Ubuntu 14.04 64-bit in your terminal with full hardware acceleration.
The machine will initially have a NAT-based virtual network interface, 16
GB of RAM, and 8 vCPUs.
You'll be dealing with a qcow2-based hard disk file, which doesn't offer
the best performance, but I like it because of being able to snapshot.
Look up libvirt snapshot + qcow2 and you'll find some good articles on
snapshotting. I'm not going to write a book here.  :)
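As a quick taste of why that matters (the snapshot name here is
hypothetical; internal snapshots require qcow2), you can checkpoint the
guest before a risky YAML change and roll back:

virsh snapshot-create-as IPSTEST pre-yaml-tweak
virsh snapshot-revert IPSTEST pre-yaml-tweak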

I toyed with setting up the interfaces right from this install script.
Four hours later, it ended up being more trouble than it was worth. I used
virt-manager from an X-windows GUI in lieu of all the headache.

Plus, there are adjustments you can make in virt-manager that I've never
figured out how to do in virsh.
My most sincere recommendation is to install a really lightweight window
manager. I tend to install the Lubuntu minimal desktop during my server
install and add in some basics like leafpad, but most importantly
virt-manager. Use whatever desktop you want; maybe Blackbox or Fluxbox
would be a great choice, as they use almost no resources.
You'll find manipulating VMs with virt-manager is much easier. Moreover,
you can tailor your CPU settings more easily, which is kind of important
for Suricata.

Alternatively, you can stick to the CLI and modify the XML with 'virsh
edit'. Beware that it defaults to 'vi', which is kind of a PITA.
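One small consolation (standard virsh behavior, not specific to this
setup): virsh honors the EDITOR environment variable, so you can swap vi
out, e.g.:

EDITOR=nano virsh edit IPSTEST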
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
sudo apt-get -y install qemu-kvm libvirt-bin bridge-utils qemu-system
(add virt-manager to that list if you're going to install a GUI....)

qemu-img create -f qcow2 -o preallocation=full /home/ipsadmin/vmimgs/virtdrive.qcow2 64G
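To sanity-check the image before pointing virt-install at it (just a
verification step I'd suggest):

qemu-img info /home/ipsadmin/vmimgs/virtdrive.qcow2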

virt-install --connect=qemu:///system \
--name=IPSTEST \
--ram 16384 \
--disk path=/home/ipsadmin/vmimgs/virtdrive.qcow2,format=qcow2,bus=virtio,cache=none,size=64 \
--vcpus=8 \
--os-type linux \
--os-variant ubuntutrusty \
--network bridge=virbr0 \
--check-cpu \
--hvm \
--location 'http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/' \
--graphics none \
--console pty,target_type=serial \
--extra-args="console=ttyS0,115200n8 serial"


*** Add the ' --debug ' flag to that command if you want verbose output of
what's going on.

---- For connecting to the VM after the first time ----

The first time, based on the above settings, it will connect automatically.
After that:

virsh start IPSTEST

virsh console IPSTEST


-----------------------------------------------------------------------------------------------------------------------------------
---- For editing the machine XML if you want to stick to the CLI. ** This
is an example ** Modify it to fit your needs.
Find the adapter section and paste, in place of the original adapter
segment, something that looks like the example below.
Obviously you need three adapters for the VM: two for the bridge and one
for the management interface. Be mindful of slot numbers and alias naming
nomenclature. Logically, you'll find that most things in the XML, like
slot numbers, increment. Just make sure you don't conflict with something
else; you'll break the VM in that case. Make a backup before you edit.
Eth1 in the case below takes the place of what used to be your virtual
adapter that was NATed from inside to outside the host. Now you've tied it
to a physical interface, and it will pull DHCP addresses from the physical
network.


<!-- Each interface needs its own unique MAC address, its own source
     device, and its own macvtap target -->
<interface type='direct'>
      <mac address='52:54:00:67:7e:5d'/>
      <source dev='eth1' mode='passthrough'/>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
<interface type='direct'>
      <mac address='52:54:00:67:7e:5e'/>
      <source dev='eth2' mode='passthrough'/>
      <target dev='macvtap1'/>
      <model type='virtio'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
<interface type='direct'>
      <mac address='52:54:00:67:7e:5f'/>
      <source dev='eth3' mode='passthrough'/>
      <target dev='macvtap2'/>
      <model type='virtio'/>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
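If you'd rather skip hand-editing entirely, virt-install can create the
same macvtap passthrough NICs up front; a sketch per the virt-install man
page (the interface names are assumptions carried over from the example
above):

virt-install ... \
--network type=direct,source=eth1,source_mode=passthrough,model=virtio \
--network type=direct,source=eth2,source_mode=passthrough,model=virtio \
--network type=direct,source=eth3,source_mode=passthrough,model=virtio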
------------------------------------------------------------------------------------------------------------------------------------------

My full xml off my primary test machine looks like this:

idsadmin@SRVCHSURICATA1:~$ virsh dumpxml CSNIPESNSR01
<domain type='kvm' id='2'>
  <name>CSNIPESNSR01</name>
  <uuid>ec0d0563-aa0a-ef29-5f24-ba5ba1416b50</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='i686' machine='pc-i440fx-trusty'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Opteron_G3</model>
    <vendor>AMD</vendor>
    <feature policy='require' name='skinit'/>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='mmxext'/>
    <feature policy='require' name='fxsr_opt'/>
    <feature policy='require' name='cr8legacy'/>
    <feature policy='require' name='ht'/>
    <feature policy='require' name='3dnowprefetch'/>
    <feature policy='require' name='3dnowext'/>
    <feature policy='require' name='wdt'/>
    <feature policy='require' name='extapic'/>
    <feature policy='require' name='pdpe1gb'/>
    <feature policy='require' name='osvw'/>
    <feature policy='require' name='ibs'/>
    <feature policy='require' name='cmp_legacy'/>
    <feature policy='require' name='3dnow'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/idsadmin/CSNIPE/snipedrive.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <serial>1</serial>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
    </controller>
    <interface type='direct'>
      <mac address='52:54:00:b1:16:c2'/>
      <source dev='eth3' mode='passthrough'/>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
    </interface>
    <interface type='direct'>
      <mac address='52:54:00:3e:db:eb'/>
      <source dev='eth0' mode='passthrough'/>
      <target dev='macvtap1'/>
      <model type='virtio'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
    </interface>
    <interface type='direct'>
      <mac address='52:54:00:1c:71:70'/>
      <source dev='eth1' mode='passthrough'/>
      <target dev='macvtap2'/>
      <model type='virtio'/>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08'
function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='apparmor' relabel='yes'>
    <label>libvirt-ec0d0563-aa0a-ef29-5f24-ba5ba1416b50</label>
    <imagelabel>libvirt-ec0d0563-aa0a-ef29-5f24-ba5ba1416b50</imagelabel>
  </seclabel>
</domain>
----------------------------------------------------------------------------------------------------------------------------

I set up my interfaces on the host like what's below here. I used eth0
and eth1 on the VM bridge. Eth2 is my host interface, and eth3 is the
third interface, passed through to the VM as its dedicated management
interface. This way you can SSH into the VM and manage it.

The 'interfacetune' file listed in the interfaces script is a page torn
directly out of Peter Manev's GitHub page on how best to tune interfaces
for Suricata use. (Thanks again, Peter. Your shared info is really
awesome!) See what's in the script below.
Physical host interfaces file:

# The loopback network interface
auto lo
iface lo inet loopback

auto eth2
iface eth2 inet static
        address 192.168.5.6
        netmask 255.255.255.0
        gateway 192.168.5.1
        dns-nameservers 192.168.5.1

auto eth3
iface eth3 inet manual
   pre-up modprobe 8021q
   post-up ifconfig $IFACE up
   pre-down ifconfig $IFACE down

auto eth0
iface eth0 inet manual
   post-up ifconfig $IFACE up
   post-up ifconfig eth0 mtu 1520
   post-up /etc/network/if-up.d/interfacetune
   post-up ethtool -s eth0 autoneg off speed 1000 duplex full
   pre-down ifconfig $IFACE down

auto eth1
iface eth1 inet manual
   post-up ifconfig $IFACE up
   post-up ifconfig eth1 mtu 1520
   post-up /etc/network/if-up.d/interfacetune
   post-up ethtool -s eth1 autoneg off speed 1000 duplex full
   pre-down ifconfig $IFACE down


---------------------------------------------------------------------------------------------------------------------


idsadmin@SRVCHSURICATA1:~$ sudo cat /etc/network/if-up.d/interfacetune
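# Note (added for clarity): ifupdown exports $IFACE to pre-up/post-up
# hooks like this one, so the script tunes whichever interface just came up.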
/sbin/ethtool -G $IFACE rx 4096 >/dev/null 2>&1 ;
for i in rx tx sg tso ufo gso gro lro rxvlan txvlan; do /sbin/ethtool -K $IFACE $i off >/dev/null 2>&1; done;

/sbin/ethtool -N $IFACE rx-flow-hash udp4 sdfn >/dev/null 2>&1;
/sbin/ethtool -N $IFACE rx-flow-hash udp6 sdfn >/dev/null 2>&1;
/sbin/ethtool -C $IFACE rx-usecs 1 rx-frames 0 >/dev/null 2>&1;
/sbin/ethtool -C $IFACE adaptive-rx off >/dev/null 2>&1;

exit 0
----------------------------------------------------------------------------------------------------------------------------------------------------
HERE'S THE VM INTERFACE FILE:
idsadmin@SRVCHIPSSNSR01:~$ sudo cat /etc/network/interfaces
[sudo] password for idsadmin:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.5.5
        netmask 255.255.255.0
        gateway 192.168.5.1
        dns-nameservers 192.168.5.1

auto eth1
iface eth1 inet manual
   pre-up modprobe 8021q
   post-up ifconfig $IFACE up
   post-up /etc/network/if-up.d/interfacetune
   pre-down ifconfig $IFACE down

auto eth2
iface eth2 inet manual
   post-up ifconfig $IFACE up
   post-up /etc/network/if-up.d/interfacetune
   pre-down ifconfig $IFACE down


auto br0
iface br0 inet static
        address 0.0.0.0
        netmask 255.255.255.255
        bridge_ports eth1 eth2
        bridge_stp off
        post-up ifconfig eth1 mtu 1520
        post-up ifconfig eth2 mtu 1520
        post-up ethtool -s eth2 autoneg off speed 1000 duplex full
        post-up ethtool -s eth1 autoneg off speed 1000 duplex full
        post-up /etc/network/if-up.d/interfacetune
        post-down brctl delbr br0
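Once that's up, a quick check that the bridge actually enslaved both
interfaces (a verification step I'd suggest):

brctl show br0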
-------------------------------------------------------------------------------------------------------------------------------
** Notes -- I was scanning a 1-gigabit trunk link between a Cisco 3750G
interface and a Cisco 2911 ISR router with dot1q sub-interfaces, if anyone
cares. Cisco devices get pissed when you inject a Linux bridge in between
CDP neighbors. On your Cisco parent interface configs, a ' no cdp enable '
is prudent for the interfaces directly facing the Linux bridge.
There's no need to set up individual subinterfaces on the bridge; the
'pre-up modprobe 8021q' is all that's necessary to teach the bridge how to
pass VLAN tags correctly.
Also, hard-code all speed and duplex settings, or you're asking for a lot
of frustration and duplexing errors. (A matching Cisco-side snippet
follows below.)
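On the Cisco side, that boils down to something like this on each
bridge-facing parent interface (the interface name is hypothetical; adjust
to your topology):

interface GigabitEthernet1/0/24
 no cdp enable
 speed 1000
 duplex full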
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Last but not least, I also had HSRP (Hot Standby Router Protocol), which
uses multicast keepalives, running between the router and the L3 switch.
I used the IPTABLES rules below to shove the traffic toward NFQUEUE on
the VM.

Compile Suricata on the guest, and do it with NFQUEUE flags; that's a
whole different procedure in and of itself (see the sketch below).
If anyone decides to do this, I'd love to see whether it works in a VM
with the host CPU attributes copied directly into the VM, compiled with
Hyperscan on an Intel CPU set. I used Gen 3 Opterons in my rig, so I
couldn't utilize Hyperscan.
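For reference, the NFQUEUE-enabled build I mean looks roughly like this (a
sketch; the dev package names are Ubuntu's, and your configure options may
differ):

sudo apt-get install libnetfilter-queue-dev libnfnetlink-dev
cd suricata-3.0.1
./configure --enable-nfqueue --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make
sudo make install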
------------------------------------------------------------------------------------------------------------------------------------
sudo iptables -I FORWARD -m physdev --physdev-in eth1 -j NFQUEUE --queue-balance 0:7
sudo iptables -I FORWARD -m physdev --physdev-in eth2 -j NFQUEUE --queue-balance 0:7
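To confirm packets are actually landing in the queues (another
verification step I'd suggest):

cat /proc/net/netfilter/nfnetlink_queue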

I started Suricata like this (one -q per queue, matching the 0:7
queue-balance range above):
sudo suricata -q 0 -q 1 -q 2 -q 3 -q 4 -q 5 -q 6 -q 7 -c /home/idsadmin/suricata-3.0.1/suricata.yaml

It works really well as far as I can tell. I'd post performance data too,
but I can only generate traffic with a few end-user nodes, as I'm not
running this in production. I'd like to see if anybody could post results
from high-utilization environments with a similar setup.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------



On Sat, May 28, 2016 at 6:40 PM, Andreas Herz <andi at geekosphere.org> wrote:

> On 19/05/16 at 19:12, Chris Boley wrote:
> > I have been playing with using suricata ' inline ' using KVM/QEMU
> > <http://libvirt.org/drvqemu.html> by way of the libvirt toolkit.
> > I realize that the setups will vary wildly based on the hardware platform
> > capabilities. I'm wondering if anyone else here on the list could share
> > with me any experiences they've had on the networking I/O side of things
> > like tuning specifically for where it concerns suricata. For example, how
> > you have set up network configs on both the host systems and guest OS's
> to
> > get the best performance?
> > I've already got a config that's working, I'm just not sure it's the best
> > way to go about it.
>
> Can you share your config and experience?
>
> >  If anybody can let me know I'd be really interested in getting that
> input.
> > Hopefully this is an appropriate topic for the list.
>
> Sure it is!
>
> > Thanks in advance,
> > Chris
>
> --
> Andreas Herz