<div dir="ltr">Hey Team,
<div><br>
</div>
<div>I currently have a Centos 7 box running kernel 3.10.0-1127.el7.x86_64. I have the box inline underneath a firewall and before a switch so traffic flows internet-->firewall-->suricata-->switch. And I am trying to take advantage of the AF_Packet mode.</div>
<div><br>
</div>
<div>Unfortunately the firewall sitting above Suricata only has 1GbE interfaces. To increase throughput, these interfaces are bonded together via a LACP port channel. One port channel serves the inside (internal hosts) vlan and the other serves a dmz vlan.
One the Centos7 box that is running Suricata I have bonded the proper interfaces together and setup the appropriate port channels. The Centos7 box is able to successfully bond with the firewall inside and dmz port channel and the switch inside and dmz port
channel. So in total I have four port channels, 2 going from Centos7 to firewall, and 2 going from Centos7 to switch. Each port channel has multiple interfaces that are a part of it. This all works well.</div>
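
For reference, each bond is defined with the usual CentOS 7 ifcfg files, roughly like the sketch below (the slave name and the miimon/lacp_rate values are illustrative; the xmit_hash_policy matches what I describe further down):

# /etc/sysconfig/network-scripts/ifcfg-bond_firewall  (one of the four bonds)
DEVICE=bond_firewall
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3"

# /etc/sysconfig/network-scripts/ifcfg-em1  (example member interface)
DEVICE=em1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond_firewall
SLAVE=yes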

My thought is to run Suricata in AF_PACKET mode to bridge the bonds together. The bonds are:

bond_firewall  (serves the inside VLAN, 2 x 1GbE interfaces)
bond_firewall2 (serves the DMZ VLAN, 2 x 1GbE interfaces)
bond_switch    (serves the inside VLAN, 4 x 10GbE interfaces)
bond_switch2   (serves the DMZ VLAN, 4 x 10GbE interfaces)

My Suricata config is below:

max-pending-packets: 1024

# Runmode the engine should use. Please check --list-runmodes to get the available
# runmodes for each packet acquisition method. Default depends on selected capture
# method. 'workers' generally gives best performance.
runmode: workers

af-packet:
  - interface: bond_firewall
    threads: auto
    defrag: yes
    cluster-type: cluster_flow
    cluster-id: 99
    ring-size: 2000
    copy-mode: ips
    copy-iface: bond_switch
    #buffer-size: 6453555
    use-mmap: yes
    tpacket-v3: no
    #rollover: yes

  - interface: bond_switch
    threads: auto
    defrag: yes
    cluster-type: cluster_flow
    cluster-id: 98
    ring-size: 2000
    copy-mode: ips
    copy-iface: bond_firewall
    #buffer-size: 6453555
    use-mmap: yes
    tpacket-v3: no
    #rollover: yes

  - interface: bond_firewall2
    threads: auto
    defrag: yes
    cluster-type: cluster_flow
    cluster-id: 97
    ring-size: 2000
    copy-mode: ips
    copy-iface: bond_switch2
    #buffer-size: 6453555
    use-mmap: yes
    tpacket-v3: no
    #rollover: yes

  - interface: bond_switch2
    threads: auto
    defrag: yes
    cluster-type: cluster_flow
    cluster-id: 96
    ring-size: 2000
    copy-mode: ips
    copy-iface: bond_firewall2
    #buffer-size: 6453555
    use-mmap: yes
    tpacket-v3: no
    #rollover: yes

I then start Suricata and it appears to start up OK (see the attached images).
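
For what it's worth, the start command is roughly the following (a sketch; I'm assuming the default config path and omitting daemon/logging flags):

# Run in AF_PACKET mode; the interfaces and copy pairs are taken from the
# af-packet: section of the config rather than the command line.
suricata -c /etc/suricata/suricata.yaml --af-packet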

However, performance is brutally slow. When downloading a 2.0 GB file from the internet on a host sitting below Suricata, the transfer rate averages about 12 KB/s.
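
The test itself is just a plain large-file download from a host behind Suricata, something like this (the URL is a placeholder):

# Pull a large file and report the average download speed in bytes/sec
curl -o /dev/null -w 'average speed: %{speed_download} bytes/sec\n' http://example.com/2GB.bin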

Just to make sure it wasn't a layer 1 or OS issue, I removed Suricata and used the Linux bridge kernel module to bridge the port channels together instead. That worked as expected: the same file downloaded at up to 10 MB/s.
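
Concretely, the bridge test was along these lines (a sketch using iproute2; the bridge names are mine, and I kept the inside and DMZ paths on separate bridges):

# Inside path: bridge the firewall-facing bond to the switch-facing bond
ip link add name br_inside type bridge
ip link set dev bond_firewall master br_inside
ip link set dev bond_switch master br_inside
ip link set dev br_inside up

# DMZ path
ip link add name br_dmz type bridge
ip link set dev bond_firewall2 master br_dmz
ip link set dev bond_switch2 master br_dmz
ip link set dev br_dmz up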

Is Suricata able to bind to these port channels? My guess is that Suricata is getting confused by the multiple interfaces that are a part of the bonds. The LACP bonds use a layer2+3 transmit hash, so is that a hash that is difficult for Suricata to reconcile with its own internal hashing when it matches a packet to a given flow?
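
In case it helps, this is how I have been checking what hash policy the bonds report and whether traffic actually spreads across the member interfaces (the slave names here are illustrative):

# Hash policy the bonding driver reports for a bond
grep "Transmit Hash Policy" /proc/net/bonding/bond_switch

# Per-slave packet/byte counters, to see how flows land on the members
ip -s link show dev em1
ip -s link show dev em2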

Is there any way for me to accomplish what I am trying to do?

I really appreciate any insight you may have, as this has left me scratching my head. Have you seen other users achieve this in the past, perhaps with different options?

Thanks so much!

Taylor