[Oisf-users] Improved PF_RING support, please test!
Chris Wakelin
c.d.wakelin at reading.ac.uk
Mon Mar 7 17:15:32 UTC 2011
On 28/02/11 20:23, Victor Julien wrote:
> Hey guys,
>
> I know a couple of you are running PF_RING in a high speed environment.
> The attached patch is meant to improve its performance. It adds a new
> option called "pfring.threads" that controls the number of reader
> threads the pfring code uses. I've tested (lightly) with 1, 4 and 8,
> which all worked fine. There are some more improvements, including the
> removal of one memcpy per packet...
OK, giving it a go with 4 threads ...
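(For the record, I enabled the extra readers with a stanza along these
lines in suricata.yaml. "threads" is the new option from the patch; the
interface and cluster keys are just my local setup, and cluster_flow is
my assumption for the balancing mode:)

    pfring:
      interface: eth1             # interface the reader threads sniff
      threads: 4                  # new option: number of reader threads
      cluster-id: 99              # PF_RING cluster the readers join
      cluster-type: cluster_flow  # hash whole flows across the threads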
First thing I noticed is that stats.log doesn't seem to work ...
> Date: 3/7/2011 -- 17:03:01 (uptime: 0d, 00h 04m 34s)
> -------------------------------------------------------------------
> Counter                   | TM Name      | Value
> -------------------------------------------------------------------
> tcp.sessions              | Detect       | 63734
> tcp.ssn_memcap_drop       | Detect       | 0
> tcp.pseudo                | Detect       | 12222
> tcp.segment_memcap_drop   | Detect       | 24038
> tcp.stream_depth_reached  | Detect       | 135
> detect.alert              | Detect       | 48
> decoder.pkts              | RecvPfring1  | 0
> decoder.bytes             | RecvPfring1  | 0
> decoder.ipv4              | RecvPfring1  | 0
> decoder.ipv6              | RecvPfring1  | 0
> decoder.ethernet          | RecvPfring1  | 0
> decoder.raw               | RecvPfring1  | 0
and the same (all zeroes) for RecvPfring2-4.
CPU load seems a bit lower than before:
> top - 17:05:09 up 5 days, 17:00, 6 users, load average: 1.17, 1.39, 1.13
> Tasks: 374 total, 4 running, 370 sleeping, 0 stopped, 0 zombie
> Cpu0 : 11.8%us, 4.3%sy, 0.0%ni, 83.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu1 : 5.8%us, 6.1%sy, 0.0%ni, 88.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu2 : 22.5%us, 4.2%sy, 0.0%ni, 73.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu3 : 6.7%us, 6.1%sy, 0.0%ni, 87.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu4 : 9.0%us, 4.7%sy, 0.0%ni, 86.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu5 : 11.1%us, 4.7%sy, 0.0%ni, 77.2%id, 0.0%wa, 0.0%hi, 7.0%si, 0.0%st
> Cpu6 : 19.2%us, 2.3%sy, 0.0%ni, 78.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu7 : 7.0%us, 4.4%sy, 0.0%ni, 88.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Mem: 16465268k total, 10827736k used, 5637532k free, 84848k buffers
> Swap: 3906552k total, 104336k used, 3802216k free, 7159848k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
> 23690 snort     20   0 3705m 1.7g 1860 R 41.7 10.6   2:29.39 2 Detect4
> 23683 snort     20   0 3705m 1.7g 1860 S 19.5 10.6   1:06.17 1 RecvPfring1
> 16875 snort     20   0  466m 343m  840 S 13.9  2.1 203:36.27 5 argus
> 23687 snort     20   0 3705m 1.7g 1860 S 13.9 10.6   1:15.11 7 Detect1
> 23684 snort     20   0 3705m 1.7g 1860 S 11.9 10.6   0:45.63 2 RecvPfring2
> 23685 snort     20   0 3705m 1.7g 1860 R  9.9 10.6   0:43.27 4 RecvPfring3
> 23688 snort     20   0 3705m 1.7g 1860 S  9.9 10.6   0:57.71 6 Detect2
> 23689 snort     20   0 3705m 1.7g 1860 R  9.6 10.6   0:48.59 3 Detect3
> 23686 snort     20   0 3705m 1.7g 1860 S  7.9 10.6   0:34.02 4 RecvPfring4
> 23691 snort     20   0 3705m 1.7g 1860 S  3.6 10.6   0:12.72 5 FlowManagerThre
> 16874 snort     20   0  466m 343m  840 S  1.0  2.1  13:38.27 1 argus
>  6372 operator  20   0 19440 1560 1028 R  0.7  0.0  35:23.84 0 top
>   130 root      20   0     0    0    0 S  0.3  0.0   1:08.94 3 kondemand/3
As you'd expect, I have four entries in /proc/net/pf_ring/<pid>-eth1.nn,
none of which reports any dropped packets. They do suggest the load isn't
quite evenly distributed between the threads (a guess at why follows the
numbers):
> Tot Packets : 16137770
> Tot Pkt Lost : 0
> Tot Packets : 9507268
> Tot Pkt Lost : 0
> Tot Packets : 8217276
> Tot Pkt Lost : 0
> Tot Packets : 6115710
> Tot Pkt Lost : 0
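If I understand PF_RING's per-flow clustering right, that skew is
expected: each packet is assigned to a ring by a hash of its flow tuple,
so every packet of a given flow lands on the same reader, and a few heavy
flows can weigh down one ring. A toy sketch of the idea (my own
illustration, not the real kernel hash):

    /* Hypothetical per-flow ring selection, for illustration only;
     * the actual PF_RING kernel hash differs. */
    #include <stdint.h>

    static uint32_t ring_for_flow(uint32_t src_ip, uint32_t dst_ip,
                                  uint16_t src_port, uint16_t dst_port,
                                  uint8_t proto, uint32_t num_rings)
    {
        /* same flow -> same hash -> same ring, so per-ring load
         * follows flow sizes rather than balancing per packet */
        uint32_t h = src_ip ^ dst_ip
                   ^ ((uint32_t)src_port << 16) ^ dst_port ^ proto;
        return h % num_rings;
    }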
Here are the stats after stopping it:
> [23682] 7/3/2011 -- 17:12:25 - (suricata.c:1263) <Info> (main) -- signal received
> [23685] 7/3/2011 -- 17:12:25 - (source-pfring.c:328) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring3) Packets 9758506, bytes 9303146143
> [23682] 7/3/2011 -- 17:12:25 - (suricata.c:1293) <Info> (main) -- time elapsed 838s
> [23685] 7/3/2011 -- 17:12:25 - (source-pfring.c:332) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring3) Pfring Total:9758506 Recv:9758506 Drop:0 (0.0%).
> [23683] 7/3/2011 -- 17:12:25 - (source-pfring.c:328) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring1) Packets 18155960, bytes 22838765639
> [23684] 7/3/2011 -- 17:12:25 - (source-pfring.c:328) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring2) Packets 11014404, bytes 8420310935
> [23686] 7/3/2011 -- 17:12:25 - (source-pfring.c:328) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring4) Packets 7277956, bytes 4276508479
> [23684] 7/3/2011 -- 17:12:25 - (source-pfring.c:332) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring2) Pfring Total:11014404 Recv:11014404 Drop:0 (0.0%).
> [23686] 7/3/2011 -- 17:12:25 - (source-pfring.c:332) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring4) Pfring Total:7277956 Recv:7277956 Drop:0 (0.0%).
> [23683] 7/3/2011 -- 17:12:25 - (source-pfring.c:332) <Info> (ReceivePfringThreadExitStats) -- (RecvPfring1) Pfring Total:18155960 Recv:18155960 Drop:0 (0.0%).
> [23687] 7/3/2011 -- 17:12:25 - (stream-tcp.c:3465) <Info> (StreamTcpExitPrintStats) -- (Detect1) Packets 9782286
> [23687] 7/3/2011 -- 17:12:25 - (alert-fastlog.c:324) <Info> (AlertFastLogExitPrintStats) -- (Detect1) Alerts 98
> [23687] 7/3/2011 -- 17:12:25 - (alert-unified2-alert.c:603) <Info> (Unified2AlertThreadDeinit) -- Alert unified2 module wrote 98 alerts
> [23687] 7/3/2011 -- 17:12:25 - (log-droplog.c:389) <Info> (LogDropLogExitPrintStats) -- (Detect1) Dropped Packets 0
> [23688] 7/3/2011 -- 17:12:25 - (stream-tcp.c:3465) <Info> (StreamTcpExitPrintStats) -- (Detect2) Packets 6154750
> [23688] 7/3/2011 -- 17:12:25 - (alert-fastlog.c:324) <Info> (AlertFastLogExitPrintStats) -- (Detect2) Alerts 98
> [23688] 7/3/2011 -- 17:12:25 - (log-droplog.c:389) <Info> (LogDropLogExitPrintStats) -- (Detect2) Dropped Packets 0
> [23689] 7/3/2011 -- 17:12:25 - (stream-tcp.c:3465) <Info> (StreamTcpExitPrintStats) -- (Detect3) Packets 4942272
> [23689] 7/3/2011 -- 17:12:25 - (alert-fastlog.c:324) <Info> (AlertFastLogExitPrintStats) -- (Detect3) Alerts 98
> [23689] 7/3/2011 -- 17:12:25 - (log-droplog.c:389) <Info> (LogDropLogExitPrintStats) -- (Detect3) Dropped Packets 0
> [23690] 7/3/2011 -- 17:12:25 - (stream-tcp.c:3465) <Info> (StreamTcpExitPrintStats) -- (Detect4) Packets 19799090
> [23690] 7/3/2011 -- 17:12:25 - (alert-fastlog.c:324) <Info> (AlertFastLogExitPrintStats) -- (Detect4) Alerts 98
> [23690] 7/3/2011 -- 17:12:25 - (log-droplog.c:389) <Info> (LogDropLogExitPrintStats) -- (Detect4) Dropped Packets 0
> [23691] 7/3/2011 -- 17:12:25 - (flow.c:1141) <Info> (FlowManagerThread) -- 92339 new flows, 100687 established flows were timed out, 53491 flows in closed state
> [23682] 7/3/2011 -- 17:12:26 - (stream-tcp-reassemble.c:352) <Info> (StreamTcpReassembleFree) -- Max memuse of the stream reassembly engine 1073741824 (in use 0)
> [23682] 7/3/2011 -- 17:12:26 - (stream-tcp.c:466) <Info> (StreamTcpFreeConfig) -- Max memuse of stream engine 21304080 (in use 0)
> [23682] 7/3/2011 -- 17:12:26 - (detect.c:3375) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete
I'll run it for a bit longer now ...
Best Wishes,
Chris
--
--+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+-
Christopher Wakelin, c.d.wakelin at reading.ac.uk
IT Services Centre, The University of Reading, Tel: +44 (0)118 378 8439
Whiteknights, Reading, RG6 6AF, UK Fax: +44 (0)118 975 3094