[Oisf-users] suricata 1.1 + pfring 5.1: kernel runs out of memory (OOM) on Debian Squeeze
Delta Yeh
delta.yeh at gmail.com
Sat Dec 10 16:03:23 UTC 2011
Hi all,
I have a 32-bit Debian Squeeze box with 4 GB of RAM running suricata + pfring,
and it has hit out-of-memory conditions over the past two days.
The kernel is 2.6.39, suricata is 1.1, and pfring is 5.1, which supports
PF_RING BPF filters.
According to dmesg, suricata's own memory usage is low (about 40 MB),
but the kernel's low memory is almost zero.
The traffic is not high at all, only 75 HTTP req/s, so it seems the kernel
itself is running out of memory for some reason.
To reproduce it, I ran an HTTP stress test with LoadRunner.
The available kernel low memory soon fell to 220 MB and kept decreasing.
When I then killed suricata (4X MB of memory according to ps output), the
available low memory came back to 6XX MB, which I think is reasonable.
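
For anyone who wants to watch the same numbers, here is a minimal sketch
that polls the kernel low-memory counters during a test run. It assumes a
32-bit HIGHMEM kernel like this one, which exposes LowTotal/LowFree in
/proc/meminfo; the 5-second interval is arbitrary:

#!/usr/bin/env python
# Poll kernel low memory (the pool that gets exhausted here).
# Assumes a 32-bit HIGHMEM kernel exposing LowTotal/LowFree in /proc/meminfo.
import time

def meminfo_kb(field):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])  # value is reported in kB
    raise KeyError(field)

if __name__ == "__main__":
    while True:
        print("LowFree: %d kB / LowTotal: %d kB"
              % (meminfo_kb("LowFree"), meminfo_kb("LowTotal")))
        time.sleep(5)

Running this in a second terminal while the stress test is going makes the
steady drop in LowFree easy to see.
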
The dmesg output at the time of the OOM:
[108769.088032] php5-cli invoked oom-killer: gfp_mask=0xd0, order=1, oom_adj=0, oom_score_adj=0
[108769.088093] php5-cli cpuset=/ mems_allowed=0
[108769.088123] Pid: 32213, comm: php5-cli Not tainted 2.6.39.4 #1
[108769.088157] Call Trace:
[108769.088187] [<c1070cc6>] dump_header+0x69/0x17d
[108769.088220] [<c11e14d6>] ? ___ratelimit+0xca/0xe0
[108769.088251] [<c1070f94>] oom_kill_process+0x2e/0x201
[108769.088284] [<c1071432>] out_of_memory+0x1ee/0x2b1
[108769.088315] [<c10743ce>] __alloc_pages_nodemask+0x439/0x581
[108769.088351] [<c102e103>] copy_process+0xe1/0xd9d
[108769.088382] [<c11a2687>] ? security_file_alloc+0xf/0x11
[108769.088416] [<c10a0177>] ? get_empty_filp+0x9b/0x13f
[108769.088448] [<c102efab>] do_fork+0xd7/0x1eb
[108769.088479] [<c1008120>] sys_clone+0x1f/0x24
[108769.088509] [<c134f571>] ptregs_clone+0x15/0x24
[108769.088540] [<c134f49c>] ? sysenter_do_call+0x12/0x2c
[108769.088571] Mem-Info:
[108769.088595] Node 0 DMA per-cpu:
[108769.088623] CPU 0: hi: 0, btch: 1 usd: 0
[108769.088654] CPU 1: hi: 0, btch: 1 usd: 0
[108769.088684] Node 0 Normal per-cpu:
[108769.088713] CPU 0: hi: 186, btch: 31 usd: 0
[108769.088744] CPU 1: hi: 186, btch: 31 usd: 0
[108769.088774] Node 0 HighMem per-cpu:
[108769.088802] CPU 0: hi: 186, btch: 31 usd: 0
[108769.088833] CPU 1: hi: 186, btch: 31 usd: 0
[108769.088866] active_anon:91593 inactive_anon:4805 isolated_anon:0
[108769.088867] active_file:24583 inactive_file:8171 isolated_file:0
[108769.088868] unevictable:444 dirty:33 writeback:0 unstable:0
[108769.088869] free:599897 slab_reclaimable:47561 slab_unreclaimable:159717
[108769.088870] mapped:22437 shmem:17145 pagetables:660 bounce:0
[108769.089032] Node 0 DMA free:3536kB min:64kB low:80kB high:96kB
active_anon:0kB inactive_anon:0kB active_file:72kB inactive_file:0kB
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15804kB
mlocked:0kB dirty:8kB writeback:0kB mapped:0kB shmem:0kB
slab_reclaimable:2876kB slab_unreclaimable:9168kB kernel_stack:256kB
pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB
pages_scanned:0 all_unreclaimable? yes
[108769.089234] lowmem_reserve[]: 0 867 4007 4007
[108769.089269] Node 0 Normal free:6696kB min:3732kB low:4664kB
high:5596kB active_anon:0kB inactive_anon:0kB active_file:48kB
inactive_file:0kB unevictable:0kB isolated(anon):0kB
isolated(file):0kB present:887976kB mlocked:0kB dirty:12kB
writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:187368kB
slab_unreclaimable:629700kB kernel_stack:864kB pagetables:0kB
unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:48
all_unreclaimable? yes
[108769.089474] lowmem_reserve[]: 0 0 25124 25124
[108769.089508] Node 0 HighMem free:2389356kB min:512kB low:3892kB
high:7272kB active_anon:366372kB inactive_anon:19220kB
active_file:98212kB inactive_file:32700kB unevictable:1776kB
isolated(anon):0kB isolated(file):0kB present:3215956kB mlocked:1776kB
dirty:112kB writeback:0kB mapped:89748kB shmem:68580kB
slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB
pagetables:2640kB unstable:0kB bounce:0kB writeback_tmp:0kB
pages_scanned:0 all_unreclaimable? no
[108769.089719] lowmem_reserve[]: 0 0 0 0
[108769.089752] Node 0 DMA: 76*4kB 16*8kB 14*16kB 8*32kB 9*64kB
4*128kB 2*256kB 2*512kB 0*1024kB 0*2048kB 0*4096kB = 3536kB
[108769.089829] Node 0 Normal: 1215*4kB 214*8kB 11*16kB 0*32kB 0*64kB
0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 6748kB
[108769.089907] Node 0 HighMem: 487*4kB 14588*8kB 7079*16kB 2792*32kB
1022*64kB 234*128kB 62*256kB 30*512kB 16*1024kB 14*2048kB 463*4096kB =
2389356kB
[108769.089990] 50284 total pagecache pages
[108769.090018] 0 pages in swap cache
[108769.090045] Swap cache stats: add 0, delete 0, find 0/0
[108769.090077] Free swap = 0kB
[108769.090102] Total swap = 0kB
[108769.103348] 1245168 pages RAM
[108769.103376] 1017346 pages HighMem
[108769.103403] 232402 pages reserved
[108769.103429] 65775 pages shared
[108769.103455] 386797 pages non-shared
[108769.103484] [ pid ] uid  tgid total_vm   rss cpu oom_adj oom_score_adj name
[108769.103545] [ 1357]   0  1357     1348   228   1       0             0 sshd
[108769.103600] [ 1477]   0  1477      394    57   1       0             0 readproctitle
[108769.103657] [ 1481]   0  1481      437    96   1       0             0 svscan
[108769.103713] [ 1483]   0  1483      397    57   0       0             0 supervise
[108769.103770] [ 1484]   0  1484      432   100   1       0             0 multilog
[108769.103826] [ 1634]   0  1634      445   444   1     -17         -1000 watchdog
[108769.103883] [ 1910] 107  1910    37680  1568   1       0             0 postgres
[108769.103939] [ 1913] 107  1913    37718 16333   1       0             0 postgres
[108769.104016] [ 1914] 107  1914    37718   296   1       0             0 postgres
[108769.104081] [ 1915] 107  1915    37756   351   1       0             0 postgres
[108769.104152] [ 1916] 107  1916     3704   275   1       0             0 postgres
[108769.104212] [ 2111]   0  2111     9419   409   0       0             0 rsyslogd
[108769.104450] [ 2128]   0  2128      691   284   1       0             0 redis-server
[108769.104508] [ 2130]   0  2130      691   281   1       0             0 redis-server
[108769.104565] [ 2137]   0  2137    17079 10799   1       0             0 php5-cli
[108769.104622] [ 2140]   0  2140    17078 10734   0       0             0 php5-cli
[108769.104678] [ 2160] 107  2160    37940  1018   0       0             0 postgres
[108769.104734] [ 2161] 107  2161    37940  1024   0       0             0 postgres
[108769.104790] [ 2176]   0  2176    18371  2347   0       0             0 python
[108769.104846] [ 2177] 107  2177    37924  7389   0       0             0 postgres
[108769.104902] [ 2179] 107  2179    37951 13986   0       0             0 postgres
[108769.104958] [ 2251]   0  2251      576   159   0       0             0 getty
[108769.105014] [ 2252]   0  2252      681   294   1       0             0 rc.initial
[108769.105071] [ 1214]   0  1214    20093 13917   1       0             0 php5-cli
[108769.105127] [ 1289]   0  1289    43124 25098   1       0             0 suricata
[108769.105296] [ 1316] 107  1316    37924   990   1       0             0 postgres
[108769.105519] [ 6938]   0  6938      397    59   1       0             0 supervise
[108769.106267] [ 6942]   0  6942      435   120   0       0             0 uscheduled
[108769.106324] [29249]   0 29249     2074   664   0       0             0 sshd
[108769.106379] [29292]   0 29292     1208   356   1       0             0 sftp-server
[108769.106436] [29339]   0 29339     2034   666   1       0             0 sshd
[108769.106492] [29342]   0 29342      698   371   0       0             0 sh
[108769.106602] [31015]   0 31015     1242   313   1       0             0 sftp-server
[108769.106660] [32210]   0 32210      663   248   1       0             0 monitor
[108769.106715] [32211]   0 32211      663   249   0       0             0 safecheck
[108769.106772] [32212]   0 32212    13763  9395   0       0             0 php5-cli
[108769.106828] [32213]   0 32213    13764  9406   0       0             0 php5-cli
[108769.106883] Out of memory (oom_kill_allocating_task): Kill process 32213 (php5-cli) score 0 or sacrifice child
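
The number that stands out above is slab_unreclaimable:629700kB in the
Normal zone, while suricata's own RSS is only a few tens of MB, so the low
memory seems to be going to kernel slab allocations rather than to the
suricata process. Something like the following sketch can rank the slab
caches by approximate size to show which cache is growing (it reads
/proc/slabinfo in the version 2.x format, needs root, and assumes 4 kB
pages):

#!/usr/bin/env python
# Rank slab caches by approximate memory use, to see which cache is
# behind the huge slab_unreclaimable number in the OOM report above.
PAGE_KB = 4  # assumed page size in kB

def slab_usage_kb():
    usage = {}
    with open("/proc/slabinfo") as f:
        for line in f:
            if line.startswith("slabinfo") or line.startswith("#"):
                continue  # skip the version and column-header lines
            fields = line.split()
            name = fields[0]
            pages_per_slab = int(fields[5])   # <pagesperslab> column
            num_slabs = int(fields[14])       # <num_slabs> column
            usage[name] = num_slabs * pages_per_slab * PAGE_KB
    return usage

if __name__ == "__main__":
    top = sorted(slab_usage_kb().items(),
                 key=lambda kv: kv[1], reverse=True)[:15]
    for name, kb in top:
        print("%10d kB  %s" % (kb, name))

If a pf_ring- or skbuff-related cache dominates the list, that would point
at the capture path rather than at suricata's userspace allocations.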