[Oisf-users] tcp.segment_memcap_drop couldn't be kept at zero, no matter how much memory we assign
Fernando Sclavo
fsclavo at gmail.com
Sun Dec 2 00:32:47 UTC 2012
Hi all! Just as you suggested, I did another "tuning round" on my
suricata.yaml file, especially in the timeouts section. I was confused by
these settings because I thought stream and flow were not related.
Here are my new settings; I'll have to wait until Monday (more load on our
datacenter network) to see the effects:
flow-timeouts:
  default:
    new: 10 # 30
    established: 20 # 300
    closed: 0
    emergency-new: 5 # 10
    emergency-established: 10 # 100
    emergency-closed: 0
  tcp:
    new: 10 # 60
    established: 20 # 3600
    closed: 2 # 120
    emergency-new: 5 # 10
    emergency-established: 10 # 300
    emergency-closed: 2 # 20
  udp:
    new: 5 # 30
    established: 10 # 300
    emergency-new: 2 # 10
    emergency-established: 5 # 100
  icmp:
    new: 5 # 30
    established: 5 # 300
    emergency-new: 2 # 10
    emergency-established: 2 # 100
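To check the effect, one quick way is to watch the drop counters in stats.log
(a sketch - I'm assuming the default log location, adjust the path for your
install):

  grep -E "capture.kernel_drops|tcp.ssn_memcap_drop|tcp.segment_memcap_drop" \
      /var/log/suricata/stats.log | tail -12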
I will keep you informed, thanks A LOT for your help!
Best regards!
2012/12/1 Peter Manev <petermanev at gmail.com>
> Hi ,
>
> Martin is very right about the flow-timeouts - very important not to
> forget to adjust those.
> 300 sec is 5 min, which is already a long time on a busy network, and the default
> tcp:
>   established: 3600
> of 1 hr can have some serious impact :)
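> As a rough back-of-envelope (illustrative numbers, not a measurement): a sensor
> seeing 10,000 new TCP flows/sec with established: 3600 can hold up to
> 10,000 x 3600 = 36,000,000 flows in the flow table at once, versus
> 10,000 x 300 = 3,000,000 with established: 300 - an order of magnitude less
> session and stream memory pinned.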
>
> It is funny you mention the drops... I just had a quick chat with
> Victor about drops in general a few days ago.
> Here are some of our values/results from one of our test boxes (9.5 Gb/s of
> traffic):
>
> YAML:
> flow-timeouts:
>   default:
>     new: 5 # 30
>     established: 10 # 300
>     closed: 0
>     emergency-new: 1 # 10
>     emergency-established: 2 # 100
>     emergency-closed: 0
>   tcp:
>     new: 5 # 60
>     established: 300 # 3600
>     closed: 10 # 30
>     emergency-new: 1 # 10
>     emergency-established: 5 # 300
>     emergency-closed: 20 # 20
>   udp:
>     new: 5 # 30
>     established: 5 # 300
>     emergency-new: 5 # 10
>     emergency-established: 5 # 100
>   icmp:
>     new: 5 # 30
>     established: 5 # 300
>     emergency-new: 5 # 10
>     emergency-established: 5 # 100
>
> ......
> stream:
>   memcap: 16gb
>   max-sessions: 20000000
>   prealloc-sessions: 10000000
>   checksum-validation: no # reject wrong csums
>   #checksum-validation: yes # reject wrong csums
>   inline: no # no inline mode
>   reassembly:
>     memcap: 12gb
>     #memcap: 8gb
>     depth: 12mb # reassemble 12mb into a stream
>     toserver-chunk-size: 2560
>     toclient-chunk-size: 2560
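> (Rough arithmetic on those caps: 16gb stream memcap + 12gb reassembly memcap
> can grow to 28 GB combined for the stream engine alone on this 32 GB box, so
> the memcaps, not physical RAM, are the effective ceiling here.)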
>
> # Host table:
> #
> # Host table is used by tagging and per host thresholding subsystems.
> #
> host:
>   hash-size: 4096
>   prealloc: 1000
>   memcap: 16777216
>
> ......
> # Defrag settings:
>
> defrag:
>   #trackers: 262144 # number of defragmented flows to follow
>   #max-frags: 262144 # number of fragments per-flow
>   trackers: 65535
>   max-frags: 65535 # number of fragments per-flow
>   prealloc: yes
>   timeout: 10
>
>
>
> All of this is using af_packet with 16 threads, on a 16-CPU (with Hyper-Threading)
> box with 32 GB RAM, with some special Intel 10G NIC tuning, Ubuntu LTS 12.04,
> running the latest git with 7K Emerging Threats rules.
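> For reference, the matching af-packet stanza looks roughly like this (a sketch -
> interface aside, the cluster-id/cluster-type/defrag values here are illustrative
> assumptions, not our exact config):
>
> af-packet:
>   - interface: eth3
>     threads: 16
>     cluster-id: 99
>     cluster-type: cluster_flow
>     defrag: yes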
> Some more info:
>
>
>
> pevman at suricata:~$ sudo grep -n "drop"
> /var/data/regit/log/suricata/stats.log | tail -48
> 2504179:capture.kernel_drops | AFPacketeth31 | 0
> 2504209:tcp.ssn_memcap_drop | AFPacketeth31 | 0
> 2504218:tcp.segment_memcap_drop | AFPacketeth31 | 0
> 2504224:capture.kernel_drops | AFPacketeth32 | 0
> 2504254:tcp.ssn_memcap_drop | AFPacketeth32 | 0
> 2504263:tcp.segment_memcap_drop | AFPacketeth32 | 0
> 2504269:capture.kernel_drops | AFPacketeth33 | 0
> 2504299:tcp.ssn_memcap_drop | AFPacketeth33 | 0
> 2504308:tcp.segment_memcap_drop | AFPacketeth33 | 0
> 2504314:capture.kernel_drops | AFPacketeth34 | 0
> 2504344:tcp.ssn_memcap_drop | AFPacketeth34 | 0
> 2504353:tcp.segment_memcap_drop | AFPacketeth34 | 0
> 2504359:capture.kernel_drops | AFPacketeth35 | 0
> 2504389:tcp.ssn_memcap_drop | AFPacketeth35 | 0
> 2504398:tcp.segment_memcap_drop | AFPacketeth35 | 0
> 2504404:capture.kernel_drops | AFPacketeth36 | 0
> 2504434:tcp.ssn_memcap_drop | AFPacketeth36 | 0
> 2504443:tcp.segment_memcap_drop | AFPacketeth36 | 0
> 2504449:capture.kernel_drops | AFPacketeth37 | 0
> 2504479:tcp.ssn_memcap_drop | AFPacketeth37 | 0
> 2504488:tcp.segment_memcap_drop | AFPacketeth37 | 0
> 2504494:capture.kernel_drops | AFPacketeth38 | 0
> 2504524:tcp.ssn_memcap_drop | AFPacketeth38 | 0
> 2504533:tcp.segment_memcap_drop | AFPacketeth38 | 0
> 2504539:capture.kernel_drops | AFPacketeth39 | 0
> 2504569:tcp.ssn_memcap_drop | AFPacketeth39 | 0
> 2504578:tcp.segment_memcap_drop | AFPacketeth39 | 0
> 2504584:capture.kernel_drops | AFPacketeth310 | 0
> 2504614:tcp.ssn_memcap_drop | AFPacketeth310 | 0
> 2504623:tcp.segment_memcap_drop | AFPacketeth310 | 0
> 2504629:capture.kernel_drops | AFPacketeth311 | 0
> 2504659:tcp.ssn_memcap_drop | AFPacketeth311 | 0
> 2504668:tcp.segment_memcap_drop | AFPacketeth311 | 0
> 2504674:capture.kernel_drops | AFPacketeth312 | 0
> 2504704:tcp.ssn_memcap_drop | AFPacketeth312 | 0
> 2504713:tcp.segment_memcap_drop | AFPacketeth312 | 0
> 2504719:capture.kernel_drops | AFPacketeth313 | 0
> 2504749:tcp.ssn_memcap_drop | AFPacketeth313 | 0
> 2504758:tcp.segment_memcap_drop | AFPacketeth313 | 0
> 2504764:capture.kernel_drops | AFPacketeth314 | 0
> 2504794:tcp.ssn_memcap_drop | AFPacketeth314 | 0
> 2504803:tcp.segment_memcap_drop | AFPacketeth314 | 0
> 2504809:capture.kernel_drops | AFPacketeth315 | 0
> 2504839:tcp.ssn_memcap_drop | AFPacketeth315 | 0
> 2504848:tcp.segment_memcap_drop | AFPacketeth315 | 0
> 2504854:capture.kernel_drops | AFPacketeth316 | 0
> 2504884:tcp.ssn_memcap_drop | AFPacketeth316 | 0
> 2504893:tcp.segment_memcap_drop | AFPacketeth316 | 0
>
> pevman at suricata:~$ suricata --build-info
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:560) <Info> (SCPrintBuildInfo)
> -- This is Suricata version 1.4dev (rev 005f7a2)
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:633) <Info> (SCPrintBuildInfo)
> -- Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET
> HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK
> HAVE_HTP_TX_GET_RESPONSE_HEADERS_RAW HAVE_NSS PROFILING
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:647) <Info> (SCPrintBuildInfo)
> -- 64-bits, Little-endian architecture
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:649) <Info> (SCPrintBuildInfo)
> -- GCC version 4.6.3, C version 199901
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:655) <Info> (SCPrintBuildInfo)
> -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_1
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:658) <Info> (SCPrintBuildInfo)
> -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_2
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:661) <Info> (SCPrintBuildInfo)
> -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:664) <Info> (SCPrintBuildInfo)
> -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:667) <Info> (SCPrintBuildInfo)
> -- __GCC_HAVE_SYNC_COMPARE_AND_SWAP_16
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:671) <Info> (SCPrintBuildInfo)
> -- compiled with -fstack-protector
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:677) <Info> (SCPrintBuildInfo)
> -- compiled with _FORTIFY_SOURCE=2
> [10384] 1/12/2012 -- 14:28:44 - (suricata.c:680) <Info> (SCPrintBuildInfo)
> -- compiled with libhtp 0.2.11, linked against 0.2.11
>
> pevman at suricata:~$ sudo grep -n "uptime"
> /var/data/regit/log/suricata/stats.log | tail -4
> 2503442:Date: 12/1/2012 -- 14:27:56 (uptime: 0d, 18h 07m 04s)
> 2504174:Date: 12/1/2012 -- 14:28:15 (uptime: 0d, 18h 07m 23s)
> 2504906:Date: 12/1/2012 -- 14:28:34 (uptime: 0d, 18h 07m 42s)
> 2505638:Date: 12/1/2012 -- 14:28:53 (uptime: 0d, 18h 08m 01s)
>
> pevman at suricata:~$ sudo tcpstat -i eth3
> Time:1354365172 n=6106758 avg=984.85 stddev=663.77
> bps=9622763462.40
> Time:1354365177 n=6126927 avg=981.51 stddev=663.29
> bps=9621826076.80
> Time:1354365182 n=6110921 avg=984.19 stddev=662.02
> bps=9622922160.00
> Time:1354365187 n=6126978 avg=981.50 stddev=662.38
> bps=9621846648.00
> Time:1354365192 n=6109322 avg=984.46 stddev=661.25
> bps=9623061092.80
> Time:1354365197 n=6146841 avg=978.24 stddev=662.73
> bps=9620970840.00
> ^CTime:1354365202 n=112243 avg=982.41 stddev=663.97
> bps=176430308.80
>
> pevman at suricata:~$ uname -a
> Linux suricata 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC
> 2012 x86_64 x86_64 x86_64 GNU/Linux
> pevman at suricata:~$
>
>
>
>
>
> hope it helps.
>
> thanks
>
>
> On Sat, Dec 1, 2012 at 3:54 AM, Martin Holste <mcholste at gmail.com> wrote:
>
>> Adjust your default timeouts much lower so that streams are taken out of
>> the connection pool more quickly.
>>
>> This config is aggressive, but I think you'll find it does the trick. If
>> it doesn't work, I'd like to know:
>>
>> flow-timeouts:
>>   default:
>>     new: 1 # 30
>>     established: 10 # 300
>>     closed: 0
>>     emergency-new: 1 # 10
>>     emergency-established: 1 # 100
>>     emergency-closed: 0
>>   tcp:
>>     new: 1 # 60
>>     established: 10 # 3600
>>     closed: 0 # 120
>>     emergency-new: 1 # 10
>>     emergency-established: 5 # 1 # 300
>>     emergency-closed: 20
>>   udp:
>>     new: 1 # 30
>>     established: 1 # 300
>>     emergency-new: 1 # 10
>>     emergency-established: 1 # 100
>>   icmp:
>>     new: 1 # 30
>>     established: 1 # 300
>>     emergency-new: 1 # 10
>>     emergency-established: 1 # 100
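>>
>> Once it's in, a quick way to see whether it worked (a sketch - the counter
>> names are from a 1.4-era stats.log; if your build lacks the flow.emerg_mode
>> counters, the memcap_drop lines alone tell the story; adjust the path):
>>
>>   grep -E "flow.emerg_mode|tcp.segment_memcap_drop" /var/log/suricata/stats.log | tail -8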
>>
>>
>>
>>
>> On Fri, Nov 30, 2012 at 4:15 PM, Dave Remien <dave.remien at gmail.com> wrote:
>>
>>> Fernando,
>>>
>>> If I'm reading your config file right, you're asking for 8.3 million
>>> sessions of 512KB each? I think that works out to about 4.4TB of RAM; rather
>>> more than the 64GB memcap.
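>>>
>>> (Spelling the arithmetic out: 8,388,608 sessions x 512 KiB of reassembly
>>> depth each = 4 TiB, about 4.4 TB - so the reassembly memcap fills long
>>> before most sessions can use their full depth.)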
>>>
>>> Cheers,
>>>
>>> Dave
>>>
>>>
>>> On Fri, Nov 30, 2012 at 10:24 AM, Fernando Sclavo <fsclavo at gmail.com> wrote:
>>>
>>>> Hello all!
>>>> I'm installing an IDS at our company, monitoring two core switches with
>>>> sustained traffic of about 2 Gbps each. The server is a Dell R715, 32
>>>> cores, 192 GB RAM, with two Intel X520 NICs. The Suricata version is 1.4b3.
>>>> The problem we are facing is that tcp.segment_memcap_drop increases
>>>> continuously once tcp.reassembly_memuse reaches its max size
>>>> (64gb!!).
>>>> The related suricata.yaml stanza is:
>>>>
>>>> stream:
>>>>   memcap: 24gb
>>>>   checksum-validation: no # reject wrong csums
>>>>   inline: no # auto will use inline mode in IPS mode, yes or no set it statically
>>>>   max-sessions: 8388608
>>>>   prealloc-sessions: 8388608
>>>>   reassembly:
>>>>     memcap: 64gb
>>>>     depth: 512kb # reassemble 512kb into a stream
>>>>     toserver-chunk-size: 2560
>>>>     toclient-chunk-size: 2560
>>>>
>>>> Thanks in advance!
>>>>
>>>
>>>
>>>
>>> --
>>> ".... We are such stuff
>>> As dreams are made on; and our little life
>>> Is rounded with a sleep."
>>> -- Shakespeare, The Tempest - Act 4
>>>
>>>
>>>
>>
>>
>>
>
>
>
> --
> Regards,
> Peter Manev
>
>