<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>I would agree with you that my server is big for my traffic, but nevertheless I keep getting closer to my RAM limit.<div>I mean, PF_RING takes a lot of RAM, my stream reassembly is huge, and so on.</div><div><br></div><div>So far it hasn't crashed for two days. I will keep on tweaking.</div><div><br></div><div>Thanks for all of your input.</div><div><br></div><div>I am posting back to the community.<br><br><div>> Date: Tue, 25 Nov 2014 21:11:04 +0100<br>> Subject: Re: [Oisf-users] Memory Allocations<br>> From: petermanev@gmail.com<br>> To: coolyasha@hotmail.com<br>> <br>> On Tue, Nov 25, 2014 at 7:34 PM, Yasha Zislin <coolyasha@hotmail.com> wrote:<br>> > Good to know about gaps. I guess if Suricata is not reporting packet loss<br>> > but gaps increase, it is for other reasons outside of our control.<br>> ><br>> > I use workers runmode. Latest PF_RING. I've tested other modes and packet<br>> > capture and these two gave me the best results.<br>> > My packet loss is almost 0%. Occasionally, maybe once a day, I get a burst of<br>> > packet loss of a couple of million packets at once. Ideally I want it to be at<br>> > 0%, but we get 10 mil packets per minute on both of the SPAN ports. If<br>> > Suricata can stay up without running out of RAM, this packet loss would<br>> > probably be acceptable.<br>> > I have two 10gig fiber NICs: Broadcom Corporation Netxtreme II BCM57810<br>> ><br>> > I don't think I can use CPU affinity since it only applies to Intel cards. I<br>> > saw a couple of your articles about that. I've tried it on mine, but such<br>> > settings.<br>> ><br>> > I have 2 x Intel Xeon CPU E5-2690 (3GHz) CPUs, which makes 40 logical.<br>> > CPU doesn't get overutilized. I have 20 pfring threads per SPAN port. 
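[Editor's note: the capture setup described just above (workers runmode, 20 pf_ring threads on each of two SPAN ports) would correspond roughly to a suricata.yaml fragment like the sketch below. The interface names and cluster IDs are placeholders, not taken from the thread.]

```yaml
# Hypothetical pfring capture section matching the described setup:
# 20 capture threads per monitored SPAN port, flow-based load balancing.
runmode: workers

pfring:
  - interface: eth4          # first SPAN port (placeholder name)
    threads: 20
    cluster-id: 99
    cluster-type: cluster_flow
  - interface: eth5          # second SPAN port (placeholder name)
    threads: 20
    cluster-id: 100
    cluster-type: cluster_flow
```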
After<br>> > trying various combinations, this one was the best fit.<br>> <br>> <br>> This set up can handle 4x times the traffic in my opinion :) - so you<br>> should not be worried in that respect I think :)<br>> Please let the list know if you have overcome your OOM troubles or if<br>> you need any further help :)<br>> <br>> Thank you<br>> <br>> ><br>> >> Date: Mon, 24 Nov 2014 23:03:41 +0100<br>> ><br>> >> Subject: Re: [Oisf-users] Memory Allocations<br>> >> From: petermanev@gmail.com<br>> >> To: coolyasha@hotmail.com<br>> >><br>> >> On Mon, Nov 24, 2014 at 9:19 PM, Yasha Zislin <coolyasha@hotmail.com><br>> >> wrote:<br>> >> > Good to know. I will decrease my flow values and stream.<br>> >> > My goal is to avoid packet loss.<br>> >> > What kind of counters should I be looking at when stream memcap gets<br>> >> > maxed<br>> >> > out?<br>> >> > I get a lot of these:<br>> >> > tcp.reassembly_gap | RxPFReth219 | 473308<br>> >> ><br>> >><br>> >> gaps are streams that have... gaps :) thus not allowing for proper<br>> >> inspection.<br>> >> Gaps can occur for various of reasons related to port mirroring and or<br>> >> traffic loss on the way to the mirror port and other...<br>> >><br>> >> > As far as I know, these are lost packets.<br>> >> ><br>> >> > As far as my traffic goes:<br>> >> > - I monitor 2 SPAN ports (each is a 10gig card but traffic doesnt go<br>> >> > over<br>> >> > 2gigs).<br>> >> > - I use 20 logical CPUs for each SPAN Port.<br>> >> > - 97% of the traffic is HTTP<br>> >> ><br>> >><br>> >> What do you use for runmode? afpacket/pfring?<br>> >> What is your packet loss? (percentage wise)<br>> >> What is your NIC model?(do you use CPU affinity)<br>> >> What are your CPUs?<br>> >><br>> >> > So in my mind, stream, flow and timeouts are what is important and can<br>> >> > cause<br>> >> > packet loss.<br>> >> ><br>> >> > As far as RAM usage, suricata does crash. 
I get a lot of kernel messages<br>> >> > in<br>> >> > /var/log/messages when it dies.<br>> >> > Such as:<br>> >> > kernel: Out of memory: Kill process 4094 (RxPFReth04) score 982 or<br>> >> > sacrifice<br>> >> > child<br>> >> > Judging from TOP, SWAP barely gets used.<br>> >><br>> >><br>> >> yep ..that looks like OOM<br>> >><br>> >><br>> >> ><br>> >> > Oh, and I am not using 2.1 version of Suricata. I am on 2.0.4<br>> >> ><br>> >> > I appreciate your time helping me. Not everything is available in the<br>> >> > documentation. Trying to learn this wonderful IDS solution :)<br>> >> ><br>> >> ><br>> >> ><br>> >> ><br>> >> >> Date: Mon, 24 Nov 2014 20:00:03 +0100<br>> >> ><br>> >> >> Subject: Re: [Oisf-users] Memory Allocations<br>> >> >> From: petermanev@gmail.com<br>> >> >> To: coolyasha@hotmail.com<br>> >> >><br>> >> >> On Mon, Nov 24, 2014 at 7:47 PM, Yasha Zislin <coolyasha@hotmail.com><br>> >> >> wrote:<br>> >> >> > For reassembly, chunk-prealloc needs to be greater or equal than the<br>> >> >> > sum<br>> >> >> > of<br>> >> >> > all segements prealloc.<br>> >> >> > To get total reassembly ram needs sum up each segment's (size x<br>> >> >> > prealloc).<br>> >> >> > I've selected appropriate prealloc for each segment size after<br>> >> >> > running<br>> >> >> > suricata for a while and in debug mode it tells you if specific<br>> >> >> > segment<br>> >> >> > size<br>> >> >> > was exceeded.<br>> >> >> > So with these values, I need a bit under 40gb, which is fine.<br>> >> >> > For regular stream memcap, I have 2mil prealloc-sessions and i didnt<br>> >> >> > set<br>> >> >> > hash-size. I assume it is using default.<br>> >> >> > I keep decreasing my memcap and right now it is 10gb. 
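[Editor's note: the "sum of each segment's (size x prealloc)" rule described above can be checked with a few lines of Python, using the segment pools from the stream.reassembly config quoted further down in this thread. Note this counts only the preallocated segment pools, not chunk-prealloc or other stream overhead.]

```python
# Preallocated segment pools from the stream.reassembly config quoted
# later in this thread: (segment size in bytes, prealloc count).
segments = [
    (4, 15_000),
    (16, 200_000),
    (112, 400_000),
    (248, 300_000),
    (512, 200_000),
    (768, 100_000),
    (1448, 1_000_000),
    (65535, 400_000),
]

# Total preallocated reassembly RAM = sum of size * prealloc over all pools.
total_bytes = sum(size * count for size, count in segments)

print(f"{total_bytes:,} bytes ~= {total_bytes / 2**30:.1f} GiB")
# -> 27,963,660,000 bytes ~= 26.0 GiB
```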
Sounds like i<br>> >> >> > can<br>> >> >> > go<br>> >> >> > down all the way to like 3 gb.<br>> >> >><br>> >> >> You should further lower it - more info why -<br>> >> >><br>> >> >><br>> >> >> http://pevma.blogspot.se/2014/08/suricata-flows-flow-managers-and-effect.html<br>> >> >><br>> >> >><br>> >> >> ><br>> >> >> > Now, I wonder because of which buffer my suricata runs out of memory.<br>> >> >><br>> >> >> Suricata does not run out of memory. Suricata I suspect uses all the<br>> >> >> memory allocated to it ...which is more than what the box has - hence<br>> >> >> goes into swap.<br>> >> >><br>> >> >> ><br>> >> >> > So after Suricata gets started and allocates whatever RAM for itself,<br>> >> >> > which<br>> >> >> > buffers are not fully allocated? It sounds like stream reassembly<br>> >> >> > buffer<br>> >> >> > doesnt get utilized right away.<br>> >> >> ><br>> >> >> > What about flows? isn't that kind of important? But judging from<br>> >> >> > stats.log I<br>> >> >> > dont have any negative entries for flow:<br>> >> >> > flow_mgr.closed_pruned | FlowManagerThread | 122808768<br>> >> >> > flow_mgr.new_pruned | FlowManagerThread | 23052773<br>> >> >> > flow_mgr.est_pruned | FlowManagerThread | 17538938<br>> >> >> > flow.memuse | FlowManagerThread | 11786200760<br>> >> >> > flow.spare | FlowManagerThread | 40006294<br>> >> >> > flow.emerg_mode_entered | FlowManagerThread | 0<br>> >> >> > flow.emerg_mode_over | FlowManagerThread | 0<br>> >> >><br>> >> >> As long as you do not enter emergency mode you are ok.(so keep lowering<br>> >> >> it)<br>> >> >><br>> >> >> ><br>> >> >> ><br>> >> >> > For config, I have<br>> >> >> > flow:<br>> >> >> > memcap: 15gb<br>> >> >> > hash-size: 3000000<br>> >> >> > prealloc: 40000000<br>> >> >> > emergency-recovery: 30<br>> >> >> ><br>> >> >><br>> >> >><br>> >> >> this is a seriously excessive setting for flow - please read my<br>> >> >> comments<br>> >> >> above.<br>> >> >><br>> >> >> > And also, timeouts<br>> >> >> > 
flow-timeouts:<br>> >> >> ><br>> >> >> > default:<br>> >> >> > new: 3<br>> >> >> > established: 30<br>> >> >> > closed: 0<br>> >> >> > emergency-new: 10<br>> >> >> > emergency-established: 10<br>> >> >> > emergency-closed: 0<br>> >> >> > tcp:<br>> >> >> > new: 6<br>> >> >> > established: 100<br>> >> >> > closed: 0<br>> >> >> > emergency-new: 1<br>> >> >> > emergency-established: 5<br>> >> >> > emergency-closed: 2<br>> >> >> > udp:<br>> >> >> > new: 3<br>> >> >> > established: 30<br>> >> >> > emergency-new: 3<br>> >> >> > emergency-established: 10<br>> >> >> > icmp:<br>> >> >> > new: 3<br>> >> >> > established: 30<br>> >> >> > emergency-new: 1<br>> >> >> > emergency-established: 10<br>> >> >> ><br>> >> >><br>> >> >> The timeouts are fine in my opinion. But then again I do not know what<br>> >> >> type of traffic you are monitoring and how much of it.<br>> >> >><br>> >> >> > Thanks.<br>> >> >> ><br>> >> >> ><br>> >> >> >> Date: Mon, 24 Nov 2014 19:01:35 +0100<br>> >> >> ><br>> >> >> >> Subject: Re: [Oisf-users] Memory Allocations<br>> >> >> >> From: petermanev@gmail.com<br>> >> >> >> To: coolyasha@hotmail.com<br>> >> >> >><br>> >> >> >> On Fri, Nov 21, 2014 at 4:25 PM, Yasha Zislin<br>> >> >> >> <coolyasha@hotmail.com><br>> >> >> >> wrote:<br>> >> >> >> > Cool. Good to know that these are separate from each other.<br>> >> >> >> > So I understand how to calculate reassembly memcap (from your<br>> >> >> >> > article<br>> >> >> >> > I<br>> >> >> >> > believe).<br>> >> >> >> > I am not 100% clear how to calculate stream memcap.<br>> >> >> >><br>> >> >> >> How did you calculate reassembly memcap?<br>> >> >> >><br>> >> >> >><br>> >> >> >> ><br>> >> >> >> > Also, I cant seem to figure out how much flow memcap I need. I<br>> >> >> >> > know<br>> >> >> >> > how<br>> >> >> >> > it<br>> >> >> >> > is being calculated with hash size and prealloc, but I dont know<br>> >> >> >> > how<br>> >> >> >> > much<br>> >> >> >> > hash-size and prealloc I need. 
Looking at stats.log, I always have<br>> >> >> >> > flow<br>> >> >> >> > spare values.<br>> >> >> >><br>> >> >> >> Usually 1GB with 1mil prealloc is ok . Keep an eye on your emergency<br>> >> >> >> mode in your stats.log - as long as you do not enter that you are<br>> >> >> >> good.<br>> >> >> >> hash-size: 1048576<br>> >> >> >> prealloc: 1048576<br>> >> >> >><br>> >> >> >> ><br>> >> >> >> > Can you advise me on these two?<br>> >> >> >> ><br>> >> >> >> > Thank you very much.<br>> >> >> >> ><br>> >> >> >> ><br>> >> >> >> ><br>> >> >> >> >> Date: Fri, 21 Nov 2014 09:44:04 +0100<br>> >> >> >> ><br>> >> >> >> >> Subject: Re: [Oisf-users] Memory Allocations<br>> >> >> >> >> From: petermanev@gmail.com<br>> >> >> >> >> To: coolyasha@hotmail.com<br>> >> >> >> >><br>> >> >> >> >> On Thu, Nov 20, 2014 at 7:45 PM, Yasha Zislin<br>> >> >> >> >> <coolyasha@hotmail.com><br>> >> >> >> >> wrote:<br>> >> >> >> >> > That's what I was asking in the original question :) do these<br>> >> >> >> >> > values<br>> >> >> >> >> > sum<br>> >> >> >> >> > up<br>> >> >> >> >> > or are part of each other?<br>> >> >> >> >> > so which one is used for segments? or what is the purpose of<br>> >> >> >> >> > both?<br>> >> >> >> >><br>> >> >> >> >><br>> >> >> >> >> OK - apologies I didn't understand your question.<br>> >> >> >> >> Yes they are separate independent values.<br>> >> >> >> >><br>> >> >> >> >> Reassembly memcap is with respect to all segments that need<br>> >> >> >> >> reassembly.<br>> >> >> >> >> Stream memcap is for .. 
stream :) - ones that are of no need for<br>> >> >> >> >> reassembly<br>> >> >> >> >><br>> >> >> >> >> ><br>> >> >> >> >> ><br>> >> >> >> >> ><br>> >> >> >> >> > On 20 nov 2014, at 19:34, Yasha Zislin <coolyasha@hotmail.com><br>> >> >> >> >> > wrote:<br>> >> >> >> >> ><br>> >> >> >> >> > Here is my section for stream<br>> >> >> >> >> ><br>> >> >> >> >> > stream:<br>> >> >> >> >> > memcap: 60gb<br>> >> >> >> >> > checksum-validation: no # reject wrong csums<br>> >> >> >> >> > inline: no # auto will use inline mode in IPS mode, yes<br>> >> >> >> >> > or no set it statically<br>> >> >> >> >> > prealloc-sessions: 2000000<br>> >> >> >> >> > midstream: false<br>> >> >> >> >> > asyn-oneside: false<br>> >> >> >> >> > reassembly:<br>> >> >> >> >> > memcap: 80gb<br>> >> >> >> >> ><br>> >> >> >> >> ><br>> >> >> >> >> > You have 60gb for stream and 80gb for reassembly = 140 ... You<br>> >> >> >> >> > are<br>> >> >> >> >> > right<br>> >> >> >> >> > away going into swap with this (if the memcaps are reached) as<br>> >> >> >> >> > compared<br>> >> >> >> >> > to<br>> >> >> >> >> > 132gb ram that you have on the box.<br>> >> >> >> >> ><br>> >> >> >> >> ><br>> >> >> >> >> > depth: 20mb # reassemble 1mb into a stream<br>> >> >> >> >> > toserver-chunk-size: 2560<br>> >> >> >> >> > toclient-chunk-size: 2560<br>> >> >> >> >> > randomize-chunk-size: yes<br>> >> >> >> >> > #randomize-chunk-range: 10<br>> >> >> >> >> > #raw: yes<br>> >> >> >> >> > chunk-prealloc: 3000000<br>> >> >> >> >> > segments:<br>> >> >> >> >> > - size: 4<br>> >> >> >> >> > prealloc: 15000<br>> >> >> >> >> > - size: 16<br>> >> >> >> >> > prealloc: 200000<br>> >> >> >> >> > - size: 112<br>> >> >> >> >> > prealloc: 400000<br>> >> >> >> >> > - size: 248<br>> >> >> >> >> > prealloc: 300000<br>> >> >> >> >> > - size: 512<br>> >> >> >> >> > prealloc: 200000<br>> >> >> >> >> > - size: 768<br>> >> >> >> >> > prealloc: 100000<br>> >> >> >> >> > - size: 1448<br>> >> >> >> >> > prealloc: 1000000<br>> >> >> >> >> > - size: 
65535<br>> >> >> >> >> > prealloc: 400000<br>> >> >> >> >> ><br>> >> >> >> >> ><br>> >> >> >> >> > ________________________________<br>> >> >> >> >> > Subject: Re: [Oisf-users] Memory Allocations<br>> >> >> >> >> > From: petermanev@gmail.com<br>> >> >> >> >> > Date: Thu, 20 Nov 2014 19:32:51 +0100<br>> >> >> >> >> > To: coolyasha@hotmail.com<br>> >> >> >> >> ><br>> >> >> >> >> > Hi,<br>> >> >> >> >> ><br>> >> >> >> >> > What is the "reassembly" memcap?<br>> >> >> >> >> > Thanks<br>> >> >> >> >> ><br>> >> >> >> >> > Regards,<br>> >> >> >> >> > Peter Manev<br>> >> >> >> >> ><br>> >> >> >> >> > On 20 nov 2014, at 19:16, Yasha Zislin <coolyasha@hotmail.com><br>> >> >> >> >> > wrote:<br>> >> >> >> >> ><br>> >> >> >> >> > I am running 2.0.4.<br>> >> >> >> >> ><br>> >> >> >> >> > Here are ALL memcap values:<br>> >> >> >> >> > defrag - 2gb<br>> >> >> >> >> > flow - 15gb<br>> >> >> >> >> > Stream - 60gb<br>> >> >> >> >> > Host - 10gb<br>> >> >> >> >> > === 87GB<br>> >> >> >> >> ><br>> >> >> >> >> > I've created such high memcap for Stream because it seems to<br>> >> >> >> >> > help<br>> >> >> >> >> > with<br>> >> >> >> >> > packet loss.<br>> >> >> >> >> > I am monitoring two interfaces with 10million packets per<br>> >> >> >> >> > minute<br>> >> >> >> >> > on<br>> >> >> >> >> > each<br>> >> >> >> >> > on<br>> >> >> >> >> > average. 
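[Editor's note: the sanity check applied in this thread — sum all the configured memcaps and compare the total against physical RAM — is simple arithmetic. A sketch using the values quoted in the thread:]

```python
# Memcap values quoted in this thread (GB), including the reassembly
# memcap from the stream section; compare the worst case against RAM.
memcaps_gb = {
    "defrag": 2,
    "flow": 15,
    "stream": 60,
    "host": 10,
    "stream.reassembly": 80,
}
ram_gb = 132

total = sum(memcaps_gb.values())
print(f"total memcaps: {total} GB vs {ram_gb} GB RAM")
# -> total memcaps: 167 GB vs 132 GB RAM

# stream + reassembly alone already exceed physical RAM if both fill up,
# which is consistent with the OOM kills reported in the thread.
assert memcaps_gb["stream"] + memcaps_gb["stream.reassembly"] > ram_gb
```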
Most traffic is HTTP.<br>> >> >> >> >> > Can you recommend memcaps for each section?<br>> >> >> >> >> ><br>> >> >> >> >> > Thanks.<br>> >> >> >> >> ><br>> >> >> >> >> ><br>> >> >> >> >> >> Date: Thu, 20 Nov 2014 18:58:26 +0100<br>> >> >> >> >> >> Subject: Re: [Oisf-users] Memory Allocations<br>> >> >> >> >> >> From: petermanev@gmail.com<br>> >> >> >> >> >> To: coolyasha@hotmail.com<br>> >> >> >> >> >> CC: oisf-users@lists.openinfosecfoundation.org<br>> >> >> >> >> >><br>> >> >> >> >> >> On Thu, Nov 20, 2014 at 6:50 PM, Yasha Zislin<br>> >> >> >> >> >> <coolyasha@hotmail.com><br>> >> >> >> >> >> wrote:<br>> >> >> >> >> >> > I dont know if swap starts to be used by Suricata crashes<br>> >> >> >> >> >> > after<br>> >> >> >> >> >> > couple<br>> >> >> >> >> >> > of<br>> >> >> >> >> >> > days of running.<br>> >> >> >> >> >> > In system logs, I have kernel messages such as this:<br>> >> >> >> >> >> > kernel: RxPFReth22 invoked oom-killer: gfp_mask=0x201da,<br>> >> >> >> >> >> > order=0,<br>> >> >> >> >> >> > oom_adj=0,<br>> >> >> >> >> >> > oom_score_adj=0<br>> >> >> >> >> >> > kernel: RxPFReth22 cpuset=/ mems_allowed=0-1<br>> >> >> >> >> >> > kernel: Pid: 60417, comm: RxPFReth22 Not tainted<br>> >> >> >> >> >> > 2.6.32-504.el6.x86_64<br>> >> >> >> >> >> > #1<br>> >> >> >> >> >> ><br>> >> >> >> >> >> > Then after a ton of stack traces and memory errors, I see<br>> >> >> >> >> >> > this:<br>> >> >> >> >> >> > kernel: Out of memory: Kill process 59782 (Suricata-Main)<br>> >> >> >> >> >> > score<br>> >> >> >> >> >> > 985<br>> >> >> >> >> >> > or<br>> >> >> >> >> >> > sacrifice child<br>> >> >> >> >> >> > Killed process 59782, UID 501, (Suricata-Main)<br>> >> >> >> >> >> > total-vm:135646364kB,<br>> >> >> >> >> >> > anon-rss:108513440kB, file-rss:21329088kB<br>> >> >> >> >> >> ><br>> >> >> >> >> >> > I wouldnt be suprised that my buffers are set too big.<br>> >> >> >> >> >> > I am just not clear on some sections on how much RAM they<br>> >> >> >> >> >> > use.<br>> >> >> >> >> 
>> > and also for stream section, do you need to add memcap and<br>> >> >> >> >> >> > reassembly<br>> >> >> >> >> >> > buffers together or are they part of each other? As far as I<br>> >> >> >> >> >> > understand<br>> >> >> >> >> >> > reassembly buffer needs to be higher than memcap.<br>> >> >> >> >> >> ><br>> >> >> >> >> >> > I have 132gb of RAM. When suricata starts, it is using 64gb<br>> >> >> >> >> >><br>> >> >> >> >> >><br>> >> >> >> >> >> Which Suricata version are you using?<br>> >> >> >> >> >> What is the total memcap sum values in your suricata.yaml?<br>> >> >> >> >> >><br>> >> >> >> >> >><br>> >> >> >> >> >> ><br>> >> >> >> >> >> >> Date: Thu, 20 Nov 2014 18:21:54 +0100<br>> >> >> >> >> >> >> Subject: Re: [Oisf-users] Memory Allocations<br>> >> >> >> >> >> >> From: petermanev@gmail.com<br>> >> >> >> >> >> >> To: coolyasha@hotmail.com<br>> >> >> >> >> >> >> CC: oisf-users@lists.openinfosecfoundation.org<br>> >> >> >> >> >> ><br>> >> >> >> >> >> >><br>> >> >> >> >> >> >> On Mon, Nov 17, 2014 at 3:45 PM, Yasha Zislin<br>> >> >> >> >> >> >> <coolyasha@hotmail.com><br>> >> >> >> >> >> >> wrote:<br>> >> >> >> >> >> >> > I am having issues with Suricata crashing due to running<br>> >> >> >> >> >> >> > out<br>> >> >> >> >> >> >> > of<br>> >> >> >> >> >> >> > memory.<br>> >> >> >> >> >> >> > I just wanted to clarify certain sections of config that<br>> >> >> >> >> >> >> > I<br>> >> >> >> >> >> >> > am<br>> >> >> >> >> >> >> > doing<br>> >> >> >> >> >> >> > my<br>> >> >> >> >> >> >> > calculations correctly.<br>> >> >> >> >> >> >> ><br>> >> >> >> >> >> >> > max-pending-packets 65000 ------- Does that use a lot of<br>> >> >> >> >> >> >> > Ram?<br>> >> >> >> >> >> >> ><br>> >> >> >> >> >> >> > So for defrag and flow sections, whatever memcap values I<br>> >> >> >> >> >> >> > set,<br>> >> >> >> >> >> >> > that's<br>> >> >> >> >> >> >> > what<br>> >> >> >> >> >> >> > the maximum that can be used, correct?<br>> >> >> >> >> >> >> ><br>> >> >> >> >> >> >> > Stream section 
is a bit unclear to me. Memcap for Stream<br>> >> >> >> >> >> >> > and<br>> >> >> >> >> >> >> > Memcap<br>> >> >> >> >> >> >> > for<br>> >> >> >> >> >> >> > Reassembly, how do they relate? Which one should be<br>> >> >> >> >> >> >> > bigger?<br>> >> >> >> >> >> >> ><br>> >> >> >> >> >> >> > Host section, once again, memcap is the maximum RAM that<br>> >> >> >> >> >> >> > would<br>> >> >> >> >> >> >> > be<br>> >> >> >> >> >> >> > used?<br>> >> >> >> >> >> >> ><br>> >> >> >> >> >> >> > And lastly, libhtp section, request and response<br>> >> >> >> >> >> >> > -body-limit<br>> >> >> >> >> >> >> > values,<br>> >> >> >> >> >> >> > is<br>> >> >> >> >> >> >> > that<br>> >> >> >> >> >> >> > maximum memory utilization of LIBHTP?<br>> >> >> >> >> >> >> ><br>> >> >> >> >> >> >> > Thanks.<br>> >> >> >> >> >> >> ><br>> >> >> >> >> >> >><br>> >> >> >> >> >> >><br>> >> >> >> >> >> >> Hi,<br>> >> >> >> >> >> >><br>> >> >> >> >> >> >> You mean you are running into swap, correct?<br>> >> >> >> >> >> >><br>> >> >> >> >> >> >> If you sum up all the memcap values you have given in<br>> >> >> >> >> >> >> suricata.yaml<br>> >> >> >> >> >> >> -<br>> >> >> >> >> >> >> would that be less than what you actually have as RAM on<br>> >> >> >> >> >> >> the<br>> >> >> >> >> >> >> server<br>> >> >> >> >> >> >> running Suricata?<br>> >> >> >> >> >> >><br>> >> >> >> >> >> >> Thank you<br>> >> >> >> >> >> >><br>> >> >> >> >> >> >><br>> >> >> >> >> >> >> --<br>> >> >> >> >> >> >> Regards,<br>> >> >> >> >> >> >> Peter Manev<br>> >> >> >> >> >><br>> >> >> >> >> >><br>> >> >> >> >> >><br>> >> >> >> >> >> --<br>> >> >> >> >> >> Regards,<br>> >> >> >> >> >> Peter Manev<br>> >> >> >> >><br>> >> >> >> >><br>> >> >> >> >><br>> >> >> >> >> --<br>> >> >> >> >> Regards,<br>> >> >> >> >> Peter Manev<br>> >> >> >><br>> >> >> >><br>> >> >> >><br>> >> >> >> --<br>> >> >> >> Regards,<br>> >> >> >> Peter Manev<br>> >> >><br>> >> >><br>> >> >><br>> >> >> --<br>> >> >> Regards,<br>> >> >> Peter Manev<br>> >><br>> >><br>> 
>><br>> >> --<br>> >> Regards,<br>> >> Peter Manev<br>> <br>> <br>> <br>> -- <br>> Regards,<br>> Peter Manev<br></div></div> </div></body>
</html>