[Oisf-users] Memory Allocations

Peter Manev petermanev at gmail.com
Wed Nov 26 14:21:35 UTC 2014


On Wed, Nov 26, 2014 at 3:19 PM, Yasha Zislin <coolyasha at hotmail.com> wrote:
> I would agree with you that my server is big for my traffic but nevertheless
> I keep getting closer to my RAM limit.
> I mean, PF_RING takes a lot of RAM, my stream reassembly is huge, and so on.
>
> So far it hasn't crashed for two days. I will keep on tweaking.
>
> Thanks for all of your input.
>
> I am posting back to the community.

Cool - Thanks!

>
>> Date: Tue, 25 Nov 2014 21:11:04 +0100
>
>> Subject: Re: [Oisf-users] Memory Allocations
>> From: petermanev at gmail.com
>> To: coolyasha at hotmail.com
>>
>> On Tue, Nov 25, 2014 at 7:34 PM, Yasha Zislin <coolyasha at hotmail.com>
>> wrote:
>> > Good to know about gaps. I guess if Suricata is not reporting packet
>> > loss
>> > but gaps increase, it is for other reasons outside of our control.
>> >
>> > I use the workers runmode with the latest PF_RING. I've tested other
>> > runmodes and capture methods, and these two gave me the best results.
>> > My packet loss is almost 0%. Occasionally, maybe once a day, I get a
>> > burst of packet loss of a couple of million packets at once. Ideally I
>> > want it to be at 0%, but we get 10 million packets per minute on both
>> > of the SPAN ports. If Suricata can stay up without running out of RAM,
>> > this packet loss would probably be acceptable.
>> > I have two 10gig fiber nics : Broadcom Corporation Netxtreme II BCM57810
>> >
>> > I don't think I can use CPU affinity since it only applies to Intel
>> > cards. I saw a couple of your articles about that. I've tried it on
>> > mine, but such settings did not help.
>> >
>> > I have 2 x Intel Xeon E5-2690 (3 GHz) CPUs, which makes 40 logical
>> > cores. The CPU doesn't get overutilized. I have 20 pfring threads per
>> > SPAN port. After trying various combinations, this one was the best
>> > fit.
>>
>>
>> This setup can handle 4x the traffic in my opinion :) - so you
>> should not be worried in that respect, I think :)
>> Please let the list know if you have overcome your OOM troubles or if
>> you need any further help :)
>>
>> Thank you
>>
>> >
>> >> Date: Mon, 24 Nov 2014 23:03:41 +0100
>> >
>> >> Subject: Re: [Oisf-users] Memory Allocations
>> >> From: petermanev at gmail.com
>> >> To: coolyasha at hotmail.com
>> >>
>> >> On Mon, Nov 24, 2014 at 9:19 PM, Yasha Zislin <coolyasha at hotmail.com>
>> >> wrote:
>> >> > Good to know. I will decrease my flow and stream values.
>> >> > My goal is to avoid packet loss.
>> >> > What kind of counters should I be looking at when the stream memcap
>> >> > gets maxed out?
>> >> > I get a lot of these:
>> >> > tcp.reassembly_gap | RxPFReth219 | 473308
>> >> >
>> >>
>> >> Gaps are streams that have... gaps :) thus not allowing proper
>> >> inspection.
>> >> Gaps can occur for various reasons related to port mirroring and/or
>> >> traffic loss on the way to the mirror port, among others.
>> >>
>> >> > As far as I know, these are lost packets.
>> >> >
>> >> > As far as my traffic goes:
>> >> > - I monitor 2 SPAN ports (each is a 10gig card, but traffic doesn't
>> >> > go over 2gigs).
>> >> > - I use 20 logical CPUs for each SPAN Port.
>> >> > - 97% of the traffic is HTTP
>> >> >
>> >>
>> >> What runmode do you use - afpacket/pfring?
>> >> What is your packet loss (percentage-wise)?
>> >> What is your NIC model? (Do you use CPU affinity?)
>> >> What are your CPUs?
>> >>
>> >> > So in my mind, stream, flow, and timeouts are what matter and what
>> >> > can cause packet loss.
>> >> >
>> >> > As far as RAM usage goes, Suricata does crash. I get a lot of
>> >> > kernel messages in /var/log/messages when it dies, such as:
>> >> > kernel: Out of memory: Kill process 4094 (RxPFReth04) score 982 or
>> >> > sacrifice child
>> >> > Judging from top, swap barely gets used.
>> >>
>> >>
>> >> Yep... that looks like OOM.
>> >>
>> >>
>> >> >
>> >> > Oh, and I am not on the 2.1 version of Suricata; I am on 2.0.4.
>> >> >
>> >> > I appreciate your time helping me. Not everything is available in
>> >> > the documentation. Trying to learn this wonderful IDS solution :)
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >> Date: Mon, 24 Nov 2014 20:00:03 +0100
>> >> >
>> >> >> Subject: Re: [Oisf-users] Memory Allocations
>> >> >> From: petermanev at gmail.com
>> >> >> To: coolyasha at hotmail.com
>> >> >>
>> >> >> On Mon, Nov 24, 2014 at 7:47 PM, Yasha Zislin
>> >> >> <coolyasha at hotmail.com>
>> >> >> wrote:
>> >> >> > For reassembly, chunk-prealloc needs to be greater than or equal
>> >> >> > to the sum of all the segment preallocs.
>> >> >> > To get the total reassembly RAM, sum up each segment's
>> >> >> > (size x prealloc).
>> >> >> > I've selected an appropriate prealloc for each segment size after
>> >> >> > running Suricata for a while; in debug mode it tells you if a
>> >> >> > specific segment size was exceeded.
>> >> >> > So with these values, I need a bit under 40gb, which is fine.
>> >> >> > For the regular stream memcap, I have 2 million prealloc-sessions
>> >> >> > and I didn't set hash-size, so I assume it is using the default.
>> >> >> > I keep decreasing my memcap and right now it is 10gb. Sounds like
>> >> >> > I can go down all the way to 3gb.
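The reassembly arithmetic described above can be sketched as follows, using the segment sizes and prealloc counts from the stream configuration quoted further down in the thread (the chunk term assumes the configured 2560-byte chunk size; per-segment bookkeeping overhead is ignored):

```python
# Reassembly prealloc footprint: sum of (segment size * prealloc count)
# per pool, plus the chunk pool (chunk-prealloc * chunk size).
segments = {      # size in bytes -> prealloc count
    4: 15000,
    16: 200000,
    112: 400000,
    248: 300000,
    512: 200000,
    768: 100000,
    1448: 1000000,
    65535: 400000,
}
segment_bytes = sum(size * count for size, count in segments.items())
chunk_bytes = 3_000_000 * 2560    # chunk-prealloc * toserver-chunk-size
total = segment_bytes + chunk_bytes
print(f"segments: {segment_bytes / 1e9:.1f} GB, total: {total / 1e9:.1f} GB")
```

That is roughly 28 GB of preallocated segments plus 7.7 GB of chunks, about 35.6 GB in total - consistent with the "a bit under 40gb" estimate.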
>> >> >>
>> >> >> You should lower it further - more info on why here:
>> >> >> http://pevma.blogspot.se/2014/08/suricata-flows-flow-managers-and-effect.html
>> >> >>
>> >> >>
>> >> >> >
>> >> >> > Now, I wonder which buffer causes my Suricata to run out of
>> >> >> > memory.
>> >> >>
>> >> >> Suricata does not run out of memory - I suspect it uses all the
>> >> >> memory allocated to it, which is more than what the box has, and
>> >> >> hence goes into swap.
>> >> >>
>> >> >> >
>> >> >> > So after Suricata starts and allocates whatever RAM it needs for
>> >> >> > itself, which buffers are not fully allocated? It sounds like the
>> >> >> > stream reassembly buffer doesn't get utilized right away.
>> >> >> >
>> >> >> > What about flows? Isn't that kind of important? But judging from
>> >> >> > stats.log, I don't have any negative entries for flow:
>> >> >> > flow_mgr.closed_pruned | FlowManagerThread | 122808768
>> >> >> > flow_mgr.new_pruned | FlowManagerThread | 23052773
>> >> >> > flow_mgr.est_pruned | FlowManagerThread | 17538938
>> >> >> > flow.memuse | FlowManagerThread | 11786200760
>> >> >> > flow.spare | FlowManagerThread | 40006294
>> >> >> > flow.emerg_mode_entered | FlowManagerThread | 0
>> >> >> > flow.emerg_mode_over | FlowManagerThread | 0
>> >> >>
>> >> >> As long as you do not enter emergency mode you are OK (so keep
>> >> >> lowering it).
>> >> >>
>> >> >> >
>> >> >> >
>> >> >> > For config, I have:
>> >> >> > flow:
>> >> >> >   memcap: 15gb
>> >> >> >   hash-size: 3000000
>> >> >> >   prealloc: 40000000
>> >> >> >   emergency-recovery: 30
>> >> >> >
>> >> >>
>> >> >>
>> >> >> This is a seriously excessive setting for flow - please read my
>> >> >> comments above.
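The stats.log counters quoted above let one back out a rough per-flow cost; this sketch assumes flow.memuse is dominated by the preallocated spare flows (the exact flow struct size varies by Suricata version):

```python
# Per-flow memory cost inferred from the stats.log counters quoted above.
flow_memuse = 11786200760       # flow.memuse (bytes)
flow_spare = 40006294           # flow.spare (preallocated, unused flows)

# Assumption: memuse is dominated by the spare (preallocated) flows.
per_flow = flow_memuse / flow_spare
print(f"~{per_flow:.0f} bytes per flow")

# Cost of the 1 million prealloc suggested elsewhere in the thread:
suggested = 1_048_576 * per_flow
print(f"~{suggested / 1e9:.2f} GB for 1,048,576 preallocated flows")
```

At roughly 295 bytes per flow, a 1048576 prealloc would cost about 0.3 GB, versus the ~11.8 GB that the 40 million flows configured here are holding.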
>> >> >>
>> >> >> > And also, timeouts:
>> >> >> > flow-timeouts:
>> >> >> >   default:
>> >> >> >     new: 3
>> >> >> >     established: 30
>> >> >> >     closed: 0
>> >> >> >     emergency-new: 10
>> >> >> >     emergency-established: 10
>> >> >> >     emergency-closed: 0
>> >> >> >   tcp:
>> >> >> >     new: 6
>> >> >> >     established: 100
>> >> >> >     closed: 0
>> >> >> >     emergency-new: 1
>> >> >> >     emergency-established: 5
>> >> >> >     emergency-closed: 2
>> >> >> >   udp:
>> >> >> >     new: 3
>> >> >> >     established: 30
>> >> >> >     emergency-new: 3
>> >> >> >     emergency-established: 10
>> >> >> >   icmp:
>> >> >> >     new: 3
>> >> >> >     established: 30
>> >> >> >     emergency-new: 1
>> >> >> >     emergency-established: 10
>> >> >> >
>> >> >>
>> >> >> The timeouts are fine in my opinion. But then again I do not know
>> >> >> what
>> >> >> type of traffic you are monitoring and how much of it.
>> >> >>
>> >> >> > Thanks.
>> >> >> >
>> >> >> >
>> >> >> >> Date: Mon, 24 Nov 2014 19:01:35 +0100
>> >> >> >
>> >> >> >> Subject: Re: [Oisf-users] Memory Allocations
>> >> >> >> From: petermanev at gmail.com
>> >> >> >> To: coolyasha at hotmail.com
>> >> >> >>
>> >> >> >> On Fri, Nov 21, 2014 at 4:25 PM, Yasha Zislin
>> >> >> >> <coolyasha at hotmail.com>
>> >> >> >> wrote:
>> >> >> >> > Cool. Good to know that these are separate from each other.
>> >> >> >> > So I understand how to calculate the reassembly memcap (from
>> >> >> >> > your article, I believe).
>> >> >> >> > I am not 100% clear on how to calculate the stream memcap.
>> >> >> >>
>> >> >> >> How did you calculate reassembly memcap?
>> >> >> >>
>> >> >> >>
>> >> >> >> >
>> >> >> >> > Also, I can't seem to figure out how much flow memcap I need.
>> >> >> >> > I know how it is calculated from the hash size and prealloc,
>> >> >> >> > but I don't know how much hash-size and prealloc I need.
>> >> >> >> > Looking at stats.log, I always have flow spare values.
>> >> >> >>
>> >> >> >> Usually 1GB with 1 million prealloc is OK. Keep an eye on the
>> >> >> >> emergency mode counters in your stats.log - as long as you do
>> >> >> >> not enter that mode you are good.
>> >> >> >> hash-size: 1048576
>> >> >> >> prealloc: 1048576
>> >> >> >>
>> >> >> >> >
>> >> >> >> > Can you advise me on these two?
>> >> >> >> >
>> >> >> >> > Thank you very much.
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >> Date: Fri, 21 Nov 2014 09:44:04 +0100
>> >> >> >> >
>> >> >> >> >> Subject: Re: [Oisf-users] Memory Allocations
>> >> >> >> >> From: petermanev at gmail.com
>> >> >> >> >> To: coolyasha at hotmail.com
>> >> >> >> >>
>> >> >> >> >> On Thu, Nov 20, 2014 at 7:45 PM, Yasha Zislin
>> >> >> >> >> <coolyasha at hotmail.com>
>> >> >> >> >> wrote:
>> >> >> >> >> > That's what I was asking in the original question :) Do
>> >> >> >> >> > these values sum up, or are they part of each other?
>> >> >> >> >> > So which one is used for segments? And what is the purpose
>> >> >> >> >> > of both?
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> OK - apologies, I didn't understand your question.
>> >> >> >> >> Yes, they are separate, independent values.
>> >> >> >> >>
>> >> >> >> >> The reassembly memcap covers all segments that need
>> >> >> >> >> reassembly.
>> >> >> >> >> The stream memcap is for... streams :) - ones that have no
>> >> >> >> >> need for reassembly.
>> >> >> >> >>
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >> > On 20 nov 2014, at 19:34, Yasha Zislin
>> >> >> >> >> > <coolyasha at hotmail.com>
>> >> >> >> >> > wrote:
>> >> >> >> >> >
>> >> >> >> >> > Here is my section for stream
>> >> >> >> >> >
>> >> >> >> >> > stream:
>> >> >> >> >> >   memcap: 60gb
>> >> >> >> >> >   checksum-validation: no    # reject wrong csums
>> >> >> >> >> >   inline: no                 # auto will use inline mode in IPS mode, yes or no set it statically
>> >> >> >> >> >   prealloc-sessions: 2000000
>> >> >> >> >> >   midstream: false
>> >> >> >> >> >   async-oneside: false
>> >> >> >> >> >   reassembly:
>> >> >> >> >> >     memcap: 80gb
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >> > You have 60gb for stream and 80gb for reassembly = 140gb...
>> >> >> >> >> > You will go into swap right away with this (if the memcaps
>> >> >> >> >> > are reached), compared to the 132gb of RAM that you have on
>> >> >> >> >> > the box.
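A more proportionate split for this 132gb box might look like the sketch below; the numbers are illustrative assumptions, not tuned values for this sensor:

```yaml
stream:
  memcap: 4gb              # streams that never need reassembly
  prealloc-sessions: 2000000
  reassembly:
    memcap: 40gb           # headroom over the ~36gb of preallocated
                           # segments and chunks listed below
```

The point is simply that stream plus reassembly (plus the flow, defrag, and host memcaps) should sum to comfortably less than physical RAM.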
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >> >     depth: 20mb            # reassemble 20mb into a stream
>> >> >> >> >> >     toserver-chunk-size: 2560
>> >> >> >> >> >     toclient-chunk-size: 2560
>> >> >> >> >> >     randomize-chunk-size: yes
>> >> >> >> >> >     #randomize-chunk-range: 10
>> >> >> >> >> >     #raw: yes
>> >> >> >> >> >     chunk-prealloc: 3000000
>> >> >> >> >> >     segments:
>> >> >> >> >> >       - size: 4
>> >> >> >> >> >         prealloc: 15000
>> >> >> >> >> >       - size: 16
>> >> >> >> >> >         prealloc: 200000
>> >> >> >> >> >       - size: 112
>> >> >> >> >> >         prealloc: 400000
>> >> >> >> >> >       - size: 248
>> >> >> >> >> >         prealloc: 300000
>> >> >> >> >> >       - size: 512
>> >> >> >> >> >         prealloc: 200000
>> >> >> >> >> >       - size: 768
>> >> >> >> >> >         prealloc: 100000
>> >> >> >> >> >       - size: 1448
>> >> >> >> >> >         prealloc: 1000000
>> >> >> >> >> >       - size: 65535
>> >> >> >> >> >         prealloc: 400000
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >> > ________________________________
>> >> >> >> >> > Subject: Re: [Oisf-users] Memory Allocations
>> >> >> >> >> > From: petermanev at gmail.com
>> >> >> >> >> > Date: Thu, 20 Nov 2014 19:32:51 +0100
>> >> >> >> >> > To: coolyasha at hotmail.com
>> >> >> >> >> >
>> >> >> >> >> > Hi,
>> >> >> >> >> >
>> >> >> >> >> > What is the "reassembly" memcap?
>> >> >> >> >> > Thanks
>> >> >> >> >> >
>> >> >> >> >> > Regards,
>> >> >> >> >> > Peter Manev
>> >> >> >> >> >
>> >> >> >> >> > On 20 nov 2014, at 19:16, Yasha Zislin
>> >> >> >> >> > <coolyasha at hotmail.com>
>> >> >> >> >> > wrote:
>> >> >> >> >> >
>> >> >> >> >> > I am running 2.0.4.
>> >> >> >> >> >
>> >> >> >> >> > Here are ALL memcap values:
>> >> >> >> >> > defrag - 2gb
>> >> >> >> >> > flow - 15gb
>> >> >> >> >> > Stream - 60gb
>> >> >> >> >> > Host - 10gb
>> >> >> >> >> > === 87GB
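Adding the 80gb reassembly memcap from the stream section quoted earlier, the total commitment already exceeds physical RAM; a quick sketch of the sum:

```python
# Sum of the configured memcaps versus physical RAM.
memcaps_gb = {
    "defrag": 2,
    "flow": 15,
    "stream": 60,
    "stream-reassembly": 80,   # from the stream section quoted earlier
    "host": 10,
}
ram_gb = 132

total = sum(memcaps_gb.values())
print(f"memcaps: {total} GB vs RAM: {ram_gb} GB")
```

167gb of memcaps against 132gb of RAM explains the OOM kills once the caps start to fill.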
>> >> >> >> >> >
>> >> >> >> >> > I've set such a high memcap for stream because it seems to
>> >> >> >> >> > help with packet loss.
>> >> >> >> >> > I am monitoring two interfaces, with 10 million packets per
>> >> >> >> >> > minute on each on average. Most traffic is HTTP.
>> >> >> >> >> > Can you recommend memcaps for each section?
>> >> >> >> >> >
>> >> >> >> >> > Thanks.
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >> >> Date: Thu, 20 Nov 2014 18:58:26 +0100
>> >> >> >> >> >> Subject: Re: [Oisf-users] Memory Allocations
>> >> >> >> >> >> From: petermanev at gmail.com
>> >> >> >> >> >> To: coolyasha at hotmail.com
>> >> >> >> >> >> CC: oisf-users at lists.openinfosecfoundation.org
>> >> >> >> >> >>
>> >> >> >> >> >> On Thu, Nov 20, 2014 at 6:50 PM, Yasha Zislin
>> >> >> >> >> >> <coolyasha at hotmail.com>
>> >> >> >> >> >> wrote:
>> >> >> >> >> >> > I don't know if swap starts to be used, but Suricata
>> >> >> >> >> >> > crashes after a couple of days of running.
>> >> >> >> >> >> > In system logs, I have kernel messages such as this:
>> >> >> >> >> >> > kernel: RxPFReth22 invoked oom-killer: gfp_mask=0x201da,
>> >> >> >> >> >> > order=0,
>> >> >> >> >> >> > oom_adj=0,
>> >> >> >> >> >> > oom_score_adj=0
>> >> >> >> >> >> > kernel: RxPFReth22 cpuset=/ mems_allowed=0-1
>> >> >> >> >> >> > kernel: Pid: 60417, comm: RxPFReth22 Not tainted
>> >> >> >> >> >> > 2.6.32-504.el6.x86_64
>> >> >> >> >> >> > #1
>> >> >> >> >> >> >
>> >> >> >> >> >> > Then after a ton of stack traces and memory errors, I see
>> >> >> >> >> >> > this:
>> >> >> >> >> >> > kernel: Out of memory: Kill process 59782 (Suricata-Main)
>> >> >> >> >> >> > score
>> >> >> >> >> >> > 985
>> >> >> >> >> >> > or
>> >> >> >> >> >> > sacrifice child
>> >> >> >> >> >> > Killed process 59782, UID 501, (Suricata-Main)
>> >> >> >> >> >> > total-vm:135646364kB,
>> >> >> >> >> >> > anon-rss:108513440kB, file-rss:21329088kB
>> >> >> >> >> >> >
>> >> >> >> >> >> > I wouldn't be surprised if my buffers are set too big.
>> >> >> >> >> >> > I am just not clear on how much RAM some sections use.
>> >> >> >> >> >> > Also, for the stream section, do you need to add the
>> >> >> >> >> >> > memcap and reassembly buffers together, or are they part
>> >> >> >> >> >> > of each other? As far as I understand, the reassembly
>> >> >> >> >> >> > buffer needs to be larger than the stream memcap.
>> >> >> >> >> >> >
>> >> >> >> >> >> > I have 132gb of RAM. When Suricata starts, it is using
>> >> >> >> >> >> > 64gb.
>> >> >> >> >> >>
>> >> >> >> >> >>
>> >> >> >> >> >> Which Suricata version are you using?
>> >> >> >> >> >> What is the total sum of the memcap values in your suricata.yaml?
>> >> >> >> >> >>
>> >> >> >> >> >>
>> >> >> >> >> >> >
>> >> >> >> >> >> >> Date: Thu, 20 Nov 2014 18:21:54 +0100
>> >> >> >> >> >> >> Subject: Re: [Oisf-users] Memory Allocations
>> >> >> >> >> >> >> From: petermanev at gmail.com
>> >> >> >> >> >> >> To: coolyasha at hotmail.com
>> >> >> >> >> >> >> CC: oisf-users at lists.openinfosecfoundation.org
>> >> >> >> >> >> >
>> >> >> >> >> >> >>
>> >> >> >> >> >> >> On Mon, Nov 17, 2014 at 3:45 PM, Yasha Zislin
>> >> >> >> >> >> >> <coolyasha at hotmail.com>
>> >> >> >> >> >> >> wrote:
>> >> >> >> >> >> >> > I am having issues with Suricata crashing due to
>> >> >> >> >> >> >> > running out of memory. I just wanted to clarify
>> >> >> >> >> >> >> > certain sections of the config, to make sure I am
>> >> >> >> >> >> >> > doing my calculations correctly.
>> >> >> >> >> >> >> >
>> >> >> >> >> >> >> > max-pending-packets 65000 ------- Does that use a lot
>> >> >> >> >> >> >> > of RAM?
>> >> >> >> >> >> >> >
>> >> >> >> >> >> >> > So for the defrag and flow sections, whatever memcap
>> >> >> >> >> >> >> > values I set, that's the maximum that can be used,
>> >> >> >> >> >> >> > correct?
>> >> >> >> >> >> >> >
>> >> >> >> >> >> >> > The stream section is a bit unclear to me. The memcap
>> >> >> >> >> >> >> > for stream and the memcap for reassembly - how do
>> >> >> >> >> >> >> > they relate? Which one should be bigger?
>> >> >> >> >> >> >> >
>> >> >> >> >> >> >> > The host section - once again, the memcap is the
>> >> >> >> >> >> >> > maximum RAM that would be used?
>> >> >> >> >> >> >> >
>> >> >> >> >> >> >> > And lastly, for the libhtp section: are the request
>> >> >> >> >> >> >> > and response body-limit values the maximum memory
>> >> >> >> >> >> >> > utilization of libhtp?
>> >> >> >> >> >> >> >
>> >> >> >> >> >> >> > Thanks.
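On the max-pending-packets question above, the packet pool cost is roughly the pending count times the per-packet buffer. A sketch, assuming the default default-packet-size of 1514 bytes and an assumed (version-dependent) per-packet metadata overhead:

```python
# Rough packet-pool footprint for max-pending-packets.
max_pending = 65000              # max-pending-packets from the question
default_packet_size = 1514       # suricata.yaml default-packet-size default
overhead = 700                   # ASSUMED per-packet metadata; version-dependent

pool_bytes = max_pending * (default_packet_size + overhead)
print(f"~{pool_bytes / 1e6:.0f} MB for the packet pool")
```

With these assumptions that is about 144 MB - noticeable, but small next to the multi-gigabyte stream and flow memcaps.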
>> >> >> >> >> >> >> >
>> >> >> >> >> >> >>
>> >> >> >> >> >> >>
>> >> >> >> >> >> >> Hi,
>> >> >> >> >> >> >>
>> >> >> >> >> >> >> You mean you are running into swap, correct?
>> >> >> >> >> >> >>
>> >> >> >> >> >> >> If you sum up all the memcap values you have given in
>> >> >> >> >> >> >> suricata.yaml
>> >> >> >> >> >> >> -
>> >> >> >> >> >> >> would that be less than what you actually have as RAM on
>> >> >> >> >> >> >> the
>> >> >> >> >> >> >> server
>> >> >> >> >> >> >> running Suricata?
>> >> >> >> >> >> >>
>> >> >> >> >> >> >> Thank you
>> >> >> >> >> >> >>
>> >> >> >> >> >> >>
>> >> >> >> >> >>
>> >> >> >> >> >>
>> >> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >>
>> >>
>> >>
>>
>>
>>



-- 
Regards,
Peter Manev


