
Re: [sc-dev] Another potential Buffer method *streamCollection



OK, just pasting that class method into Buffer.sc and
executing the code example you posted, I got this
error:

2000000
ERROR: Message 'addToServerArray' not understood.

 This is from a CVS update done tonight, so it looks like the method
relies on a Buffer:addToServerArray helper that isn't in my checkout.

 --- Charlls Quarra <charlls_quarra@xxxxxxxxxxxx> wrote: 
> 
>  I know I can just paste the code inside the Buffer class
> declaration, but to avoid redoing something you may already have
> done, I was wondering whether you have already committed this to
> CVS
> 
> 
>  --- Scott Wilson <sdwilson@xxxxxxxxxxxx> wrote: 
> > I had considered wrapping this all up in setn, but decided against
> > it. setn as it stands is pretty close to its OSC equivalent, and
> > what I've implemented does have more overhead. I figure you pretty
> > much should know how big the collection you're trying to send is,
> > as if you don't, it might be bigger than the buffer anyway. If in
> > doubt, however, one could use my method to be sure. If it's smaller
> > than the packet limit it will go in one OSC message.
> > 
> > Does anyone feel strongly that this should be
> > wrapped into setn?
> > 
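For reference, this is roughly what "setn as it stands" amounts to for a
collection small enough to fit in one packet. Nothing here is from the
patch; it is just the existing Buffer:setn interface (run line by line):

	s.boot;
	b = Buffer.alloc(s, 1024);                         // wait for the alloc to complete
	b.setn(0, Array.fill(1024, { rrand(0.0, 1.0) }));  // one setn call, one b_setn message
	b.get(1023, { |msg| msg.postln });                 // spot-check the last sample
	b.free;
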
> > The other thing is that by having separate methods
> > it makes it clear 
> > what's happening. I'm going to add instance method
> > versions of both 
> > this and the file writing method I wrote, so that
> > gives you choice with 
> > large data sets on a local machine.
> > 
> > S.
> > 
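Scott's file-writing method isn't shown in this thread, but as a rough
sketch of the local-machine alternative he mentions (the path and header
settings below are only illustrative), one can write the collection to a
sound file with the existing SoundFile class and let the server read it:

	(
	var data, file, path;
	data = Array.fill(2000000, { rrand(0.0, 1.0) });
	path = "/tmp/streamCollection_demo.aiff";    // illustrative path
	file = SoundFile.new
		.headerFormat_("AIFF")
		.sampleFormat_("float")
		.numChannels_(1);
	if(file.openWrite(path), {
		file.writeData(data.as(Signal));         // dump the collection to disk
		file.close;
		b = Buffer.read(s, path);                // only works if server and client share a filesystem
	}, {
		("could not open " ++ path).warn;
	});
	)
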
> > On 22 Nov 2004, at 18:33, Charlls Quarra wrote:
> > 
> > >
> > > This sounds great. It occurred to me that it is not good design
> > > not to provide the chunk-splitting functionality in the
> > > Buffer.setn method itself, or at least I can't think of a reason
> > > why one would not want such splitting to be done there (unless
> > > this is some kind of optimization, but I really have no idea)
> > >
> > >
> > >  --- Scott Wilson <sdwilson@xxxxxxxxxxxx> wrote:
> > >> Okay, since the fromCollection method I added in my previous
> > >> candidate won't work for sending large amounts of data to a
> > >> non-local machine, I thought I'd try to come up with a method
> > >> which chops it into convenient chunks:
> > >>
> > >> 	*streamCollection { arg server, collection, wait = 0.0, action;
> > >> 		var collstream, buffer, func, collsize, bufnum, bundsize, pos;
> > >> 
> > >> 		collstream = CollStream.new;
> > >> 		collstream.collection = collection;
> > >> 		collsize = collection.size.postln;
> > >> 		server = server ? Server.default;
> > >> 		bufnum = server.bufferAllocator.alloc(1);
> > >> 		buffer = super.newCopyArgs(server, bufnum, collection.size, 1)
> > >> 			.addToServerArray.sampleRate_(server.sampleRate);
> > >> 
> > >> 		// this will wait for synced.
> > >> 		{
> > >> 			// 1626 largest setn size with an alloc
> > >> 			bundsize = min(1626, collsize - collstream.pos);
> > >> 			server.listSendMsg(buffer.allocMsg({ ["b_setn", bufnum, 0, bundsize]
> > >> 				++ Array.fill(bundsize, { collstream.next }) }));
> > >> 
> > >> 			// wait = 0 might not be safe
> > >> 			// maybe okay with tcp
> > >> 			pos = collstream.pos;
> > >> 			while({ pos < collsize }, {
> > >> 				wait.wait;
> > >> 				// 1633 max size for setn under udp
> > >> 				bundsize = min(1633, collsize - collstream.pos);
> > >> 				server.listSendMsg(["b_setn", bufnum, pos, bundsize]
> > >> 					++ Array.fill(bundsize, { collstream.next }));
> > >> 				pos = collstream.pos;
> > >> 			});
> > >> 
> > >> 			action.value(buffer);
> > >> 
> > >> 		}.fork(SystemClock);
> > >> 
> > >> 		^buffer;
> > >> 	}
> > >>
> > >> Then you can go:
> > >>
> > >> s.boot;
> > >>
> > >> (
> > >> a = Array.fill(2000000,{ rrand(0.0,1.0) });
> > >> c = CollStream.new;
> > >> c.collection = a;
> > >> b = Buffer.streamCollection(s, a, 0.0, { arg buf; "finished".postln; });
> > >> )
> > >> b.get(1999999, { |msg| [msg, a[1999999]].postln });
> > >>
> > >> b.free;
> > >>
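As a rough check on the traffic this example generates (using the 1626 and
1633 chunk sizes from the method above), the 2000000-sample Array goes out
in roughly 1225 b_setn messages:

	(
	var collsize = 2000000, firstChunk = 1626, chunk = 1633;
	(((collsize - firstChunk) / chunk).ceil + 1).postln;   // -> 1225.0
	)
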
> > >> I thought the wait time would be safest, but this works with a
> > >> 2000000-sized Array on my machine with a wait of 0, so maybe it's
> > >> less of an issue than I thought. Could maybe be more elegant.
> > >>
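A minimal sketch of one way around guessing a safe wait (this is not part
of Scott's method, and it assumes a Server:sync round trip is available in
the checkout you are using): block on /sync after each chunk inside a
routine, so the language only proceeds once the server has processed it.

	(
	var data, chunk = 1633, pos = 0;
	data = Array.fill(20000, { rrand(0.0, 1.0) });
	b = Buffer.alloc(s, data.size);
	fork {
		s.sync;                                  // block until the alloc has completed
		while({ pos < data.size }, {
			var n = min(chunk, data.size - pos);
			s.listSendMsg(["b_setn", b.bufnum, pos, n]
				++ data.copyRange(pos, pos + n - 1));
			pos = pos + n;
			s.sync;                              // acknowledged before the next chunk goes out
		});
		"all chunks sent".postln;
	};
	)
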
> > >> Thoughts, comments, and criticisms appreciated.
> > >>
> > >> S.
> > 
> 
=== message truncated === 

=====
Running on:
1.5 GHz P4
256 MB
asus v800x chipset
RH9 CCRMA-patched linux