
Re: [sc-users] Server time scheduling



Hi all,

I am currently deep into this, with the help of a set of sample-based scheduling UGens from blackrain, and I'm re-engineering our WFS system to use them. The approach relies on a UGen that outputs the number of control blocks elapsed since server boot. The offsets between these counts are measured for all remote server apps against one master server, using a single digital audio pulse (over an ADAT connection); word clock keeps the offsets stable over time. The way I'm implementing it now is to set up a paused synth, and then run a second synth that unpauses the first one at a specific absolute block position, which I compute from thisThread.seconds and the measured offsets. To that I add the sample offset within the block, as a delay, to get sample-tight sync. With this I've already managed to start synths on two remote machines simultaneously at 0.1 s latency, and will probably reach the same low latency as with regular scheduled events on a local server. I'm installing it on our system this week for further testing.
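For what it's worth, here is a minimal sketch in Python of the arithmetic described above: converting a desired start time into an absolute block position plus a within-block sample offset on a remote server, given its measured block-count offset. The function name, the 44.1 kHz sample rate, and the sign convention for the offset are all my assumptions for illustration; this is not the actual implementation.

```python
# Illustrative sketch (NOT the actual SC implementation) of turning a
# desired start time into a (block, sample-offset) pair for a remote
# server whose block counter is offset from the master's.

BLOCK_SIZE = 64          # samples per control block
SAMPLE_RATE = 44100.0    # assumed sample rate

def schedule_target(now_master_blocks, latency_sec, remote_offset_blocks):
    """Return (target_block, sample_offset) on the remote server's clock.

    now_master_blocks:    current block count on the master server
    latency_sec:          scheduling latency (e.g. 0.1 s)
    remote_offset_blocks: measured offset of the remote server's block
                          counter relative to the master's
    """
    blocks_ahead = latency_sec * SAMPLE_RATE / BLOCK_SIZE
    target = now_master_blocks + blocks_ahead + remote_offset_blocks
    target_block = int(target)                   # whole block: unpause here
    sample_offset = (target - target_block) * BLOCK_SIZE  # delay within block
    return target_block, sample_offset
```

At 0.1 s latency, 44.1 kHz and block size 64, the target lands about 68.9 blocks ahead; the fractional part (0.90625 of a block, i.e. 58 samples) becomes the delay that makes the start sample-tight.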

One of the main issues with this method is the limited precision of 32-bit floating point, which is why we use block counts instead of sample counts. Block counts stay accurate for approx. 6.5 hours at a block size of 64 samples; a sample count would last only 1/64 of that time before integer precision runs out. Besides that, there seem to be some issues with skipped samples at specific counter values.
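The precision limit quoted above follows from the 24-bit mantissa of a single-precision float, which can represent consecutive integers exactly only up to 2^24. A quick check of the numbers (assuming a 44.1 kHz sample rate, which the post does not state; at 48 kHz the figures come out a bit lower):

```python
# Sanity check of the 32-bit float precision claim: a single-precision
# float counts consecutive integers exactly only up to 2**24.

MAX_EXACT_INT = 2 ** 24       # 16,777,216
SAMPLE_RATE = 44100.0         # assumed sample rate
BLOCK_SIZE = 64

# How long each counter stays exact before integers start to be skipped:
hours_block_count = MAX_EXACT_INT / (SAMPLE_RATE / BLOCK_SIZE) / 3600
minutes_sample_count = MAX_EXACT_INT / SAMPLE_RATE / 60
```

Counting blocks buys exactly a factor of 64 (the block size) over counting samples: roughly 6.8 hours versus about 6.3 minutes at 44.1 kHz.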

Blackrain is actually quite far along with implementing a block-count based scheduling queue in the server itself, which would provide this functionality for bundled messages without the need to set up paused synths, and with the option of using OffsetOut. But maybe that should be explained by the master himself :-)

cheers,
Wouter

On 1 Dec 2008, at 18:16, nescivi wrote:

Hiho,

On Monday 01 December 2008 09:56:12 Sciss wrote:
should i repeat that an audio-driver-samplerate-based client clock
along with an scsynth option to interpret bundle times according to
that sample clock (instead of the CPU clock) would solve a few timing
problems? whoever invents that is going to be a hero. BTW: i remember
all those discussions about the network time used in OSC and how
difficult it is to get complete sync in the real world. i guess
word-clock-cabled audio interfaces provide exactly that kind of clock
for total sync in a networked setup.

wonder (swonder.sourceforge.net) does something like that for its timing between the control program (cwonder) and the renderer (twonder), using a support app which counts the JACK frames (jfwonder). It seems to be fairly tight.

sincerely,
Marije

_______________________________________________
sc-users mailing list

info (subscription, etc.): http://www.beast.bham.ac.uk/research/sc_mailing_lists.shtml
archive: https://listarc.bham.ac.uk/marchives/sc-users/
search: https://listarc.bham.ac.uk/lists/sc-users/search/

