
Re: [Sc-devel] [RFC] sample accurate scheduling



Hi,
We had that problem with the WFS project as well. The work-around is indeed to have an impulse clock running on the server, and I wrote a UGen which unpauses a range of node IDs. But it would be great, and much easier, to have a sample-based OSC time scheduler on the server, which shouldn't be that hard to implement (I actually started looking into this, but for lack of time haven't worked on it further). Instead of relating to network time, the OSC bundle would relate to sample time taken from the hardware.
I guess that would also fix the fading issue Sciss is talking about.
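A rough sketch of that impulse-clock idea using only stock UGens (Pause drives a single node ID rather than a range like my UGen does; the bus and node numbers are made up for illustration):

(
// the clock: a Phasor counting samples onto an audio bus
SynthDef("sampleClock", { arg clockBus = 100;
        Out.ar(clockBus, Phasor.ar(0, 1, 0, 2.pow(32) - 1));
}).send(s);

// resumes a node once the clock passes targetSample; the target node
// must have been created paused (n_run 0) beforehand
SynthDef("resumeAt", { arg clockBus = 100, targetSample = 44100, targetID = 2001;
        var go;
        go = A2K.kr(In.ar(clockBus) >= targetSample);
        Pause.kr(go, targetID);   // the unpausing itself is block-quantized
}).send(s);
)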
Jan


On Nov 28, 2007, at 2:44 PM, Sciss wrote:

hi blackrain,

thanks for your considerations. in fact, just after my post I realized the problem of the CPU clock versus the audio-interface clock, which makes things really complicated. Can't the industry build us audio pros some kind of wordclock interface to slave the CPU clock to our audio interface?? ;-))

Well, joking apart, maybe SystemClock could in fact take its clock from the audio interface? I don't know how complicated that would be... If that worked, I think the whole problem would be solved, no? (well... if the client is not sclang, the problem is still there)

I think it would be good to collect problematic scenarios and check the proposed solutions against them, to see whether they are actually solved...

Here's one I've got:

- imagine a multitrack editor with audio regions placed on a timeline
- a region A is placed on the first track at time 0:00.000; its length is 10 seconds
- I want to crossfade the region with a filtered version of itself, say it was bounced using an HPZ1 filter
- so I place region B on the second track at 0:00.000, make a cut at 5 seconds (that's where I want the crossfade to begin), and delete the first half
- I apply 5 seconds of fade-out to region A and 5 seconds of fade-in to the cut second half of region B

the problem is to schedule the playback of region B so that it is sample-synchronous with its region A counterpart; otherwise I get comb-filter effects.


the only solution - which makes the application logic really complicated - is to have a phasor synth run on the server side that drives the double-buffered BufRds (a rough sketch follows below). of course, as soon as other synths are involved (e.g. the HPZ1 was not rendered into the soundfile but is created in realtime), the problem of sample-accurate n_run scheduling comes back... again, a solution here - with, again, difficult application logic - could be to schedule the synth paused in advance and have a trigger UGen, driven by the phasor, perform the n_run 1 for the filter. So we would need something like additional doneActions:
"15	resume all following nodes in this group"

etc.

??
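here is a minimal sketch of that shared-phasor idea, assuming both regions were bounced to full-length buffers; the buffer numbers, the frame counts (5 s at 44.1 kHz) and the simple linear crossfade are made up for illustration:

(
SynthDef("xfadeRegions", { arg bufA = 0, bufB = 1, out = 0, fadeStart = 220500, fadeLen = 220500;
        var pos, fade, a, b;
        // one phasor clocks both BufRds, so A and B stay sample-locked
        pos = Phasor.ar(0, BufRateScale.kr(bufA), 0, BufFrames.kr(bufA));
        fade = ((pos - fadeStart) / fadeLen).clip(0, 1);  // 0 until 5 s, 1 from 10 s
        a = BufRd.ar(1, bufA, pos);
        b = BufRd.ar(1, bufB, pos);
        Out.ar(out, (a * (1 - fade)) + (b * fade));
}).send(s);
)

the real-world difficulty - double-buffering from disk and synths created in realtime - is exactly what this sketch glosses over.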

ciao, -sciss-



On Nov 28, 2007, at 4:33 AM, blackrain wrote:

Hello everyone,
After some emails and quite a few tests together with Mr Joshua
Parmenter and Mr Ryan Brown, we came to the conclusion that the
scheduling model based on OSC time stamps (host time stamps) renders
inaccurate results, primarily because of hardware-interface
fluctuations from the nominal sample rate.

So far SuperCollider's model of sample-accurate synthesis has been
based on a naive, Nyquist-like assumption about OSC: the protocol's
time resolution is at least twice as fine as the sample period (in
fact it is much finer), so a time stamp can always address an exact
sample.

However, and sadly for us mere mortals, hardware interfaces' sample
rates drift (1).
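A quick way to see the drift that footnote (1) refers to, comparing
the nominal and the measured rate:

s.sampleRate.postln;        // nominal rate, e.g. 44100
s.actualSampleRate.postln;  // measured rate - rarely exactly nominal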

The OSC model expects a world where a sample rate of 44.1 kHz means
exactly 44100 samples per second.

Real life shows that unless you are chained to a $3k house-sync word
clock or use a $1-2k interface, this won't happen (and then I smile at
subsample accuracy =) )

The end result is that OSC bundles will target calculated sample times
which are best guesses.

After a bit, the three of us thought it may be better to use OSC as a
sleeve for our purpose and schedule a packet on a sample-count delta -
something OSC's original implementation never considered.

The idea is simple and here is the proposal:

The language does not need to know the interface's sample count, only
how many samples from 'now' I want a bundle to take effect.

We can use a hack of OSC's current implementation plus a new
primitive:

NetAddr:sendSampleBundle

...and the end use for the rest of us is:

s.sendSampleBundle(44100.0, bundle...

(notice the float - subsample accuracy for us to deal with next.)

Then, Server:sendBundle could convert a time stamp into a sample value
for the future:

       s.sendBundle(0.1, ... )
               becomes
       s.sendSampleBundle(4410.0,  ....)
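A minimal sketch of that conversion as a class extension, assuming
the NetAddr:sendSampleBundle primitive existed (it doesn't yet - that
is the proposal); both method bodies are made up:

+ Server {
        sendSampleBundle { arg sampleDelta ... msgs;
                // forward to the proposed NetAddr primitive
                this.addr.performList(\sendSampleBundle, sampleDelta, msgs);
        }
        sendBundleAsSamples { arg time ... msgs;
                // e.g. 0.1 s * 44100 samples/s = 4410.0 samples
                this.performList(\sendSampleBundle, time * this.sampleRate, msgs);
        }
}

Then s.sendBundleAsSamples(0.1, ...) would end up as the 4410.0-sample
call shown above.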


Here are some facts about the current interface:

(
        k = 2.pow(32) - 1;

        // counts samples via a Phasor; whenever an impulse arrives on
        // request_bus, SendTrig reports the current count to the client
        SynthDef("counter", { arg request_bus = 120, reset_bus = 121;
                SendTrig.ar(In.ar(request_bus), 1010,
                        Phasor.ar(In.ar(reset_bus), 1, 0, k, 0));
        }).send(s);

        // fires a single sample-accurate impulse on the request bus,
        // then frees itself after 0.1 seconds
        SynthDef("ctrl", { arg bus = 120;
                Line.kr(0, 1, 0.1, doneAction: 2);
                OffsetOut.ar(bus, Impulse.ar(0));
        }).send(s);
)
(
        // collect the reported sample counts into l
        r = OSCresponder(s.addr, '/tr', { arg t, r, m;
                l = l.add(m.at(3));
                m.postln;
        }).add;
)

s.sendBundle(0.1, [9, "counter", 2000, 0, 1, "request_bus", 120]);
(
l = nil;
p = Routine({
        loop {
                // ping the counter once per second of logical time
                s.sendBundle(0.1, [9, "ctrl", 2001, 0, 1, "bus", 120]);
                1.wait;
        }
}).play(SystemClock);
)
p.stop

// each difference approximates the actual samples per second -
// exactly 44100 only at a perfectly nominal rate
(l.size - 1).do({ arg n; (l[n+1] - l[n]).postln; });

Now bringing Server:actualSampleRate into play:
(you may need to run this several times till you get a good value for k; reassigning k here is safe, since the "counter" SynthDef has already been built and sent)

k = (s.sampleRate / s.actualSampleRate);
(
l = nil;
p = Routine({
        loop {
                s.sendBundle(0.1, [9, "ctrl", 2001, 0, 1, "bus", 120]);
                k.wait;   // compensated wait instead of 1.wait
        }
}).play(SystemClock);
)
p.stop

// with the compensated wait the differences should sit at the nominal 44100
(l.size - 1).do({ arg n; (l[n+1] - l[n]).postln; });


The point is, we can now sort of tell (within the bounds exposed
above) when a hardware interface's rate doesn't fluctuate too badly,
and forecast a sample position.
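The forecast itself is just this arithmetic; ~currentCount is a
made-up placeholder for a count reported by the counter synth above:

(
~currentCount = 123456;   // in practice: the last value reported via /tr
~delta = 0.1;             // seconds into the future
(~currentCount + (~delta * s.actualSampleRate)).postln;  // best-guess target sample
)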

Yet this is not good enough when we want to schedule packets far in
the future, schedule events based on an anchor obtained in the past
from a foreign source, or, yes, perform sample-accurate synthesis.

So it seems that what we really need is a sample-based spawn model.

The Mac's built-in interface seems to be quite solid, but stressing
the processor makes things look odd (2). The Linux ALSA drivers for
the Mac's built-in hardware seem 'ok', though I want to see them under
stress - the first 3-5 sample figures are way off the mark.

I can speak for what I have, an 001 - d!g!design with focusr!te
converters chained in: I get rock-solid timing under 10.2.8, zero
drift. I use it under 10.4.10, a hack install on a 1.5 GHz G4 with
2 GB RAM and 2 MB L3 cache (001's were never meant to be around this
long), and the card's timing SUCKS. It works kewl, you would not
notice it, but still.

We would be more than glad to hear other suggestions or other ways we
could possibly go about this.

Josh remembers that Sciss was having problems with sample accuracy
also... Sciss - did you figure out a way around this???

1 - s.actualSampleRate
2 - the HAL time-stamp generator does a grand job, even under
overload, but somehow it _still_ drifts a bit when pushed.

regards,

x
_______________________________________________
Sc-devel mailing list
Sc-devel@xxxxxxxxxxxxxxx
http://www.create.ucsb.edu/mailman/listinfo/sc-devel
