
Re: [Sc-devel] [RFC] sample accurate scheduling



Hi,

Let's definitely work on this. I want to hunker down on fixing the early termination issues in patterns, so I may need to wait a few weeks before I can really get on the case. If you want to push ahead in the meantime, please do, and keep me updated. (I have not even looked into how TempoClock is implemented yet.)


RJK


On Nov 29, 2007, at 5:58 PM, blackrain wrote:

Hi Ron,

The method you describe at the beginning of your email is 99% of what I
am currently using. The only difference is that I mark the thread's
seconds at the time of spawning the counter and take a first delta
when the first impulse sample arrives. This is because of the app this
system runs in; I get absolute sample positions of events from OSC
messages.

You are right, the method has been pretty accurate so far.

The blocking TempoClock idea sounds very interesting.
It would be good to experiment with it and see the results and
whatever system overhead it may imply.
If that worked, sendSampleBundle would not be needed at all!
I would be more than glad to help with testing and/or coding, Ron.

The implementation I have been outlining is a draft, and scheduling
packets in the medium- and long-term future is still a problem under
that model. You are totally right.

It seems to me that what I have described so far only gives us half of
what we really need. The real problem in scheduling packets
sample-accurately lies in the fact that our concept of time is still
tied to the SystemClock.

Scott's comment about JMC's talk in The Hague made me think (I wasn't
there, but I dreamed a bit). So here is another possible approach...

Why don't we register the language as a CoreAudio client (JACK for the
Linux people)? That's all we need to get the interface sample count.
We could then implement a SampleClock on top of that.
Here are some of the possibilities:

SampleClock.new(device, options)
SampleClock.free
SampleClock.seconds
SampleClock.samples
SampleClock.sched and schedAbs
etc...

Time for SampleClock will advance based on the hardware interface's
sample steps, i.e. 1 second will be 44100 steps at a 44.1 kHz sample rate.
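The idea above can be sketched in a few lines. This is a language-agnostic illustration (the class name and methods are hypothetical, not part of any proposed sclang API): time is derived purely from a counter advanced by the audio hardware, so "seconds" are exactly sample counts divided by the sample rate.

```python
# Hypothetical sketch: a time source whose "seconds" are derived purely
# from the hardware sample counter, never from the system clock.
class SampleTime:
    def __init__(self, sample_rate):
        self.sample_rate = sample_rate
        self.samples = 0  # advanced by the audio callback

    def advance(self, n_samples):
        # called once per hardware buffer with the buffer size
        self.samples += n_samples

    @property
    def seconds(self):
        # 1 "second" is exactly sample_rate steps, e.g. 44100 at 44.1 kHz
        return self.samples / self.sample_rate

clock = SampleTime(44100)
clock.advance(44100)
print(clock.seconds)  # 1.0
```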

We would be able to use SampleClock just like we use SystemClock or a
TempoClock; the patterns framework, tasks, routines, everything would
work the same, except that we could use sendSampleBundle (or a new
version of sendBundle that allows choosing a scheduling clock for
bundles) and be sample accurate.
We could schedule packets as far in the future as we want and it would
not fail.


x

On Nov 29, 2007 6:46 AM, ronald kuivila <rkuivila@xxxxxxxxxxxx> wrote:
Hi all,

Here is a possible approach to long term synchronization:

- The server sends an incremental samplesComputed number at some interval.
- The OSCresponder subtracts that value from samplesProvidedByClock.
- The clock adds samplesComputedInInterval to samplesProvidedByClock and
blocks if that value exceeds maximumComputeAhead.

If the server and the language are running on the same machine, you
can almost certainly allow maximumComputeAhead to be a negative number
corresponding to the maximum server latency (i.e., the server will be
late with its reassurances that everything is ok, and the system can
make an allowance for that).

For this to work, the clock runs a little fast relative to the
server's physical sample rate (not at a higher sample rate; it just
thinks time is passing a delta faster).
We don't need a fancy PLL; just setting the clock to prevent
starvation is enough.
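The blocking mechanism described in the steps above can be sketched roughly as follows. All names here are hypothetical (taken from the variable names in this email, not from any real implementation): the language keeps a budget of samples it is allowed to run ahead of the server, the server's periodic reports replenish that budget, and scheduling blocks when the budget is exhausted.

```python
import threading

# Hedged sketch of a "TempoClock that blocks": the scheduler may only
# run ahead of the server by max_compute_ahead samples. Periodic
# samplesComputed reports from the server unblock it.
class BlockingSampleScheduler:
    def __init__(self, max_compute_ahead):
        self.max_compute_ahead = max_compute_ahead
        self.samples_provided = 0  # what the language has scheduled so far
        self.samples_computed = 0  # what the server reports as computed
        self.cond = threading.Condition()

    def on_server_report(self, samples_computed_in_interval):
        # called by the OSC responder when the server's report arrives
        with self.cond:
            self.samples_computed += samples_computed_in_interval
            self.cond.notify_all()

    def provide(self, n_samples):
        # called by the clock before scheduling n_samples worth of events;
        # blocks while we would exceed the allowed compute-ahead
        with self.cond:
            while (self.samples_provided + n_samples
                   - self.samples_computed) > self.max_compute_ahead:
                self.cond.wait()
            self.samples_provided += n_samples
```

As the email notes, a negative max_compute_ahead would make the language wait for reports that are allowed to arrive slightly late, absorbing server latency.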

Another question is the amount of clock jitter acceptable at the
language end. This really depends on what else is happening. For
graphics, 10 msec will be fine; for parallel musical systems ("MIDI",
by which I mean non-sample-accurate performance on another server) you
probably want that tightened down to 1 msec. If the sample clock
variance is +-0.5%, then the worst case is 1%, so 1-second updates
would suffice for graphics and 1/10-second updates for MIDI. (I guess
we need to fold in the maximum discrepancy between the language clock
and the sample clock to make this fully accurate, but this is a
reasonable ballpark figure.)
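The arithmetic behind those figures is just tolerance divided by drift. A quick check, using the assumed numbers from the paragraph above (1% worst-case relative drift between the two clocks):

```python
# With a worst-case relative drift of 1%, the accumulated timing error
# grows by 10 ms per elapsed second, so the update interval needed to
# keep the error under a given jitter tolerance is tolerance / drift.
def max_update_interval(jitter_tolerance_s, worst_case_drift):
    return jitter_tolerance_s / worst_case_drift

print(max_update_interval(0.010, 0.01))  # graphics tolerance -> 1.0 second
print(max_update_interval(0.001, 0.01))  # "MIDI" tolerance -> ~0.1 second
```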

        So long-term synchronization boils down to a TempoClock that
blocks. This seems a little easy to me, so it may be nonsense.
If so, tell me why!

RJK




On Nov 28, 2007, at 7:26 PM, blackrain wrote:

Since the language has no idea how much time a sample step takes for
the synth, we give an offset in samples; the synth will make the
bundle effective based on that delta.

If we schedule a synth spawn for 1 second in the future, scsynth will
execute the event 1 second from the time of the request, but we have
no guarantee that 1 second will be exactly 44100 samples from now (at
44.1 kHz).

Scheduling a synth spawn 44100 samples from now will be exactly 44100
samples from now.

We don't really need to know a sample count in the language. Latency
will allow us to play with sample delta times and always land exactly
where we want.
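The contrast between the two scheduling modes is easy to see numerically. This illustration uses a made-up drift figure (a hardware clock running 0.1% fast); the point is only that a delta in seconds drifts with the clock while a delta in samples does not.

```python
# Why "1 second from now" and "44100 samples from now" are not the same
# event. Suppose the interface's clock runs 0.1% fast (hypothetical).
nominal_rate = 44100
actual_rate = nominal_rate * 1.001  # 0.1% fast hardware clock

# One wall-clock second then covers more than 44100 hardware samples,
# so a bundle scheduled "1 second ahead" via the system clock misses:
samples_in_one_wall_second = actual_rate * 1.0

# A delta expressed in samples is immune to the drift: it lands on
# exactly that sample, whatever the hardware clock is doing.
now_sample = 123456
target_sample = now_sample + 44100  # exactly 44100 samples from now
```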

If there were the need, a PLL approach might work to obtain an
interface sample count, but it would probably turn out a bit
expensive, depending of course on the sync rate and the application.

In fact, we can still sort of calculate deltas in time (within bounds,
as in the examples I posted) but schedule in samples, and we can be
sure the result will land where expected.

What I meant about the one-sample impulse was a way to sync other apps
that may be connected through digital audio lines to the synth; this
applies to other hosts too.
In that case the CoreAudio clock for both interfaces (the same digital
clock) will advance at the same rate.
For a setup like that, all we need is a one-sample impulse from the
foreign app to tell where play was engaged, for example.

#sbndle (thinking a bit; still 8 bytes) will not break the current
standard implementation.
The server will know it is receiving a bundle that states a delta
in samples.
We can use a Float64 to state deltas and be able to handle sub-sample
accuracy instead of the usual 64-bit integer used to form time stamps.
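To make the size argument concrete, here is a hypothetical encoding sketch of that idea. Nothing here is a real OSC variant or an agreed wire format; it only shows that an "#sbndle" tag stays at 8 bytes like OSC's "#bundle" tag, and that a big-endian Float64 sample delta occupies the same 8 bytes as the standard NTP timetag while allowing sub-sample offsets.

```python
import struct

# Hypothetical layout: 8-byte tag (like "#bundle\x00") followed by an
# 8-byte big-endian Float64 delta in samples instead of an NTP timetag.
def pack_sbndle_header(sample_delta):
    tag = b"#sbndle\x00"  # 7 chars + NUL = 8 bytes, same as "#bundle\x00"
    return tag + struct.pack(">d", sample_delta)

header = pack_sbndle_header(44100.5)  # 44100.5 samples in the future
print(len(header))  # 16, same header size as a standard OSC bundle
```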


x

On Nov 28, 2007 5:52 PM, Sciss <contact@xxxxxxxx> wrote:
Maybe you mean a kind of PLL (phase-locked loop) or servo-controlled
clock on the sclang side? So it queries the current scsynth clock at
intervals and adjusts itself accordingly, so that s.latency for
#sbundle's stays in a safe range?

Am 29.11.2007 um 00:43 schrieb Sciss:


Sounds interesting, but (maybe it's too late in the night) I don't
yet get how the #sbundle approach is working.

Am 29.11.2007 um 00:32 schrieb blackrain:
[...]
All that is needed, as Jan states, is a one-sample impulse and a
responder to tell where the synth is.

_______________________________________________
Sc-devel mailing list
Sc-devel@xxxxxxxxxxxxxxx
http://www.create.ucsb.edu/mailman/listinfo/sc-devel
