/*
I have a question about a possible concept of combining live
coding of sound with prepared GUI elements/sequencers.
I am not sure if, and how, it would for example be possible to write
a sound using JITLib that is then used by the GUI sequencer
interface. Can a ProxySpace exist simultaneously with a 'normally'
written sequencer program?
If yes, is it possible to make the sequencer send stuff to the
nodes written in JITLib?
Does anyone have experience with this?
Any help would be appreciated.
Karsten
*/
Normally in SC you can execute any expression either by hand or from
a given function. So in ProxySpace, too, you can of course set your
parameters from a GUI. You might want to read the jitlib_efficiency
tutorial, just to know how to design your system efficiently.
You can always set controls, e.g.
~someProxy.setn(\freqs, [.......]) // <- values from sliders
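For example, a minimal sketch of the idea (the proxy name ~melody, the control name \freq, and the slider mapping are just illustrative, not from the original post):

```supercollider
// push a ProxySpace and define a sound in JITLib style
p = ProxySpace.push(s.boot);
~melody = { |freq = 440, amp = 0.1| SinOsc.ar(freq, 0, amp) };
~melody.play;

// a prepared GUI slider that sets the proxy's control,
// exactly as it would set any other node's control
(
var w = Window("freq", Rect(100, 100, 300, 60)).front;
Slider(w, Rect(10, 10, 280, 30))
    .action_({ |sl|
        // map slider range 0..1 exponentially to 100..1000 Hz
        ~melody.set(\freq, sl.value.linexp(0, 1, 100, 1000));
    });
)
```

A sequencer written outside ProxySpace can do the same thing: anything that can evaluate an expression (a Task, a Pattern callback, a GUI action) can call set/setn on the proxy.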
--
_______________________________________________
sc-users mailing list
sc-users@xxxxxxxxxxxxxxx
http://www.create.ucsb.edu/mailman/listinfo/sc-users