
Re: [sc-users] Raspberry Pi 3 - server crashes above 65% CPU



Hi Eric, it's a good question and someone better qualified might be able to answer, but if I run the following lines (on this 8-core machine) in a single sclang client:

n = NetAddr("127.0.0.1", 57200);
~s1 = Server.remote("serv1", n );
~s1.boot;
n = NetAddr("127.0.0.1", 57201);
~s2 = Server.remote("serv2", n );
~s2.boot;
n = NetAddr("127.0.0.1", 57202);
~s3 = Server.remote("serv3", n );
~s3.boot;
n = NetAddr("127.0.0.1", 57203);
~s4 = Server.remote("serv4", n );
~s4.boot;

and then run "ps -ef | grep scsynth" in a Linux terminal to get the four individual PIDs, running "taskset -cp PID" on each one gives:

current affinity list: 0-7

So each PID gets a range rather than a specific core, and presumably there's no telling on which core each server is running. Whereas when each server is launched via the taskset utility with a specific core number, "taskset -cp PID" returns a specific and unique core number (as set in the script) for each server.
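A minimal sketch of that check, assuming the util-linux taskset and using sleep as a stand-in for a long-running scsynth:

```shell
# Start a process pinned to core 0, then read back its affinity.
taskset -c 0 sleep 5 &
pid=$!
taskset -cp "$pid"      # reports: pid <pid>'s current affinity list: 0
# The same form re-pins an already-running process (first argument is the core list):
taskset -cp 0 "$pid"
kill "$pid"
```

Without the leading core argument, "taskset -cp PID" only queries the affinity; with it, the affinity is changed.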

Cheers,

Iain

On 12/01/2018 13:22, jymminbusch@xxxxxxxxx wrote:
Hi Iain,

I’m curious:
Why don't you start the four servers from one sclang client?
Will they be created on the same core then?

Best,
Eric



On 12.01.2018 at 14:39, mott@xxxxxxxxxx wrote:

Found a solution: increase the maxLogins option of each server to 2. It seems that on the Pi, when sclang is killed off after the creation of each server, the client remains attached; on the desktop Ubuntu machine it doesn't.

I'm using the following to kill sclang after each server is booted:

s.waitForBoot {
    "killall -9 sclang".unixCmd
}

I tried using 0.exit, but that had the unwanted effect of also killing scsynth.

Is there a better way?
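One possible alternative (an untested sketch, not something confirmed in this thread): skip sclang at launch time altogether and start each scsynth directly under taskset, so there is no sclang process to kill. The flag values below mirror those visible in the ps output quoted further down; adjust as needed:

```shell
#!/bin/bash
# Untested sketch: launch four scsynth instances directly, one per core,
# instead of booting each via sclang and then killing sclang.
killall -9 scsynth 2>/dev/null
for core in 0 1 2 3; do
    port=$((57200 + core))
    taskset -c "$core" scsynth -u "$port" -a 1024 -i 2 -o 2 -b 1026 \
        -H "SC Server $((core + 1))" &
done
```

Each server could then be reached from sclang via Server.remote with the matching NetAddr, as elsewhere in this thread.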

I'm happy at least that it's working and I can run synths on the 4 cores.

All the best,

Iain


On 12/01/2018 10:42, mott@xxxxxxxxxx wrote:

Adding to my last message: the curious thing is that if I do exactly the same thing on my desktop computer, it works. Any reason why it doesn't work on the Pi?

I also tried using only 2 servers on the pi (on cores 0 and 1 and again on cores 0 and 2) but the same error results: "could not register, too many users".

Any ideas?

Thanks


___________________

Many thanks sludgefree. I managed to get servers running on the 4 cores of the Pi 3. I've pasted my method at the end of this message. The only problem is that I don't know how to access the servers once they're all created. For example, the following command in a terminal shows that the 4 servers are running (and I've tested them with taskset: they are associated with each of the 4 cores of the Pi 3):

pi@raspberrypi:~/scripts $ ps -ef | grep scsynth
pi        2431     1  1 11:41 pts/0    00:00:08 scsynth -u 57200 -a 1024 -i 2 -o 2 -b 1026 -H SC Server 1 -R 0 -C 0 -l 1
pi        2510     1  1 11:41 pts/0    00:00:08 scsynth -u 57201 -a 1024 -i 2 -o 2 -b 1026 -H SC Server 2 -R 0 -C 0 -l 1
pi        2585     1  1 11:41 pts/0    00:00:07 scsynth -u 57202 -a 1024 -i 2 -o 2 -b 1026 -H SC Server 3 -R 0 -C 0 -l 1
pi        2659     1  1 11:41 pts/0    00:00:07 scsynth -u 57203 -a 1024 -i 2 -o 2 -b 1026 -H SC Server 4 -R 0 -C 0 -l 1
pi        2860   715  0 11:51 pts/0    00:00:00 grep --color=auto scsynth

The first server was created with NetAddr("127.0.0.1", 57200) - see below.

Running sclang again, I thought then that I might be able to create an instance of the server with the following:

n = NetAddr("127.0.0.1", 57200);
s = Server.remote("s1", n )

This however results in the following:

s1 : setting clientID to 0.
-> s1
Requested notification messages from server 's1'
s1 - could not register, too many users.

It also kills off Server 1 running on core 1.

Any suggestions, please, on how I can access the servers created on the four cores?

I'll paste below the technique I used to create the 4 servers.

All the best,

Iain


Created 4 .scd files.

Contents of launchserver1.scd:

n = NetAddr("127.0.0.1", 57200);
s = Server.new("s1", n);
o = s.options;
o.device = "SC Server 1";
s.waitForBoot {
    "killall -9 sclang".unixCmd
}

Contents of launchserver2.scd:

n = NetAddr("127.0.0.1", 57201);
s = Server.new("s2", n);
o = s.options;
o.device = "SC Server 2";
s.waitForBoot {
    "killall -9 sclang".unixCmd
}

Contents of launchserver3.scd:

n = NetAddr("127.0.0.1", 57202);
s = Server.new("s3", n);
o = s.options;
o.device = "SC Server 3";
s.waitForBoot {
    "killall -9 sclang".unixCmd
}

Contents of launchserver4.scd:

n = NetAddr("127.0.0.1", 57203);
s = Server.new("s4", n);
o = s.options;
o.device = "SC Server 4";
s.waitForBoot {
    "killall -9 sclang".unixCmd
}

......

Then to launch everything, a bash script with the following:

#!/bin/bash
killall -9 sclang scsynth
/usr/bin/taskset -c 0 sclang /home/pi/scripts/launchserver1.scd
/usr/bin/taskset -c 1 sclang /home/pi/scripts/launchserver2.scd
/usr/bin/taskset -c 2 sclang /home/pi/scripts/launchserver3.scd
/usr/bin/taskset -c 3 sclang /home/pi/scripts/launchserver4.scd
exit 0




On 11/01/2018 17:42, sludgefree@xxxxxxxxx wrote:
This is all experimental, so please forgive the lack of specifics:

I wrote a script that would kick off the first server, which would run a bunch of nodes, and then exit the script with sclang running in the background.

I had a second script that I would run after that for the second server. Linux automatically started the second server on its own core.

I am also not sure what's up with only needing two servers; it's possible that my "quad" core is really a physical dual core, with something like hyperthreading on top of it :)

If I were using this method on a regular basis, I would probably figure out a way to merge the scripts, but most of what I do just doesn't require that much power.

For your purposes, you may want to look into pinning a process to a given core: https://baiweiblog.wordpress.com/2017/11/02/how-to-set-processor-affinity-in-linux-using-taskset/


On Thu, Jan 11, 2018 at 12:42 PM, <mott@xxxxxxxxxx> wrote:

Thanks. I'd heard about using more than one server but wasn't sure how to go about doing it. By creating two servers, will these automatically run on two different cores, or do you need to specify a core for each server? (I couldn't see how that would be done.) I didn't understand the idea of two virtual cores on a 4-core machine. Is it not possible to run 4 servers, one on each core?

Cheers,

Iain


On 11/01/2018 15:02, sludgefree@xxxxxxxxx wrote:
Not EXACTLY what you're looking for, but after failing to get Supernova working, I've used all 4 cores on my Linux laptop before without Supernova by running 2 servers.

I have a quad-core chip, so two of the cores are "virtual" - I'm not clear on what that means, but I am able to hit 100% CPU usage with just 2 servers. Experimentally, I don't go much higher than 70-80% overall usage, as it tends to cause audio glitches if I go any higher.

Depending on your setup, maybe you could do something similar. You could send each new note to an alternating server, or you could run different "instruments" on two servers. You could have one server's output passed to another via jackd. You can receive messages in one sclang which could figure out the load balancing to your heart's content.

What you CAN'T do is share buffers or transparently re-order nodes between servers.

Hope someone figures out Supernova, or this helps in the meantime! <3


On Thu, Jan 11, 2018 at 7:03 AM, <mott@xxxxxxxxxx> wrote:
Thanks a lot Fredrik. I should have thought of trying the built-in audio. With the built-in audio I can run the program at up to 100% CPU with no crash, just lots of dropouts, so it must be a hardware problem associated with the USB card, and I'll try to isolate the exact issue. I wasn't aware of the GPU adjustment either, so I set the GPU memory to 2 MB.

Would still like to make use of all 4 cores. If anyone knows about running Supernova on the Pi, please respond.

All the best!

Iain



On 10/01/2018 21:41, f@xxxxxxxxxxxxxxxxxxx wrote:
hi,
it sounds a bit strange that the server crashes at around 65%.  in my experience one can go higher, and when overloaded you normally start hearing dropouts and crackles (xruns). the server can even take that for a while and recover if the load decreases.  seldom does the server totally crash.  i think sclang and jack more often lock up first.

i'd try to figure this out by writing a synthdef that eats a lot of cpu (a static one - not spawning many new synths from sclang).  leave it running for a while and then add another one.  how high can you go in server cpu?
do you get roughly the same performance when using the pi's internal audio vs the external usb card?

if you can push your test synthdef much higher, up toward 100%, with both audio cards, then it's more likely a problem with your sc code.  are you passing a lot of data between sclang and scsynth?  network clogged up?
or doing something that eats up ram?  (did you minimise gpu memory to give a bit more ram to the cpu under raspi-config -> advanced?)

overheating can be ruled out by temporarily adding a spare computer fan blowing over the board and seeing if it takes longer until the crash.  heatsinks might help, but i doubt this is a heat issue if it is only sc crashing.  also try another power supply and other usb cables.  bad usb cables / supplies can cause a voltage drop, and then the system might crash/glitch, or power to external usb devices might drop when the pi itself is drawing a lot of current.

sorry, don't know if supernova could help or if it is at all possible to install on a pi.
good luck,
_f

On 10 Jan 2018 at 22:10, mott@xxxxxxxxxx wrote:

Hello list,

I'm doing some experiments running a project on a Raspberry Pi 3, for which I built SC from source using the instructions here: http://supercollider.github.io/development/building-raspberrypi

It's running OK; however, once the CPU rises above 65% capacity, the server crashes and scsynth needs to be killed off before the code can be run again. There are usually no error messages. Sometimes the following is printed before the crash:

JackEngine::XRun: client = SuperCollider was not finished, state = Running
JackAudioDriver::ProcessGraphAsyncMaster: Process error

And after the crash, the command s.avgCPU yields a static result, so I assume that it is the server that has crashed.

Firstly, is this to be expected?

The CPU is set to "performance" mode and I'm running the code from the command line, without scide/emacs. The jack set up is:

/usr/local/bin/jackd -P75 -dalsa -dhw:1 -r44100 -p1024 -n3

This has as large a latency as I'd wish to use on this particular project. Sometimes I can run a program for hours at -p1024; other times, for no apparent reason, I need to use -p2048 to run something a bit heavy. Temperature of the board?
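For reference, the nominal JACK buffer latency implied by the -p1024 -n3 -r44100 flags above works out to period size × number of periods ÷ sample rate, roughly 70 ms:

```shell
# Nominal JACK buffer latency for -p1024 -n3 at 44100 Hz, in milliseconds
awk 'BEGIN { printf "%.1f ms\n", 1024 * 3 / 44100 * 1000 }'
# prints: 69.7 ms
```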

At the moment I'm using a Behringer UCA222 USB interface. Without a powered hub, the server crashed at around 35 or 45%. I have a stereo "injector sound card" on order. This sits right on the board and doesn't require additional power. It's capable of very low latencies, apparently, and I hope it will help.

The other question I wanted to ask is: might supernova be a solution? It didn't get built with the instructions cited above, so I don't know if it can be run on a Raspberry Pi 3. Using the "nmon" utility, I see that SC is using only one of the 4 cores.

I'd be grateful for any suggestions you can send.

All the best!

Iain

--
_________
Iain Mott
http://escuta.org


_______________________________________________
sc-users mailing list

info (subscription, etc.): http://www.birmingham.ac.uk/facilities/ea-studios/research/supercollider/mailinglist.aspx
archive: https://listarc.bham.ac.uk/marchives/sc-users/
search: https://listarc.bham.ac.uk/lists/sc-users/search/

   #|
      fredrikolofsson.com     musicalfieldsforever.com
   |#

