Hi all,
I would like to use a certain output (or output pair) on the sound card.
Sound.list() reports that my device has 14 “max inputs” and 14 “max outputs” which is correct, but I see no way to access them. All sound emerges from outputs 1+2.
Is there any way to access the higher numbered outputs?
Hi @fastfourier. I’m fairly certain you cannot exceed 2 channels: the backend of the Processing Sound library, JSyn, by default grabs the first two usable channels of the default (or chosen) audio device. You would have to rewrite the library code to get multi-channel support. So as far as I know, you can’t specify the output channels using Processing Sound.
Thanks for the info @robertesler. Yeah I had a feeling that was the case. I’m not sure this is even possible in pure Java - I haven’t found any examples that work.
The Java application REW does this, but it’s closed source and I read somewhere that they are using a non-Java backend to access the multiple inputs and outputs…
Well, I figured out how to output multichannel at least. You can get access to a “SourceDataLine” (output) which lets you write (bitDepth/8) × channels bytes at a time, i.e. one frame. Here’s a sketch which outputs a sine wave. I am on a Mac and used the BlackHole virtual audio device for testing.
import javax.sound.sampled.*;
static final String MIXER_NAME = "BlackHole 16ch";
//static final String MIXER_NAME = "MOTU UltraLite mk3 Hybrid";
static final int BIT_DEPTH = 16;
static final int SAMPLE_RATE = 44100;
Mixer.Info[] mixerInfo;
Mixer mixer;
AudioFormat af;
SourceDataLine sdl;
void setup()
{
  size(200, 200);
  mixerInfo = AudioSystem.getMixerInfo();
  // find our interface by name
  for (int i = 0; i < mixerInfo.length; i++) {
    if (mixerInfo[i].getName().equals(MIXER_NAME)) {
      mixer = AudioSystem.getMixer(mixerInfo[i]);
      print(String.format("Found mixer %s!\n", MIXER_NAME));
      break;
    }
  }
  if (mixer == null) {
    print(String.format("Can't find mixer %s, quitting\n", MIXER_NAME));
    print("Available mixers are:\n");
    printArray(mixerInfo);
    return;
  }
  // find the audio format we want
  Line.Info[] lineInfo = mixer.getSourceLineInfo();
  try {
    for (Line.Info li : lineInfo) {
      Line line = mixer.getLine(li);
      if (line instanceof SourceDataLine) {
        SourceDataLine source = (SourceDataLine) line;
        DataLine.Info i = (DataLine.Info) source.getLineInfo();
        for (AudioFormat format : i.getFormats()) {
          if (format.getChannels() > 2 && format.getSampleSizeInBits() == BIT_DEPTH && format.isBigEndian() && format.getEncoding() == AudioFormat.Encoding.PCM_SIGNED) {
            print(String.format("Found this format: %s\n", format));
            af = new AudioFormat(SAMPLE_RATE, BIT_DEPTH, format.getChannels(), true, true); // signed, big-endian
          }
        }
      }
    }
  }
  catch (LineUnavailableException e) {
    print("line unavailable, exiting\n");
    return;
  }
  if (af == null) {
    print("Couldn't find the desired audio format\n");
    return;
  }
  try {
    // now that we have found a multi-channel audio format in the mixer, open and start a SourceDataLine (output)
    DataLine.Info sdlInfo = new DataLine.Info(SourceDataLine.class, af);
    sdl = (SourceDataLine) mixer.getLine(sdlInfo);
    println(String.format("got the line, %d bytes per frame", af.getFrameSize()));
    sdl.open();
    sdl.start();
    // generate one cycle of a sine wave (64 samples) as 16-bit big-endian PCM
    byte[] buffer = new byte[128];
    float step = TWO_PI * 2 / buffer.length; // phase advance per sample (2 bytes)
    float angle = 0;
    int i = buffer.length;
    while (i > 0) {
      float sine = sin(angle);
      int sample = (int) Math.round(sine * 32767);
      buffer[--i] = (byte) sample;        // LSB at the odd index
      buffer[--i] = (byte) (sample >> 8); // MSB at the even index (big-endian)
      angle += step;
    }
    // write the sine wave to the output 2000 times
    for (int n = 0; n < 2000; ++n) {
      writeToOutput(4, buffer); // write to channel 5 (channel indices start at 0)
    }
    // shut down audio
    sdl.drain();
    sdl.stop();
    sdl.close();
  }
  catch (LineUnavailableException e) {
    println(e);
  }
}
void writeToOutput(int channel, byte[] buffer) {
  // Writes a buffer to a particular channel of the sound card (16-bit big-endian only at the moment).
  // For example, an 8-channel soundcard gives you a 16-byte "frame size". To output to channel 5
  // (index 4) you have to insert your 16-bit audio data into bytes 8+9, most significant byte first:
  //
  // 00 00 00 00 00 00 00 00 MS LS 00 00 00 00 00 00
  //
  // then write the whole frame to the SourceDataLine we opened earlier.
  // This writes it one frame at a time, which is probably horribly inefficient.
  byte[] b = new byte[af.getFrameSize()]; // make a buffer the same size as one frame
  int msb = channel * 2; // byte position of this channel's MSB within the frame
  for (int i = 0; i < buffer.length; i += 2) { // one 2-byte sample at a time
    b[msb] = buffer[i];       // copy in MSB
    b[msb+1] = buffer[i+1];   // copy in LSB
    sdl.write(b, 0, b.length); // write one frame to the soundcard
  }
}
// just for debugging: hex-dump a buffer, w bytes per row
void dumpBuffer(int w, byte[] buf) {
  for (int i = 0; i < buf.length; i++) {
    print(String.format("%02X ", buf[i]));
    if ((i+1) % w == 0) println();
  }
}
Of course, to use this you need the raw sample data. Not a huge deal for me because all I needed was a sine wave anyway, but I don’t think raw samples are available in the Sound library.
Taking a look at the JSyn source code, I don’t think it would be too difficult to extend, but it’s beyond my capabilities for sure. I’ll leave this here in the hope that someone can manage it!
Wish I had dug around in JSyn a bit more before writing all that bollocks lol. You can output to any channel with JSyn used directly in Processing (drop the release .jar onto the Processing editor window):
import com.jsyn.*; // JSyn and Synthesizer classes
import com.jsyn.devices.*; // AudioDeviceManager and AudioDeviceFactory
import com.jsyn.unitgen.*; // unit generators like SineOscillator and ChannelOut
static final String MIXER_NAME = "BlackHole 16ch";
//static final String MIXER_NAME = "MOTU UltraLite mk3 Hybrid";
Synthesizer synth = JSyn.createSynthesizer();
AudioDeviceManager am = AudioDeviceFactory.createAudioDeviceManager();
void setup() {
  int deviceID = -1;
  int deviceOutputs = 0;
  for (int i = 0; i < am.getDeviceCount(); i++) {
    if (am.getDeviceName(i).equals(MIXER_NAME)) {
      deviceID = i;
      deviceOutputs = am.getMaxOutputChannels(i);
      print(String.format("Found device %s with %d output channels\n", MIXER_NAME, deviceOutputs));
      break;
    }
  }
  if (deviceID == -1) {
    print("can't find the soundcard\n");
    return;
  }
  ChannelOut myOutput;
  SineOscillator myOsc;
  synth.start(44100, -1, 0, deviceID, deviceOutputs); // init the whole shebang: no inputs, correct number of outputs
  // add a ChannelOut object, which lets you choose the output channel via setChannelIndex()
  synth.add(myOutput = new ChannelOut());
  myOutput.setChannelIndex(5); // channel 6 (indices start at 0)
  myOutput.start();
  // add a sine wave oscillator, -6dBFS @ 1kHz
  synth.add(myOsc = new SineOscillator(1000, 0.5));
  myOsc.start();
  // connect the oscillator to the output
  myOsc.output.connect(0, myOutput.input, 0);
}
Hi @fastfourier. Yes, using JSyn or Java Sound directly is the best option for that kind of functionality in Processing or Java. Glad you figured that out. Thanks for posting your code. I’m sure it will be useful for others too!
I have also used PortAudio’s Java bindings, which are pretty good with multi-channel support. They have some issues on Mac and aren’t easy to build on Linux, but work great on Windows.
You pose an interesting problem for us library developers: we often just support stereo audio on the first available writable outputs, simply because it’s quicker and easier. I haven’t yet explored a robust way to handle multi-channel audio other than writing the code by hand for each specific case, like you did. I’m now a bit more inspired.
Thanks for the topic!
Hey no worries man. Hopefully it’s useful to others. I spent a good few hours googling for this and nothing directly useful came up, although I did get some pointers on how to wade through the Java sound stuff. Even the jsyn method is pretty buried, there are no specific examples of that either.
I can see how multichannel would complicate things from a simplicity standpoint, especially in the Processing world where things should be straightforward. The way JSyn does it is quite nice, in that you can create additional outputs on a multichannel device if you want to.
I guess stereo in/stereo out was all there was for a long time; the paradigm has stuck, and it’s still good enough for 99% of cases. Although it’s a bit annoying when certain apps (Zoom, Skype et al.) insist on using the first two channels of your soundcard, and you have to resort to loopbacks and other nonsense to make your setup work.
Still, if you need multichannel, then you need it. I read one post from someone who needed a bunch of different outputs in Java and resorted to buying 7 of those cheap USB soundcards to get their rig working!
Hi. I am trying to run a simple non-linear sound system in Processing with a MOTU Mk5 audio interface. This is helpful info – thanks! I need to play samples out of specific audio channels (6 of them) – is that possible with samples?
I want to load up some samples and use some randomness to trigger what channel they play on for example. Any pointers really appreciated
It seems like you could create a SamplePlayer for each of your samples and a ChannelOutput for each of your outputs. Then write your own functions to connect and disconnect the SamplePlayer outputs to/from the MOTU outputs you want.
Hi – thanks for the response. Yes, I was looking at that example – the issue is I am a Processing person rather than a Java person! I always find it difficult to know how to instantiate things and call functions in Java from Processing. Can you share any simple Processing syntax to create a SamplePlayer, load a sample and call a function to play it? I tried various goofy hacks myself but couldn’t resolve it – just a basic Processing example I can build upon. If you have any time, that is – if not, no worries – I’ll keep trying.
I’m not a Java person either! I’m just good at copying and pasting. I chopped up the example posted earlier and this works in Processing 3.5.4. It will output the sample to Channel 6.
As for how to handle multiple outputs and samples, take a look at using ArrayList. You could create one ArrayList containing all the ChannelOut objects you need and refer to them by index.
I would suggest making a class to contain all the sample loading/playing stuff. In the class constructor you can pass in the Synthesizer object and the sample filename so the sample player can be added back to the synth.
Another ArrayList could hold all these sample playback classes. Have a look at examples → topics → advanced data → ArrayListClass to see an ArrayList of custom classes.
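To illustrate the ArrayList-of-classes shape in plain Java: the Voice class, its fields, and the filenames below are all hypothetical, and the JSyn loading calls are only hinted at in comments, since the point is just the container pattern.

```java
import java.util.ArrayList;

// One Voice per sample; in the real sketch it would also hold the
// VariableRateMonoReader and be handed the Synthesizer in its constructor.
class Voice {
    String filename;
    int channelIndex;

    Voice(String filename, int channelIndex) {
        // real version: load with SampleLoader.loadFloatSample(...) and
        // synth.add(new VariableRateMonoReader()) here
        this.filename = filename;
        this.channelIndex = channelIndex;
    }
}

public class VoiceList {
    public static void main(String[] args) {
        ArrayList<Voice> voices = new ArrayList<Voice>();
        voices.add(new Voice("kick.wav", 0));
        voices.add(new Voice("snare.wav", 5));
        // pick one at random, as in the trigger-on-a-random-channel idea
        Voice v = voices.get((int) (Math.random() * voices.size()));
        System.out.println(v.filename + " -> channel " + v.channelIndex);
    }
}
```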
import com.jsyn.*; // JSyn and Synthesizer
import com.jsyn.data.*; // FloatSample
import com.jsyn.devices.*; // AudioDeviceManager and AudioDeviceFactory
import com.jsyn.unitgen.*; // ChannelOut and the sample readers
import com.jsyn.util.*; // SampleLoader
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
Synthesizer synth = JSyn.createSynthesizer();
AudioDeviceManager am = AudioDeviceFactory.createAudioDeviceManager();
ChannelOut myOutput;
VariableRateDataReader samplePlayer;
static final String DEVICE_NAME = "MOTU UltraLite mk3 Hybrid";
void setup() {
  // find the soundcard
  int deviceID = -1;
  int deviceOutputs = 0;
  for (int i = 0; i < am.getDeviceCount(); i++) {
    println(String.format("Audio device %2d: %s", i, am.getDeviceName(i)));
    if (am.getDeviceName(i).equals(DEVICE_NAME)) {
      deviceID = i;
      deviceOutputs = am.getMaxOutputChannels(i);
      print(String.format("Found device %s with %d output channels\n", DEVICE_NAME, deviceOutputs));
      break;
    }
  }
  if (deviceID == -1) {
    print("can't find the soundcard, exiting\n");
    return;
  }
  // start the synth and add an output on channel 6 (index 5)
  synth.start(48000, -1, 0, deviceID, deviceOutputs);
  synth.add(myOutput = new ChannelOut());
  myOutput.setChannelIndex(5);
  myOutput.start();
  URL sampleFile;
  try {
    sampleFile = new URL("http://www.softsynth.com/samples/Clarinet.wav");
    // sampleFile = new URL("http://www.softsynth.com/samples/NotHereNow22K.wav");
  }
  catch (MalformedURLException e2) {
    e2.printStackTrace();
    return;
  }
  FloatSample sample;
  try {
    // Load the sample and display its properties.
    SampleLoader.setJavaSoundPreferred(false);
    sample = SampleLoader.loadFloatSample(sampleFile);
    System.out.println("Sample has: channels = " + sample.getChannelsPerFrame());
    System.out.println("            frames = " + sample.getNumFrames());
    System.out.println("            rate = " + sample.getFrameRate());
    System.out.println("            loopStart = " + sample.getSustainBegin());
    System.out.println("            loopEnd = " + sample.getSustainEnd());
    // connect the sample player to the ChannelOut we created earlier
    if (sample.getChannelsPerFrame() == 1) {
      synth.add(samplePlayer = new VariableRateMonoReader());
      samplePlayer.output.connect(0, myOutput.input, 0);
    } else if (sample.getChannelsPerFrame() == 2) {
      synth.add(samplePlayer = new VariableRateStereoReader());
      samplePlayer.output.connect(0, myOutput.input, 0);
      samplePlayer.output.connect(1, myOutput.input, 1);
    } else {
      throw new RuntimeException("Can only play mono or stereo samples.");
    }
    samplePlayer.rate.set(sample.getFrameRate());
    // We can simply queue the entire file,
    // or if it has a loop we can play the loop for a while.
    if (sample.getSustainBegin() < 0) {
      System.out.println("queue the sample");
      samplePlayer.dataQueue.queue(sample);
    } else {
      System.out.println("queueOn the sample for a short time");
      samplePlayer.dataQueue.queueOn(sample);
      synth.sleepFor(8.0);
      System.out.println("queueOff the sample");
      samplePlayer.dataQueue.queueOff(sample);
    }
    // Wait until the sample has finished playing.
    do {
      synth.sleepFor(1.0);
    } while (samplePlayer.dataQueue.hasMore());
    synth.sleepFor(0.5);
  }
  catch (IOException e1) {
    e1.printStackTrace();
  }
  catch (InterruptedException e) {
    e.printStackTrace();
  }
  // Stop everything.
  synth.stop();
}

void draw() {
}
I’m no expert, but I don’t think you want those synth.sleepFor calls in your eventual class, since they block the sketch. I’d suggest a play method where you pass in one of your ChannelOut objects; the class connects the samplePlayer to it and starts playback with samplePlayer.dataQueue.queue(sample).
Then have a check method that you can call in draw(). There you can see if the sample has stopped playing using if (!samplePlayer.dataQueue.hasMore()), and if it has, disconnect the ChannelOut and await another play command.
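The play/check lifecycle could be sketched like this in plain Java rather than a Processing sketch. Everything here is hypothetical: the JSyn objects are stood in by a simple frames-remaining counter so the connect/poll/disconnect logic itself can be exercised, with the real calls left as comments.

```java
// One playing voice: connects on play(), is polled once per frame,
// and disconnects itself when the (simulated) queue runs dry.
class SampleVoice {
    private int framesRemaining = 0; // stands in for samplePlayer.dataQueue.hasMore()
    private int channel = -1;        // index of the ChannelOut we are connected to

    void play(int channelIndex, int lengthFrames) {
        // real version: samplePlayer.output.connect(0, channelOuts.get(channelIndex).input, 0);
        //               samplePlayer.dataQueue.queue(sample);
        channel = channelIndex;
        framesRemaining = lengthFrames;
    }

    boolean check() {
        // call this from draw(); returns true when playback has just finished
        if (channel >= 0 && --framesRemaining <= 0) {
            // real version: disconnect samplePlayer.output from the ChannelOut here
            channel = -1;
            return true;
        }
        return false;
    }

    boolean isPlaying() { return channel >= 0; }
}

public class VoiceDemo {
    public static void main(String[] args) {
        SampleVoice v = new SampleVoice();
        v.play(5, 3); // pretend the sample lasts 3 frames
        int frames = 0;
        while (v.isPlaying()) { v.check(); frames++; }
        System.out.println("finished after " + frames + " frames");
        // prints: finished after 3 frames
    }
}
```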
This is really helpful – thanks! Yes – was hoping to roll this into a multi-purpose class for my students to be able to use without worrying about details. I’ll do my best to get this going and report back – thanks again
I’d also recommend Linux to anyone who wants multi-channel audio. This is down to how the underlying OS devices are exposed through JavaSound: on Linux you get one device with multiple channels; on the other OSes you get multiple stereo devices.
Does it? That’s new; it used to be stereo-only on Mac. It’s down to how the underlying operating system libraries represent channels. Windows represents multichannel cards as multiple stereo devices, at least with the API JavaSound uses (or used); JavaSound just reflects whatever the OS gives it. Although it’s been a while since I looked at all this. I have a bunch of libraries at JAudioLibs · GitHub
Core Audio on Mac has had multichannel devices for ages; not sure if it was in the very first OS X release, but definitely not long after. Multichannel in Java Sound works the same way as it does on Linux, as far as I can see.
It’s stereo-only for the standard MME or WDM audio on Windows. Whether you get multiple stereo devices seems to depend on the driver; I’ve seen some devices that do it and some that don’t. For “proper” multichannel, PortAudio/JACK use ASIO or WASAPI.
Yes, but JavaSound didn’t provide access to them. I think that might have changed some time ago, though. I haven’t had a chance to look at this on macOS or Windows for ages. Something to put on the to-do list!
Since you mention it, I’m not sure how well JACK works on Windows and macOS these days either? It was a good approach for solid, low-latency, multichannel audio with Java.