Use GPU to generate sound in Processing (like in Shadertoy)

Hi all… does anyone know if it's possible to use the function mainSound() in a shader and use the vec2 output to generate an audio signal?

If you look at the second tab of this link:
you can see the shader that uses this function to generate the audio.

I understand that this vec2 value is passed to the Web Audio API…
I want to pass this value to the Minim library, or another Processing sound library.
Do you think it's possible?
Thank you in advance


Why with a shader and not with the CPU?

Because I need to manipulate audio with the FFT.
The FFT gives me a matrix, like an image, so I figured that doing matrix operations in real time would be more efficient on the GPU. Sometimes I have to do a lot of operations. For this work I don't need precision; if I needed precision the CPU would surely be a better idea. In fact, sometimes when I use Shadertoy I can hear some noise because of the float precision.

I see. Do you need float buffers for that? AFAIK frame buffers in Processing are byte-based (so you only have a resolution of 256 levels per channel).

Answering your question: wouldn't it be enough to output an image and then send the pixels array to the sound library? I haven't found examples of a Processing sound library accepting your own data though. Maybe someone else knows.

With Minim I can do wavetable synthesis with table().
I want to know how to access the vec2 return value of the shader to put it in the table.

As I was trying to describe, you can output colors with the shader. So output pixels. Then you can read the colors of those pixels and send them to Minim. At least this is one approach; there are probably others. So don't output a vec2, but an image instead.
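To illustrate the decoding step, here is a minimal plain-Java sketch (not a full Processing sketch) of turning one pixel back into an audio sample. It assumes the pixels arrive as 8-bit ARGB ints, as in Processing's pixels[] array, and that the shader wrote the sample into the red channel; the class and method names are just for illustration.

```java
public class PixelToSample {
    // Extract the red channel (0..255) from an ARGB pixel and map it
    // to a signed audio sample in [-1, 1].
    public static float sampleFromPixel(int argb) {
        int red = (argb >> 16) & 0xFF;        // 0..255
        return (red / 255.0f) * 2.0f - 1.0f;  // map to [-1, 1]
    }

    public static void main(String[] args) {
        System.out.println(sampleFromPixel(0xFF000000)); // red = 0   -> -1.0
        System.out.println(sampleFromPixel(0xFFFF0000)); // red = 255 ->  1.0
    }
}
```

You would loop over the whole pixels[] array this way to fill a float buffer before handing it to the sound library. Note that 256 levels per channel is very coarse for audio (8-bit quantization noise); packing a sample across two channels would give you 16 bits.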

This is often done the opposite way: for audio-driven visuals, you take the FFT, encode the FFT values as pixel colors in a long thin image, and then the shader reads the colors of those pixels (a texture) to produce visuals.
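A sketch of that encoding direction, again in plain Java (the normalization-by-maximum scheme here is just one possible choice, not a fixed convention): each FFT bin becomes one grayscale pixel of a 1×N image that a shader could sample as a texture.

```java
public class FftToPixels {
    // Normalize FFT magnitudes by their maximum and quantize each bin
    // to an opaque 0..255 grayscale ARGB pixel, one pixel per bin.
    public static int[] encode(float[] magnitudes) {
        float max = 0f;
        for (float m : magnitudes) if (m > max) max = m;
        int[] pixels = new int[magnitudes.length];
        for (int i = 0; i < magnitudes.length; i++) {
            int v = (max > 0f) ? Math.round(magnitudes[i] / max * 255f) : 0;
            pixels[i] = 0xFF000000 | (v << 16) | (v << 8) | v; // opaque gray
        }
        return pixels;
    }

    public static void main(String[] args) {
        int[] px = encode(new float[] { 0f, 0.5f, 1.0f });
        for (int p : px) System.out.printf("%08X%n", p);
    }
}
```

In Processing you would copy such an array into a PImage's pixels[] and pass it to the shader as a texture uniform.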

Are you sure the GPU is faster for what you need? The GPU is very fast at calculating lots of values in parallel. But when dealing with audio you often do not want lots of values in parallel, because that would involve a large buffer, which would introduce latency. With audio you want to be fast in a small loop (unless you are processing audio in parallel, that is, not in real time but applying effects to existing audio files). Also, the GPU is fast, but sending data to the GPU and downloading the result back to the CPU is not fast.
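The buffering latency mentioned above is simple to estimate: it is just buffer size divided by sample rate. A quick worked example (the numbers are illustrative, not measured):

```java
public class BufferLatency {
    // Milliseconds of delay introduced by collecting `bufferSize`
    // samples at `sampleRate` Hz before they can be played back.
    public static double latencyMs(int bufferSize, double sampleRate) {
        return bufferSize / sampleRate * 1000.0;
    }

    public static void main(String[] args) {
        // A 1024-sample buffer at 44100 Hz adds roughly 23 ms,
        // and that is before any GPU upload/download overhead.
        System.out.printf("%.1f ms%n", latencyMs(1024, 44100.0));
    }
}
```

So a per-frame GPU round trip (upload, render, read pixels back) has to fit comfortably inside that window, or the audio will glitch.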

But maybe you already tested this and you know it’s the right approach in your case?