Send OSC Audio to Website

Hi Everyone,
I’m trying to create a Processing application where a user can:

  1. Go to a web address in a local network
  2. Website gets the user’s phone orientation and sends that data to a Processing sketch located on a Raspberry Pi (which is also acting as the website’s server)
  3. The Processing sketch on the Pi interprets the orientation data into sound using the Beads library
  4. The audio from Beads is then sent back up to the website so the user can hear it through their phone’s audio jack

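For step 2, a minimal sketch of the browser side could look like the following. Everything here is an assumption for illustration: the WebSocket URL `ws://raspberrypi.local:8025` is hypothetical, and since browsers can’t send raw UDP/OSC themselves, something on the Pi would need to relay these messages to the sketch.

```javascript
// Hypothetical sketch: read the phone's orientation in the browser and
// send it to a WebSocket server on the Pi. The URL/port are assumptions.

// Pack the three orientation angles into a small JSON message.
function packOrientation(alpha, beta, gamma) {
  return JSON.stringify({ type: "orientation", alpha, beta, gamma });
}

// Browser-only wiring, guarded so the helper above works outside a browser.
if (typeof window !== "undefined") {
  const ws = new WebSocket("ws://raspberrypi.local:8025");
  window.addEventListener("deviceorientation", (e) => {
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(packOrientation(e.alpha, e.beta, e.gamma));
    }
  });
}
```

Note that on iOS the page also has to call `DeviceOrientationEvent.requestPermission()` from a user gesture before `deviceorientation` events fire.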
I have everything just about solved aside from sending the orientation data down to Processing and then sending the audio back to the website. From my research this all seems feasible, and I’m planning to use OSC for the data/audio sending, but I’m new to it. Does this sound like it’ll work, and does anyone have good starting places for the data/audio sending?


So this is actually Android related?
You will make an app with a WebView and use the Ketai library for sensor output?
I just don’t understand why you are using the Pi as an intermediate.

Thanks for the reply! I hadn’t really considered running it as an Android app because I want it to work on iOS phones too. There are also going to be a lot of objects/audio files, so I’ve been worried that running it all would be too heavy for a phone. The Pi acts as a beacon that disperses audio to multiple phones on a local network. I now have item 2 above solved, so it’s really a matter of sending the audio from the Pi to the website, but after some research it looks like OSC only carries control data, not audio signals.
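Right — OSC is a control protocol, not an audio transport. One common workaround is to stream raw PCM frames from the Pi over a WebSocket and play them in the browser with the Web Audio API. Here’s a hedged sketch of the browser side only; the `ws://` URL, the 44100 Hz sample rate, and the mono 16-bit frame format are all assumptions that would have to match whatever the Pi actually streams.

```javascript
// Hypothetical sketch of the browser side: receive raw 16-bit mono PCM
// frames over a WebSocket and queue them for playback with Web Audio.
// URL, sample rate, and frame format are assumptions.

// Convert signed 16-bit samples to the [-1, 1] floats Web Audio expects.
function int16ToFloat32(int16) {
  const out = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    out[i] = int16[i] / 32768;
  }
  return out;
}

if (typeof window !== "undefined") {
  const ctx = new AudioContext();
  const ws = new WebSocket("ws://raspberrypi.local:8026");
  ws.binaryType = "arraybuffer";
  let playhead = 0; // next scheduled start time, in AudioContext seconds

  ws.onmessage = (ev) => {
    const samples = int16ToFloat32(new Int16Array(ev.data));
    const buf = ctx.createBuffer(1, samples.length, 44100);
    buf.copyToChannel(samples, 0);
    const src = ctx.createBufferSource();
    src.buffer = buf;
    src.connect(ctx.destination);
    playhead = Math.max(playhead, ctx.currentTime);
    src.start(playhead); // queue frames back to back to avoid gaps
    playhead += buf.duration;
  };
}
```

This naive scheduling will glitch if frames arrive late, so for anything beyond a prototype you’d want buffering on the client, or a ready-made streaming path like an Icecast/HTTP audio stream or WebRTC instead.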