I’m trying to create a Processing application where a user can:
- Go to a web address on the local network
- Website gets the user’s phone orientation and sends that data to a Processing sketch located on a Raspberry Pi (which is also acting as the website’s server)
- The Processing sketch on the Pi interprets the orientation data into sound using the Beads library
- The audio from Beads is then streamed back to the website so the user can hear it through their phone’s audio jack
I have everything just about solved aside from sending the data down to Processing and then sending the audio back to the website. From my research this all seems feasible, and I’m planning on using OSC for the data/audio sending, but I’m new to it. Does this sound like it’ll work, and does anyone have good starting places for the data/audio sending?
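For anyone unfamiliar with what OSC actually puts on the wire: an OSC message is a small binary payload (usually sent over UDP) with a fixed layout — a null-padded address pattern, a null-padded type tag string, then big-endian arguments. Below is a minimal Java sketch (Processing is Java, so this drops straight into a sketch or a helper class) that hand-encodes such a message. The `/orientation` address and the alpha/beta/gamma argument names are my own assumptions, loosely modeled on the browser’s `deviceorientation` event; in practice a library like oscP5 would do this encoding for you.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OscOrientation {
    // Pad a string with NULs to a multiple of 4 bytes (always at least one
    // trailing NUL), as the OSC 1.0 encoding requires.
    static byte[] oscString(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        int padded = (raw.length / 4 + 1) * 4;
        byte[] out = new byte[padded];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    // Build an OSC message at a hypothetical "/orientation" address,
    // carrying three floats (e.g. alpha/beta/gamma from the phone).
    static byte[] orientationMessage(float alpha, float beta, float gamma) {
        byte[] addr = oscString("/orientation"); // assumed address pattern
        byte[] tags = oscString(",fff");         // type tags: three floats
        ByteBuffer buf = ByteBuffer.allocate(addr.length + tags.length + 12);
        buf.put(addr).put(tags);
        // ByteBuffer writes big-endian by default, which OSC requires.
        buf.putFloat(alpha).putFloat(beta).putFloat(gamma);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] packet = orientationMessage(10.0f, 20.0f, 30.0f);
        // "/orientation" pads to 16 bytes, ",fff" to 8, plus 3 * 4-byte floats
        System.out.println(packet.length); // prints 36
    }
}
```

The resulting byte array is what you would hand to a `DatagramSocket` (or let oscP5 handle end to end) — the point is just that orientation data fits naturally into one small OSC message per update.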