I’m still very new to coding, but I’m currently working with a 3D model that is motorized and reacts to sound, and I’m trying to figure out the best way to translate that digitally. I’ve written code that “reveals” an image once the music reaches a certain volume, but that’s not quite the effect I’m looking for. Is it possible to write code that plays an animation in response to real-time sound? Any help would be greatly appreciated, thank you.
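One way to go from a one-time threshold reveal to a continuous animation is to map the live volume reading onto an animation frame every draw cycle. Here’s a minimal, framework-free sketch of that idea — in p5.js the reading would come from `p5.AudioIn`’s `getLevel()` (roughly 0–1) inside `draw()`; here a few readings are simulated, and the names and the 0.3 ceiling are illustrative, not from the original post:

```javascript
const TOTAL_FRAMES = 60; // frames of the animation to scrub through

// Map a level in [0, maxLevel] onto a frame index, clamped at both ends —
// equivalent to p5's map(level, 0, maxLevel, 0, TOTAL_FRAMES - 1, true).
function frameForLevel(level, maxLevel = 0.3) {
  const t = Math.min(Math.max(level / maxLevel, 0), 1);
  return Math.round(t * (TOTAL_FRAMES - 1));
}

// Quiet, moderate, and loud readings advance the animation smoothly:
console.log([0, 0.15, 0.5].map((l) => frameForLevel(l))); // → [ 0, 30, 59 ]
```

Because the frame index tracks the level continuously, the animation plays forward as the sound gets louder and falls back as it gets quieter, instead of triggering once at a threshold.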
If you don’t mind p5.js instead of Processing, I recommend looking into the Teachable Machine integration! The interface for building the machine-learning model is really simple, and it integrates directly with p5. I made a simple model that listens to the mic and can recognize whistling (the categories being whistling vs. background noise) and changes the background color depending on the classification and the model’s confidence in it. If this interests you, maybe you could use it for inspiration and adapt it to your 3D situation. p5.js Web Editor
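The color-changing part of the approach above can be sketched without the browser pieces. A sound classifier (e.g. ml5’s `soundClassifier` loading a Teachable Machine model) delivers results like `[{ label, confidence }, …]` to a callback; the sketch then just turns the top result into a color. The label names and the gray-to-blue blend below are illustrative assumptions, not the original sketch:

```javascript
// Turn a classification result into a background color:
// gray for background noise, blending toward blue as
// confidence in "Whistling" grows. Results arrive sorted
// by confidence, so the top guess is results[0].
function backgroundFor(results) {
  const top = results[0];
  if (top.label !== "Whistling") return [128, 128, 128]; // background noise → gray
  const c = top.confidence; // 0..1
  // Linearly blend each RGB channel: gray (128,128,128) → blue (0,0,255)
  return [
    Math.round(128 * (1 - c)),
    Math.round(128 * (1 - c)),
    Math.round(128 + 127 * c),
  ];
}

console.log(backgroundFor([{ label: "Whistling", confidence: 1 }])); // → [ 0, 0, 255 ]
console.log(backgroundFor([{ label: "Background Noise", confidence: 0.9 }])); // → [ 128, 128, 128 ]
```

In a real sketch you’d call this from the classifier’s result callback and pass the returned array to `background()` each frame; the same mapping idea extends to driving any animation parameter from the confidence value.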