Album cover for Floating Spectrum

This is one of the most beautiful works in Processing I’ve seen. This is what it was made for: art and electronics.
The only thing I would change is the 70 knobs. I would wear gyro/accelerometer sensors on my arms, hands and fingers and stand in front of the screen like a conductor conducting an orchestra, composing at the same time.


Thank you @noel :slight_smile:

Wearing sensors to control the visuals is a good idea. I have thought about it, but it has some issues.

One is that I would be stealing focus from the visuals. Would people listen to the music? Look at the visuals? Or look at me performing? I don’t want to be the center of attention in this piece.

The second issue is more technical. I have thought of using two Vive controllers. In my current setup I control 70 parameters, which would be 35 parameters per hand. Each controller gives 3 translation and 3 rotation measurements, so 6 values. Then there is the trigger under the index finger and an x/y trackpad for the thumb, which brings it to 9, plus two push buttons, so maybe 11. That still leaves me 24 controls short per hand. And I think you would have to be superhuman to control all those inputs precisely and simultaneously. I believe it's one of those things that sounds good in theory but does not really work in practice. I've seen lots of videos of people trying that kind of control for sound, and in most cases it did not make sense to me. It felt very gimmicky, with jerky motions trying to show a connection.

I do not use any display during the performance, only the same projection that everyone sees. The only feedback I have is the visuals themselves and the MIDI controllers, whose LEDs indicate the current values in case I need to look at them.

I do plan to simplify and experiment with using only 16 push rotary encoders plus a computer keyboard to switch between 4 pages, giving me a total of 64 knobs and 64 buttons.
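
Just to sketch that paging idea (not my actual performance code; the names `params`, `page` and `encoderTurned` are made up, and the mouse stands in for a real encoder): keys 1–4 select a page, so the same 16 encoders address all 64 parameters.

```java
float[] params = new float[64];   // 16 encoders x 4 pages
int page = 0;                     // selected with the computer keyboard

void setup() {
  size(400, 200);
}

void draw() {
  background(0);
  // draw the 16 parameters of the current page as bars
  for (int i = 0; i < 16; i++) {
    float v = params[page * 16 + i];
    fill(255);
    rect(10 + i * 24, height - 10, 16, -v * (height - 20));
  }
}

// keys 1-4 switch pages
void keyPressed() {
  if (key >= '1' && key <= '4') page = key - '1';
}

// called when encoder e (0..15) reports a new 7-bit value (0..127)
void encoderTurned(int e, int value) {
  params[page * 16 + e] = value / 127.0;
}

// stand-in for the hardware: dragging the mouse turns encoder 0
void mouseDragged() {
  encoderTurned(0, constrain(mouseX * 128 / width, 0, 127));
}
```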

One thing I have thought about is having some kind of live coding interface instead. This would let me give more complex instructions to the system: instead of setting the next value with a knob (the system interpolates slowly towards that value to overcome the 7-bit MIDI resolution), a live coding environment would let me describe patterns, for example: blink 5 times in 6 seconds, then fade out. Or move right fast, then left slow. Or even connect inputs to outputs in real time, so certain sound analysis inputs can drive different behaviors over the duration of the performance.
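
That slow interpolation towards the knob value looks roughly like this toy sketch, where the mouse stands in for a MIDI CC and all the names (`target`, `current`) are just illustrative:

```java
float target = 0;    // last 7-bit knob value, mapped to 0..1
float current = 0;   // smoothed value actually driving the visuals

void setup() {
  size(400, 200);
}

void draw() {
  background(0);
  // ease towards the target; the small factor hides the 128-step coarseness
  current = lerp(current, target, 0.05);
  fill(255);
  ellipse(width/2, height/2, 20 + current * 150, 20 + current * 150);
}

// stand-in for a MIDI CC handler: drag the mouse to set a new 7-bit target
void mouseDragged() {
  int cc = int(map(mouseX, 0, width, 0, 127));
  target = cc / 127.0;
}
```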

This kind of interaction is closer to how you play games like StarCraft, where you give orders to your units and they carry them out over the next seconds or minutes. The nice thing is that you can express complex commands that overlap in time, producing more complexity than you can manage with 1:1 controllers and only two hands.
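
A rough sketch of that "issue an order, let it run" idea, with a made-up `Command` class: press 'b' to queue "blink 5 times in 6 seconds, then fade out", and any number of commands can overlap.

```java
ArrayList<Command> commands = new ArrayList<Command>();

void setup() {
  size(400, 200);
}

void draw() {
  background(0);
  float bright = 0;
  // every active command contributes to the output, so orders overlap in time
  for (Command c : commands) {
    bright = max(bright, c.value(millis()));
  }
  fill(255 * bright);
  rect(100, 50, 200, 100);
}

// press 'b' to issue "blink 5 times in 6 seconds, then fade out"
void keyPressed() {
  if (key == 'b') commands.add(new Command(millis(), 6000, 5));
}

class Command {
  int start, duration, blinks;
  Command(int start, int duration, int blinks) {
    this.start = start;
    this.duration = duration;
    this.blinks = blinks;
  }
  // brightness 0..1 at time t: square-wave blinks, then a fade out
  float value(int t) {
    float dt = t - start;
    if (dt < 0) return 0;
    if (dt < duration) {
      return (int)(dt / (float)duration * blinks * 2) % 2 == 0 ? 1 : 0;
    }
    // fade out over 2 seconds after the blinking ends
    return constrain(1 - (dt - duration) / 2000.0, 0, 1);
  }
}
```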

That said, I find adding a little bit of what you mention useful: I will not be on stage, BUT I could have a tilt sensor on my head or pedals under my feet. Something to try :slight_smile:


After I wrote my last comment I went to bed, and in my dream I threw all the electronics away and bought a Kinect 2. With not only the square plane but also depth, I had even more control. A laptop with an LED-display-like screen sat in front of me for visualizing the current values.
As I also suffer from shyness, I stood to the side of the main screen in the darkness. That was a very nice dream, and I even think that, with practice, it is doable.
