Visor - New tool for using Processing in live performance

Sonic Pi is fantastic in that you can condense musical actions into simple lines of code, e.g. play :e2 or sample :amen. Video seems harder: there are more dimensions at play, so actions are harder to express that concisely. We could focus on samples of video clips (as VJs do with footage, much like musicians do with audio samples), but then we lose the flexibility of generative graphics. I'm interested in how we can build environments that let us work at a high level (VJing) and at a low level (creative coding of generative graphics).

I could be wrong, but I believe Joshua uses pre-written sketches whose parameters are mapped to MIDI controllers, allowing interaction during performance. Visor offers features to create these kinds of mappings more easily, even in the middle of a live performance.
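To make the idea concrete, here is a minimal sketch (plain Ruby, with illustrative names, not Visor's actual API) of the core of such a mapping: scaling a 7-bit MIDI CC value (0..127) into a sketch parameter's range.

```ruby
# Hypothetical helper: map a MIDI continuous controller value (0..127)
# linearly onto a parameter range [lo, hi].
def map_cc(value, lo, hi)
  lo + (value / 127.0) * (hi - lo)
end

# e.g. a knob at its midpoint driving a circle's radius between 10 and 200:
radius = map_cc(64, 10.0, 200.0)
```

The interesting part in a live-coding tool is not the arithmetic but being able to declare or rewire such mappings while the sketch is running.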

The concept you defined here sounds interesting and I would like to read more. I also agree that it is useful to have the code fully represent the state of the system. I couldn't think of a straightforward way to do this with Processing in Ruby while also achieving the amount of liveness I was looking for, so I opted to use reflective techniques to visualise the state of the system. For example, the state GUI shows all of the instance variables currently defined on the layer, even if the code that created a variable has since been deleted. What Visor currently lacks is a way of presenting which methods or classes are defined. So there is definitely a disconnect between the code and the system state, but it allows you to work more imperatively with the REPL-like code editors.
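The reflective approach can be sketched in plain Ruby (the class and variable names here are illustrative, not Visor's internals): the state GUI only needs the object, not the source code, to enumerate its current instance variables.

```ruby
# A layer object whose setup code defines some state.
class Layer
  def setup
    @radius = 40   # even if this code is later deleted,
    @speed  = 2.5  # the variables persist on the live object
  end
end

layer = Layer.new
layer.setup

# Reflect over whatever state currently exists on the object,
# building a name => value snapshot a GUI could display:
state = layer.instance_variables.map do |name|
  [name, layer.instance_variable_get(name)]
end.to_h
# state => { :@radius => 40, :@speed => 2.5 }
```

This is why the display can drift from the code: the snapshot comes from the running object, so stale variables remain visible until the object itself is reset.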

How does PraxisLIVE approach this problem? If I remember correctly, you cleanly separate your data from your sketch classes, so that the sketch code can be hot-swapped entirely without affecting the state of the data?