Hi everyone! I am just starting out with interactive art. I have seen a couple of projects made with Processing and openFrameworks that have an interactive cloth (wall) projected on a screen which users can interact with. Some examples might be:
I wasn’t able to find a lot of information on how to make it. Here is what I currently know:
- They use a Kinect to track where the cloth screen is pushed in.
- They pass that “touch” point to Processing, which updates the physics simulation to register a press.
Can you guide me on how to make that physics-simulated cloth design in Processing? And perhaps how to go about incorporating the Kinect input and the cloth press?
Any pointers would be extremely appreciated
Bit of a tangent here, but another alternative to the Kinect is some form of conductive cloth (see this, for example) or Velostat, which can be placed in grids to identify the specific area of touch (which is what the Kinect is being used for: to localize the touch).
That data can then be pushed to Processing.
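For the grid version, locating the press is just finding the cell whose reading moved furthest from its resting value. Here is a minimal plain-Java sketch of that idea – the matrix shape, baseline calibration, and threshold are my own assumptions about how readings would arrive from a microcontroller, not from any particular library:

```java
// Locate the pressed cell in a conductive-cloth / Velostat grid.
// Readings arrive as a rows x cols matrix (e.g., polled over serial);
// each cell's resting value is calibrated away before comparing.
public class GridTouch {
    final int[][] baseline;   // per-cell reading with nothing touching

    GridTouch(int[][] baseline) {
        this.baseline = baseline;
    }

    // Returns {row, col} of the strongest press above `threshold`,
    // or {-1, -1} if nothing exceeds it (nobody is touching).
    int[] locate(int[][] reading, int threshold) {
        int best = threshold, r = -1, c = -1;
        for (int i = 0; i < reading.length; i++) {
            for (int j = 0; j < reading[i].length; j++) {
                // how far this cell has drifted from its calibrated rest value
                int delta = Math.abs(reading[i][j] - baseline[i][j]);
                if (delta > best) { best = delta; r = i; c = j; }
            }
        }
        return new int[]{r, c};
    }
}
```

The per-cell baseline matters because resistive grids rarely read uniformly at rest; subtracting it first keeps a naturally "hot" cell from masking a real press elsewhere.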
Also, in our experience, the Kinect tends to heat up on occasion, leading to a shutdown. The Intel RealSense cameras may be a more robust alternative, depending on the application and how long you need the system running continuously…
A key early work in this tradition was Khronos Projector, which was pre-Kinect – it just used infrared LEDs and an infrared camera.
The Kinect or RealSense route is much, much easier than that IR-camera method, though. If you want a one-touch interface, just get a spandex screen, set up a Kinect behind the screen, and find the brightest (closest) point in the frame. That is your touch point, done.
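The brightest-point scan itself is a few lines. This is a plain-Java sketch so the logic is easy to test; in an actual Processing sketch you would feed it the depth pixel array from your Kinect library (Open Kinect for Processing exposes one, for example), and the threshold keeps an untouched screen from registering a phantom touch:

```java
// One-touch detection: scan a grayscale depth frame for its brightest pixel.
// With the Kinect behind the screen, the pushed-in point is closest to the
// camera, so it shows up brightest in the depth image.
public class BrightestPoint {
    // pixels: grayscale values 0..255, row-major, `w` pixels per row.
    // Returns {x, y} of the brightest pixel above `threshold`, else {-1, -1}.
    public static int[] find(int[] pixels, int w, int threshold) {
        int best = threshold, bx = -1, by = -1;
        for (int i = 0; i < pixels.length; i++) {
            if (pixels[i] > best) {
                best = pixels[i];
                bx = i % w;   // column within the row
                by = i / w;   // row index
            }
        }
        return new int[]{bx, by};
    }
}
```

In practice you would also smooth the result over a few frames, since raw depth is noisy at the edges.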
Here is a spandex screen:
I don’t have any experience with screens using conductive cloth / Velostat, but one caution is that it sounds like (?) you are going to want rear projection for a touchable wall. You’ll need conductive cloth WITH good rear-projection properties, or else that option is out.
Thank you so much for the response! It was super helpful. Any pointers on how I can move forward with the graphics generation? I personally think I need a 2D physics engine like toxiclibs.
It totally depends on what you want to do! You have a depth map, which you can apply directly to entities (lines, circles, and so on) in the field. A lot of effects – random vibration, scaling, color shifts – are just direct mappings using the core drawing API. There are no “physics” to model: the elastic sheet itself is physically modeling all of it for you, and you just need to render the outcome from the depth values.
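For example, a grid of circles whose size tracks the depth under each cell is just a linear remap of the raw depth value – no physics library involved. A minimal sketch (the 500..2000 depth range is a hypothetical calibration for illustration, not an official Kinect constant):

```java
// Direct depth-to-graphics mapping: each grid cell's circle radius comes
// straight from the depth sample under it.  remap() mirrors the behaviour
// of Processing's map() function.
public class DepthToRadius {
    // Linearly remap v from [inLo, inHi] to [outLo, outHi].
    public static double remap(double v, double inLo, double inHi,
                               double outLo, double outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    // Closer (smaller raw depth) -> bigger circle.
    public static double radiusFor(int rawDepth) {
        // clamp to the assumed calibrated range before mapping
        double d = Math.max(500, Math.min(2000, rawDepth));
        return remap(d, 500, 2000, 30, 4);
    }
}
```

In a Processing sketch you would loop over the grid in draw(), sample the depth array at each cell, and draw an ellipse with that radius – the cloth does the rest.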
If there are other processes you want to model – things like drifting around or bouncing, for example – could you say what those are?
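If you do end up wanting actual springy cloth behaviour on top of the direct mapping, a Verlet-style grid is one minimal way in. This is a plain-Java sketch of the idea under my own assumptions (a fixed grid, pinned edges, made-up stiffness/damping values), not code from any specific library – in a Processing sketch you would call step() from draw() and render each grid point:

```java
// Minimal cloth physics: a grid of points whose out-of-plane displacement z
// is updated Verlet-style -- each point keeps its velocity, is pulled toward
// its four neighbours, and loses a little energy every frame.
public class ClothSim {
    static final int W = 20, H = 20;              // grid resolution
    static double[][] z = new double[H][W];       // current displacement
    static double[][] zPrev = new double[H][W];   // previous frame (velocity)

    // Push the cloth in at grid cell (px, py) by `depth` units
    // (e.g., driven by the Kinect touch point).
    public static void press(int px, int py, double depth) {
        z[py][px] = depth;
    }

    // One simulation step.  stiffness = neighbour coupling, damping in (0,1).
    public static void step(double stiffness, double damping) {
        double[][] zNext = new double[H][W];      // edges stay pinned at 0
        for (int y = 1; y < H - 1; y++) {
            for (int x = 1; x < W - 1; x++) {
                // discrete Laplacian: how far this point lags its neighbours
                double lap = z[y - 1][x] + z[y + 1][x]
                           + z[y][x - 1] + z[y][x + 1] - 4 * z[y][x];
                // Verlet velocity term, scaled down for energy loss
                double vel = (z[y][x] - zPrev[y][x]) * damping;
                zNext[y][x] = z[y][x] + vel + stiffness * lap;
            }
        }
        zPrev = z;
        z = zNext;
    }

    public static double sample(int x, int y) { return z[y][x]; }
}
```

A press ripples outward through the neighbour coupling and dies away under damping, so the cloth visibly rebounds after each touch.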