High resolution video mapping

Hello everyone!

I’m new to digital arts, and I would like to know whether Processing is capable of handling something like four live video streams (2K to 4K) at 60 fps, manipulating them along with other high-resolution videos already on the computer, mapping the whole thing across multiple projectors, and even recording the output, all in a real-time theater performance context that also includes Kinect / video analysis.

If not, are these limitations due to the computer, or to the fact that Processing is based on Java? Should I use openFrameworks instead?

Thank you in advance, and sorry for my poor English!

Best regards,
C. B.


hi! welcome to the forum!

I think it really depends on what you want to do and what equipment you have. Where are the videos coming from: are they attached to the computer via USB capture, or streamed over the network? Are you using external software like MadMapper for the video mapping, or are you planning to build your own? How complex is the mapping: is it just deforming onto a quad plane, or are you projecting onto the Sagrada Família?

And what is the purpose of the Kinect? Are you using it to track bodies / movements to affect the visuals, or is it just for documentation?

I wouldn’t immediately say that Java (Processing) is slow and C++ (openFrameworks) is fast. Nevertheless, in this case you will need to do some low-level programming (most likely with shaders) for better performance, and depending on which libraries you use, openFrameworks may be easier, as there is a greater variety of user-contributed addons. In that sense, commercial tools like TouchDesigner may have even better support. Realistically, in the end I would compromise on resolution, frame rate, and/or something else, based on the time frame and budget of the project.


Thank you Micuat for taking the time to answer me.

At the moment the project is not fully defined, in particular how I plan to connect the GoPro and the Canon 7D to the computer… Anyway!
I took the liberty of asking my question after reading in several places that Processing is less efficient than oF, but since I am just starting out…

I would like to commit to an open-source platform that will not hit limits in video fluidity for this kind of large project. Although everything will be smaller for the first piece (one projector and two cameras), I prefer to learn from the start the platform that will be reliable for larger projects.

Either way, I need to:

  • capture the image in good quality (min. HD) at 60 fps, to be able to achieve slow motion;
  • keep video samples in memory and trigger them later, applying all kinds of effects;
  • generate the mapping with the same system: different quad planes spread across one or several projectors should be fine;
  • move elements captured live or shot in advance from one quad to another (different screen surfaces on stage);
  • cut out an actor on stage, extract him, and follow him in real time: I understood that I need a Kinect for this, but I was wondering whether we could not (just) analyze the incoming video for movement and colors?
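For the quad-plane mapping point in the list above, the core operation is warping normalized image coordinates into an arbitrary quad. Here is a minimal sketch of the underlying bilinear interpolation in plain Java (the class and method names are illustrative, not from any mapping library):

```java
// Bilinear mapping of normalized coordinates (u, v) in [0, 1] into an
// arbitrary quad given by its four corner points. This is the simplest
// form of the math behind quad-plane video mapping.
public class QuadMap {
    // Corners are {x, y} pairs: top-left, top-right, bottom-right, bottom-left.
    public static double[] mapToQuad(double u, double v,
                                     double[] tl, double[] tr,
                                     double[] br, double[] bl) {
        // Interpolate along the top and bottom edges, then between them.
        double topX = tl[0] + (tr[0] - tl[0]) * u;
        double topY = tl[1] + (tr[1] - tl[1]) * u;
        double botX = bl[0] + (br[0] - bl[0]) * u;
        double botY = bl[1] + (br[1] - bl[1]) * u;
        return new double[] { topX + (botX - topX) * v,
                              topY + (botY - topY) * v };
    }

    public static void main(String[] args) {
        // Map the center of the source image into a skewed quad.
        double[] p = mapToQuad(0.5, 0.5,
                               new double[] {100, 100}, new double[] {500, 150},
                               new double[] {520, 400}, new double[] {80, 380});
        System.out.println(p[0] + ", " + p[1]); // prints 300.0, 257.5
    }
}
```

Note that bilinear interpolation is only the simplest quad deform; dedicated mapping tools typically use a projective transform (homography) for perspective-correct results.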

Thank you for your feedback. I apologize for these rookie questions! All of this fascinates me, and even if I end up delegating, I would very much like to know more about this stuff.
My field is music, but programming sound and image is something that inspires me a lot!

If Processing were able to fulfill my wishes while remaining fluid, it would be a great thing; on the other hand, learning oF seems like quite a challenge.

Looking forward to your reply!


P. S. Sorry for the translation.


If you are totally new to programming, then I don’t want to let you down, but you have to keep your expectations low. Features that commercial software has may be extremely challenging to implement in Processing (and also in oF). On the other hand, the beauty of programming is that you can achieve what commercial software can’t with a few lines of code.

I recommend taking a look at the examples from Processing’s “video” library to see what it’s like; you might then see what I mean above.

Interestingly, a year ago I did a similar project, but only with quad mapping and a video buffer. Storing the video buffer in Processing was very complicated. I later figured out that PGraphics somehow stores data not only on the GPU but also in RAM, so that after about a minute of recording it starts to swap; I had to combine it with oF to avoid the RAM caching. This is the kind of thing you are going to run into :slight_smile:
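The general shape of such a recording buffer is a fixed-capacity ring that drops the oldest frame once full, so RAM use stays bounded no matter how long the recording runs. A minimal sketch in plain Java (class name and byte-array frame payloads are my own stand-ins, not the actual project code):

```java
import java.util.ArrayDeque;

// A fixed-capacity frame buffer: once full, the oldest frame is dropped.
// Bounding the buffer keeps memory use predictable, avoiding the kind of
// swapping described above when long recordings grow without limit.
public class FrameRingBuffer {
    private final ArrayDeque<byte[]> frames = new ArrayDeque<>();
    private final int capacity;

    public FrameRingBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void push(byte[] frame) {
        if (frames.size() == capacity) {
            frames.removeFirst(); // drop the oldest frame
        }
        frames.addLast(frame);
    }

    public int size() {
        return frames.size();
    }

    public byte[] oldest() {
        return frames.peekFirst();
    }

    public static void main(String[] args) {
        // At 60 fps, 180 frames is a three-second rolling buffer.
        FrameRingBuffer buf = new FrameRingBuffer(180);
        for (int i = 0; i < 1000; i++) {
            buf.push(new byte[] { (byte) i }); // stand-in for pixel data
        }
        System.out.println(buf.size()); // prints 180
    }
}
```

In a real sketch the payload would be a pixel array or, better, a GPU-side texture handle, so frames never round-trip through Java heap memory at all.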


This is what I did with Processing and oF; the video part starts in the second half.

I just made the repository public, as there’s nothing to hide :slight_smile: but I use my own framework, so I don’t think the code is that useful…



Thank you for sharing this, and for your precious tips!

The Processing examples I’ve found don’t use high resolutions, multiple cameras, or long sequences, so my only option would be to learn Processing just to check whether the sketch will work or not: a big lack of time… Still, Processing is a very elegant solution thanks to its portability and the immediacy of “Run”: beautiful.

In the end, what is Processing’s limit in terms of number of cameras / projectors / fps / resolution?
If we don’t know, I understand switching to oF as an additional guarantee of fluidity, etc.
So it seems I have no choice (despite Processing’s OpenGL acceleration?) but to do it all in oF, right?
I was also wondering why you didn’t build your project (very good performance / realization!) fully in oF if Processing has its limits? I think I missed something…

Maybe there are workarounds to make Processing competitive with oF for my project: compilation / OpenGL acceleration / Processing 4 improvements / codecs, or something?

Last but not least, do you think Processing and/or oF will still be around in a few decades thanks to their open-source nature and the low-level beauty of oF, or do you see one of the two declining over the years? In other words, would oF be a good time investment in the long term if Processing is not enough?

Again, thank you for your time. Have a good day !
All the Best,

Out of curiosity, I tested two cameras (a Logitech HD webcam and an HDMI-USB capture device) at 1920x1080 each, and it runs at 60 fps (although the capture is at 30) on my Ubuntu machine with an i7 / RTX 2070:

/**
 * Getting Started with Capture.
 * Reading and displaying images from two attached Capture devices.
 */

import processing.video.*;

Capture cam0;
Capture cam1;

void setup() {
  size(1920, 2160);

  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, abort");
    exit();
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    printArray(cameras);

    // A camera can be initialized directly using an element
    // from the array returned by list():
    //cam0 = new Capture(this, cameras[0]);
    // Or, the settings can be defined based on the text in the list
    cam0 = new Capture(this, 1920, 1080, "HD Pro Webcam C920", 30);
    cam1 = new Capture(this, 1920, 1080, "USB Capture HDMI: USB Capture H", 30);
    // Start capturing the images from the cameras
    cam0.start();
    cam1.start();

    println("cam0: " + cam0.width + " " + cam0.height);
    println("cam1: " + cam1.width + " " + cam1.height);
  }
}

void draw() {
  if (cam0.available()) {
    cam0.read();
  }
  if (cam1.available()) {
    cam1.read();
  }
  image(cam0, 0, 0);
  image(cam1, 0, 1080);
  text(frameRate, 50, 50);
  // set(0, 0, cam0) does the same as the image() call above, but is
  // faster when just drawing the image without any additional
  // resizing, transformations, or tint.
}
Again, I’d warn you against simply believing that Processing is for beginners and oF for professionals. Perhaps people say so because of Java vs. C++ (where I don’t think there is a significant performance gain), but they are also biased by the nature of each community. I suggest simply trying oF to see if it suits you. There seems to be an addon for a GPU-accelerated decoder, which can be useful for playback.

But if working with addons and low-level (in this case, hardware-related) programming feels too complicated and time-consuming, I would suggest starting with more fun stuff in either Processing or oF (e.g., Kinect interaction), or switching to commercial software like TouchDesigner or vvvv: there are more people from industry working with requirements similar to yours, and they may be able to help you.
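To put the earlier “what are the limits” questions on a concrete footing, raw pixel bandwidth is often a better guide than the choice of language. A back-of-envelope in plain Java (the numbers are my own assumptions: uncompressed RGBA at 4 bytes per pixel, 4K UHD at 60 fps):

```java
// Back-of-envelope bandwidth for uncompressed video streams, to show
// why the bottleneck is data movement, not Java vs. C++.
public class Bandwidth {
    public static long bytesPerSecond(long width, long height,
                                      long bytesPerPixel, long fps) {
        return width * height * bytesPerPixel * fps;
    }

    public static void main(String[] args) {
        long oneStream = bytesPerSecond(3840, 2160, 4, 60);
        // One uncompressed 4K60 RGBA stream is ~2 GB/s; four such
        // streams approach 8 GB/s, which is why GPU-side decoding and
        // texture transfer matter far more than the host language.
        System.out.println(oneStream);     // prints 1990656000
        System.out.println(4 * oneStream); // prints 7962624000
    }
}
```

In practice streams arrive compressed, so the real question is where decoding happens and whether decoded frames can stay on the GPU, which is exactly the context-sharing issue discussed below.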


Note that GStreamer (as used in the Video library) can use GPU decoders. The problem is the lack of OpenGL context sharing in our Java bindings (gst1-java-core). PraxisLIVE has a better-performing integration between GStreamer and Processing, though it still doesn’t have GPU → GPU transfer of video textures. It’s on my radar for gst1-java-core, but I need to find time to work on it.


Cool!! I know MaxD uses PraxisLIVE and his mapping software to do things similar to what the OP is asking for. But again, he’s super knowledgeable, and his workflow is not something I’d recommend as an introduction :slight_smile:


Well, MaxD is one user I sometimes have to ask how he’s achieved something! :smiley: Still, we have some pre-existing nodes (mini-sketches) that do some of what the OP wants to do.
