Alternating Bands of two images

I am trying to display two live video streams as alternating bands in a single window: 10 strips from a live stream and 10 from a delayed stream, with the first 10 placed on the odd strips and the second 10 on the even strips. Is this possible?

Not with Processing alone, but for the sake of argument let’s assume you already have another library which can feed you pixel information from the live stream.

The first step would be to convert that stream of images into PImages, which would most likely require you to copy each incoming pixel into a PImage's pixels[] array (then call updatePixels()).

Once you can convert frames to PImages, the next step is to store enough of them to reach back the number of frames you want your delay to be.
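For that buffer, a fixed-capacity queue works: push the newest frame and drop the oldest once you exceed the delay. Here is a minimal plain-Java sketch of the idea, using int[] pixel arrays as stand-ins for PImages (the FrameDelay class and its names are my own, not Processing API):

```java
import java.util.ArrayDeque;

// Hypothetical helper: keeps the most recent `delayFrames` frames so the
// oldest can be read back as the delayed stream. Plain int[] pixel arrays
// stand in for PImages here.
public class FrameDelay {
    private final ArrayDeque<int[]> buffer = new ArrayDeque<>();
    private final int delayFrames;

    public FrameDelay(int delayFrames) {
        this.delayFrames = delayFrames;
    }

    // Push the current frame and return the frame from `delayFrames` frames
    // ago (or the oldest available frame while the buffer is still filling).
    public int[] next(int[] currentPixels) {
        buffer.addLast(currentPixels);
        int[] delayed = buffer.peekFirst();
        if (buffer.size() > delayFrames) {
            buffer.removeFirst();
        }
        return delayed;
    }
}
```

In a sketch you would call next() once per draw() with the current frame's pixels and render the returned array as the delayed source. Note the RAM cost: each buffered frame is width × height × 4 bytes.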

Now, the final step: rendering. Call the current frame's PImage p1 and the delayed one p2. For every pixel on screen, decide whether it should take p1's pixel at that location or p2's. To do this, I recommend dividing the X (or Y, depending on the direction of the strips) position by the width of each strip, then modding the result by 2. If you get 0, use p1's pixel; otherwise use p2's pixel.
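The divide-then-mod test boils down to one line. A plain-Java sketch, with a hypothetical class and method name (in a sketch, x would be the pixel's screen position):

```java
// Hypothetical helper illustrating the divide-then-mod strip test.
public class StripSelect {
    // Divide the position by the strip width, then mod by 2.
    // Returns true for even strips (take the live frame p1),
    // false for odd strips (take the delayed frame p2).
    static boolean useLiveSource(int x, int stripWidth) {
        return (x / stripWidth) % 2 == 0;
    }
}
```

For example, 10 strips per stream in a 640-pixel-wide window means 20 strips total, so stripWidth would be 640 / 20 = 32.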

The only real issue with this solution is the CPU power required to do it in real time. This kind of task is normally suited to a GPU, but the whole point of Processing is to make graphics accessible without programming the GPU directly. Unless the stream is very low resolution you will get a lot of lag, but yes, it's certainly possible.


It depends what you mean by “live video streams.”

Break the problem into pieces that can be solved separately:

  1. sources: get live video streams from two sources
  2. strips: combine two images (the current frames from your two sources) in strips.
  3. speed: make striping as efficient as possible so that we can do it very fast (high frame rate).
  4. delay: make one stream lag a fixed number of frames (or seconds) behind the other.

1. sources

If you mean local cameras, you can do this with Processing Video, using the Capture object for each source.

If you mean IP cameras / MJPEG streams, use the IPCapture library.

2. strips

Here is a simple example using the Processing built-in get() and a public image.

/**
 * ImageStrips
 * Move the mouse to change the image strip count.
 * Demo shows two different sources cut from a public composite.
 * 2019-04 - Jeremy Douglass - Processing 3.4
 */
PImage img, img2;

void setup() {
  PImage src = loadImage("");  // image URL omitted in the original post
  // chop out the two images we want to compare from our collage source
  img  = src.get(10, 10, src.width/2-20, src.height/2-20);
  img2 = src.get(10, src.height/2+10, src.width/2-20, src.height/2-20);
}

void draw() {
  int stripCount = (int)map(mouseX, 0, width, 2, 10);  // 2-10 strips
  getStrips(img, stripCount, true);
  getStrips(img2, stripCount, false);
}

void getStrips(PImage img, int stripCount, boolean odds) {
  int stripWidth = img.width / stripCount;
  int cursor = 0;
  while (cursor < img.width-1) {
    if (odds) {
      image(img.get(cursor, 0, stripWidth, img.height), cursor, 0);
    }
    cursor += stripWidth;
    odds = !odds;
  }
}

3. speed

@TechEpic gave you some good tips on doing this in one pass per image with a loop over pixels. You should test whether that is measurably faster. You could also try copying all your p1 pixels, then all your p2 pixels, rather than interleaving the read/writes in the loop.
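One way to sketch the blockwise idea in plain Java (class and method names are hypothetical): copy each strip as a whole row segment with System.arraycopy instead of branching per pixel. In a real sketch you would read from p1.pixels and p2.pixels and write into the window's pixels array.

```java
// Hypothetical helper: combine two same-sized frames into alternating
// vertical strips, copying whole row segments with System.arraycopy
// rather than testing every pixel. p1 fills even strips, p2 odd ones.
public class StripMix {
    static int[] interleaveStrips(int[] p1, int[] p2,
                                  int w, int h, int stripWidth) {
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x += stripWidth) {
                int len = Math.min(stripWidth, w - x);  // last strip may be narrow
                int[] src = (x / stripWidth) % 2 == 0 ? p1 : p2;
                System.arraycopy(src, y * w + x, out, y * w + x, len);
            }
        }
        return out;
    }
}
```

Whether this beats a per-pixel loop or the get()-based version above is worth benchmarking on your actual resolution; arraycopy tends to win because it avoids a branch and a bounds check per pixel.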

4. delay

If this is HD video and you need a long delay time, then I'd recommend a dedicated external hardware solution. If the delay is long enough, you can also use a DVR approach: record the stream to disk (possibly on another system), then restream it to your sketch. In either case, as far as your sketch is concerned, it is mixing two live sources. If it is just a short delay, then you can keep a frame buffer in memory (as described in a previous answer), but this requires RAM and could be a performance hit.
