Video Pixels Changing While I Process Them

I’m using the Video 2.0 library with Processing 3.5.4 on Windows 10. I start capturing from my laptop’s camera and just XOR every pixel against 0xFFFFFF. The output should look like this:

[screenshot: fully inverted camera image]

But every so often, it looks like this:

[screenshot: inverted image with bands of uninverted video]

The glitch lasts for a single video frame, then goes away until the next time it happens. It seems to occur at random, about 15 to 20 times a minute. The affected region of the picture changes each time, but it always looks more or less like the picture above (yes, it took me quite a while before I got lucky and grabbed a frame with the problem in it :slight_smile: ).

My code is below. Can anyone tell me what is causing this?

import processing.video.*;

Capture video;

void setup()
{
  size(1280, 720);
  
  video = new Capture(this, 1280, 720, "pipeline: autovideosrc");
  
  video.start();
}

void draw()
{
  if (video.available())
  {
    video.read();
    video.loadPixels();

    // Invert each pixel by flipping its RGB bits
    for (int i = 0; i < video.pixels.length; ++i)
    {
      video.pixels[i] = video.pixels[i] ^ 0xFFFFFF;
    }
    
    video.updatePixels();
  
    image(video, 0, 0);
  }
}

I suspect there is a race condition between you inverting the pixels and the camera writing the next frame’s worth of pixels into the camera’s pixel array. You are flipping them in order, but the camera is writing the next frame’s unflipped pixels into the same array at the same time.

I suggest you do not flip the pixels in the video’s pixel array - instead, draw the video’s image to the screen, and then flip the screen’s pixels:

import processing.video.*;

Capture video;

void setup()
{
  size(1280, 720);
  
  video = new Capture(this, 1280, 720, "pipeline: autovideosrc");
  
  video.start();
}

void draw()
{
  if (video.available())
  {
    video.read();
    image(video, 0, 0);

    loadPixels();

    // Flip the canvas's pixels instead of the camera's buffer
    for (int i = 0; i < pixels.length; i++)
    {
      pixels[i] = pixels[i] ^ 0xFFFFFF;
    }
    updatePixels();
  }
}

Untested code - YMMV!


You may also consider just using

filter(INVERT);
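
That is, draw the frame first and then invert the whole canvas - untested, but something like:

import processing.video.*;

Capture video;

void setup()
{
  size(1280, 720);

  video = new Capture(this, 1280, 720, "pipeline: autovideosrc");
  video.start();
}

void draw()
{
  if (video.available())
  {
    video.read();
    image(video, 0, 0);  // draw the raw frame to the canvas
    filter(INVERT);      // invert the canvas, not the camera's buffer
  }
}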

That would explain it, though I would have thought video.read() captured a fixed set of values. Interestingly, if you comment out the video.loadPixels() and video.updatePixels() calls, nothing changes: the program still runs the same, and the video.pixels array is constantly updated. So you may be right that something about how the camera accesses and modifies that array is unsynchronized with my own changes. Making a defensive copy, to the screen or some other buffer, would probably work, although it seems like a lot of overhead for a step that wouldn’t be necessary if loadPixels() and updatePixels() were doing their jobs properly.
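
For the “some other buffer” variant, I’m imagining something like this inside draw(), after video.read() (untested):

// Untested sketch: snapshot the frame, then XOR our own copy
PImage snapshot = video.get();  // get() returns a copy of the frame

snapshot.loadPixels();

for (int i = 0; i < snapshot.pixels.length; ++i)
{
  snapshot.pixels[i] ^= 0xFFFFFF;
}

snapshot.updatePixels();
image(snapshot, 0, 0);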

I’ll give your suggestion a try and post back what I find out.


Hi

Try using frameRate(). The default rate is 60 frames per second, and a canvas this big, size(1280, 720), takes more time to process.
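
For example, in setup() (30 is just an example value):

void setup()
{
  size(1280, 720);
  frameRate(30);  // limit draw() to 30 calls per second instead of the default 60

  video = new Capture(this, 1280, 720, "pipeline: autovideosrc");
  video.start();
}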

That wouldn’t affect the frame rate of the camera, only the frequency of the calls to draw(). If TfGuy44 is correct, the camera is overwriting its own buffer on another thread. The striped sections of unprocessed video may well be the camera’s most recent overwrite (in which case I am guessing the uneven distribution of “fresh” video is due to cache-coherence issues between unsynchronized threads).

The approach he recommends (effectively, making a defensive copy before doing any processing) is working, at least to the extent that I don’t see the overwritten data appearing anymore. I suspect the overwrites are still there, but between any two frames from the camera, they are probably close enough to being the same image that it doesn’t matter.

If video.loadPixels() were working, none of this would be happening. But it appears the current version of the library has the camera writing directly to the pixels buffer on each frame. What’s still baffling is that the video doesn’t show up unless I call video.read(). Again, I would have thought that worked on a frame-by-frame basis, which would still protect the pixels buffer from a race condition.

This is really a puzzler.

UPDATE: Ah, you meant Capture.frameRate. Yeah, I’ll try that and see what it does.

I sure could, but I’m only using the XOR to make a quick change to every pixel so I can debug this.

Hi

I have run into a similar issue. I made a 3D scanner using Processing, and one of the problems was that while processing a captured image, the code would sometimes analyze part of the image in a random pattern. I added a delay in the processing function and a frameRate() call in setup(), and then it worked.

Note that @TfGuy44 is an expert programmer; try his explanation.

I think I can partially confirm this. If you use the captureEvent() method to detect a new frame (instead of polling the available() method), it can run in the middle of your draw() method, on a different thread. Having draw() and captureEvent() print their thread IDs shows they are on different threads. So synchronizing those methods makes the problem go away. You can either mark both methods “synchronized” or use a volatile flag. Here’s my implementation of the second approach:

import processing.video.*;

Capture cam;

void setup()
{
  size(1280, 720);
  
  cam = new Capture(this, 1280, 720, "pipeline: autovideosrc");
  cam.start();
}

// Set while draw() is working with cam.pixels; checked by the capture thread
volatile boolean drawing;

void draw()
{
  drawing = true;
  
  // Paint the whole frame solid green; any concurrent camera write will show through
  for (int i = 0; i < cam.pixels.length; ++i)
  {
    cam.pixels[i] = 0x00FF00;
  }
  
  image(cam, 0, 0);

  drawing = false;
}

void captureEvent(Capture video)
{
  if (!drawing)
  {
    cam.read();
  }
  else
  {
    println("Blocked!", millis());
  }
}

The captureEvent() method reports that it is blocked every half-second or so, but the video remains a solid green. Taking out the volatile flag results in the video tearing every so often and showing bands of the camera’s field of view (same problem I originally had).
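
For reference, the first approach (marking both methods synchronized, so they take turns holding the sketch’s lock) would look something like this:

import processing.video.*;

Capture cam;

void setup()
{
  size(1280, 720);

  cam = new Capture(this, 1280, 720, "pipeline: autovideosrc");
  cam.start();
}

// synchronized serializes draw() and captureEvent() on the sketch object's
// monitor, so read() can't replace the pixels mid-draw
synchronized void draw()
{
  image(cam, 0, 0);
}

synchronized void captureEvent(Capture video)
{
  video.read();
}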

This doesn’t explain why polling the available() method still has this problem, though. My call from within my draw() method to the camera’s read() method is single-threaded. Looking at the read() source, it doesn’t start any threads of its own. Something else must be running on another thread that is still changing the pixels buffer, but it doesn’t do that when I use the synchronized captureEvent() method instead of polling available().
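
If anyone wants to verify the threading for themselves, printing the current thread’s name from both methods is enough, e.g.:

import processing.video.*;

Capture cam;

void setup()
{
  size(1280, 720);

  cam = new Capture(this, 1280, 720, "pipeline: autovideosrc");
  cam.start();
}

void draw()
{
  // Runs on the sketch's animation thread
  println("draw: " + Thread.currentThread().getName());
  image(cam, 0, 0);
}

void captureEvent(Capture video)
{
  // Runs on a different, library-owned thread (per the thread IDs above)
  println("captureEvent: " + Thread.currentThread().getName());
  video.read();
}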

Before I write this up as a bug, does anyone know of an alternative to the standard video library for capturing video? It looks like no real work has been done to the standard library for about two years. Is there anything else people like for capturing video?

/**
 * Synchronized Pixels Capture (1.0.0)
 * GoToLoop (2021-Oct-05)
 * Discourse.Processing.org/t/video-pixels-changing-while-i-process-them/32589/10
 */

import processing.video.Capture;

Capture cam;
PImage frame;

void setup() {
  size(640, 480);
  frame = get();

  cam = new Capture(this, width, height, "pipeline: autovideosrc");
  cam.start();
}

void draw() {
  surface.setTitle("Frame: " + frameCount + "   FPS: " + round(frameRate));

  synchronized (frame) {
    background(frame);
  }
}

void captureEvent(final Capture c) {
  c.read();

  final color[] pix = frame.pixels;
  final int len = pix.length;

  c.loadPixels();

  // Hold the same lock draw() uses, so the copy and the XOR can't be torn
  synchronized (frame) {
    arrayCopy(c.pixels, pix);

    // XOR only the RGB bits; ~PImage.ALPHA_MASK == 0x00FFFFFF
    for (int i = 0; i < len; pix[i++] ^= ~PImage.ALPHA_MASK);

    frame.updatePixels();
  }
}

That depends on whether you can cope with using Processing in a different environment. If you can: PraxisLIVE. We actually maintain the underlying GStreamer Java integration that the Video library uses too - it’s just that our integration between that and Processing works better! :smile:

I looked at your link, but the site doesn’t actually say what PraxisLIVE is. Is it an IDE, something I would use instead of Processing? I’m open to options. I teach media programming for a university. Some of my students get along with Processing pretty well, but others find it baffling. I’m okay with it to a point, but the way it tries to simplify Java creates issues of its own.

It’s an IDE and a runtime. The IDE is hybrid visual (nodes and code), where in video pipelines each node is effectively a live-codeable Processing sketch. Your best bet might be to watch a little of the video of my Write Now, Run Anytime talk on the page here, from about 11’20".

That looks very promising. Might be a good fit for my media students. Is there a tutorial series or other point-of-entry you’d recommend? For context, I’m an advanced coder with decades of experience in graphics and sound, and a lot of time spent with electronics. I’d be looking for something that helped me learn the unique features of PraxisLIVE, rather than a how-to-program-a-computer sort of thing.

Thanks for suggesting this.


See how you get on with https://docs.praxislive.org/ - although more tutorials are needed! Feel free to message me here or follow up on our mailing list / chat. The IDE is built on top of Apache NetBeans (which I’m also involved in), so it inherits quite a lot of useful IDE features from there.

Well, I’m rather biased - it’s my baby! :laughing: But having spent a lot of time working with GStreamer and Processing, I find the current state of the Video library quite frustrating.

I’m with you there. I used it for one hour before discovering it had the problem we’ve been addressing here, and that the tutorial is obsolete. And there is simply no way I am going to be able to help my computational media students (many of whom are using Processing to write their first computer programs) understand multi-threaded synchronization.

See you over there.